New threat models in the face of British intelligence and the Five Eyes’ new end-to-end encryption interception strategy

As more and more services and messaging applications implement end-to-end encryption, law enforcement organisations and intelligence agencies have become increasingly concerned about the prospect of “going dark”: the situation where law enforcement has the legal right to access a communication (e.g. through a warrant) but lacks the technical capability to do so, because the communication is end-to-end encrypted.

Earlier proposals from politicians took the approach of outright banning end-to-end encryption, which was met with fierce criticism by experts and the tech industry. The intelligence community has been slightly more nuanced, promoting protocols that allow for key escrow, where messages would also be encrypted under an additional key (e.g. one controlled by the government). Such protocols have been promoted by intelligence agencies as recently as 2016 and as early as the 1990s, but were also met with fierce criticism.

More recently, a new set of legislation in the UK, statements from the Five Eyes and proposals from intelligence officials have put forward a “different” way of defeating end-to-end encryption, one that is akin to key escrow but is enabled on a “per-warrant” basis rather than by default. Let’s look at how this may affect threat models in applications that use end-to-end encryption in the future.

Legislation

On the 31st of August 2018, the governments of the United States, the United Kingdom, Canada, Australia and New Zealand (collectively known as the “Five Eyes”) released a “Statement of Principles on Access to Evidence and Encryption”, where they outlined their position on encryption.

The statement says:

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards.

The statement goes on to set out that technology companies have a mutual responsibility with government authorities to enable this process. At the end of the statement, it describes how technology companies should provide government authorities with access to private information:

The Governments of the Five Eyes encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services that they create or operate in our countries. Governments should not favor a particular technology; instead, providers may create customized solutions, tailored to their individual system architectures that are capable of meeting lawful access requirements. Such solutions can be a constructive approach to current challenges.

Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.

Their position effectively boils down to requiring technology companies to provide a technical means of fulfilling court warrants that require them to hand over the private data of certain individuals, while leaving the choice of implementation to the technology company.

Continue reading New threat models in the face of British intelligence and the Five Eyes’ new end-to-end encryption interception strategy

Improving the auditability of access to data requests

Data is increasingly collected and shared, with potential benefits for both individuals and society as a whole, but people cannot always be confident that their data will be shared and used appropriately. Decisions made with the help of sensitive data can greatly affect lives, so there is a need for ways to hold data processors accountable. This requires not only ways to audit these data processors, but also ways to verify that the reported results of an audit are accurate, while protecting the privacy of individuals whose data is involved.

We (Alexander Hicks, Vasilios Mavroudis, Mustafa Al-Bassam, Sarah Meiklejohn and Steven Murdoch) present a system, VAMS, that allows individuals to check accesses to their sensitive personal data, enables auditors to detect violations of policy, and allows publicly verifiable and privacy-preserving statistics to be published. VAMS has been implemented twice: as a permissioned distributed ledger using Hyperledger Fabric, and as a verifiable log-backed map using Trillian. The paper and the code are available.
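To give a flavour of what “publicly verifiable” means here, below is a minimal sketch of a hash-chained, append-only log of access requests. It is not VAMS itself (which builds on Hyperledger Fabric and Trillian; the field names are invented for the example): the point is only that altering any recorded request changes every subsequent digest, so an auditor who holds the latest digest can detect tampering.

```python
import hashlib
import json

def append_request(log, request):
    """Append an access request, chaining it to the previous entry's digest."""
    prev_digest = log[-1]["digest"] if log else "0" * 64
    body = json.dumps(request, sort_keys=True)
    digest = hashlib.sha256((prev_digest + body).encode()).hexdigest()
    log.append({"request": request, "digest": digest})

def verify(log):
    """Recompute the whole chain; fails if any entry was altered or removed."""
    prev_digest = "0" * 64
    for entry in log:
        body = json.dumps(entry["request"], sort_keys=True)
        expected = hashlib.sha256((prev_digest + body).encode()).hexdigest()
        if expected != entry["digest"]:
            return False
        prev_digest = entry["digest"]
    return True

log = []
append_request(log, {"requester": "police-unit-7", "data": "comms-record-123", "warrant": "W-42"})
append_request(log, {"requester": "gp-surgery-3", "data": "medical-record-789", "purpose": "treatment"})
assert verify(log)
```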

Use cases and setting

Our work is motivated by two scenarios: controlling the access of law-enforcement personnel to communication records and controlling the access of healthcare professionals to medical data.

The UK Home Office states that 95% of serious and organised criminal cases make use of communications data. Annual reports published by the IOCCO (now under the IPCO name) provide some information about the request and use of communications data. There were over 750,000 requests for data in 2016, a portion of which were audited to provide the usage statistics and errors that can be found in the published report.

Not only is it important that requests are auditable; the requested data can also be used as evidence in legal proceedings. In that case, it is necessary either to ensure the integrity of the data or to rely on representatives of data providers and expert witnesses, the latter being more expensive and requiring trust in third parties.

In the healthcare case, individuals usually consent for their GP, or any medical professional they interact with, to have access to relevant medical records, but may have concerns about the way their information is then used or shared. The NHS regularly shares data with researchers or companies like DeepMind, sometimes in ways that may undermine individuals’ trust, despite the potential benefits to healthcare.

Continue reading Improving the auditability of access to data requests

What the CIA hack and leak teaches us about the bankruptcy of current “Cyber” doctrines

Wikileaks just published a trove of documents resulting from a hack of the CIA Engineering Development Group, the part of the spying agency that is in charge of developing hacking tools. The documents seem genuine and catalog, among other things, a number of exploits against widely deployed commodity devices and systems, including Android, iPhone, OS X and Windows. Also smart TVs. This hack, with appropriate background, teaches us a lesson or two about the direction of public policy related to “cyber” in the US and the UK.

Routine proliferation of weaponry and tactics

The CIA hack is in many ways extraordinary, in that it allowed the attackers to gain access to the source code of the hacking tools of the agency – an extraordinary act of proliferation of attack technologies. In other ways, it is mundane in that it is neither the first, nor probably the last hack or leak of catastrophic proportions to occur to a US/UK government department in charge of offensive cyber operations.

This list of leaks of government attack technologies illustrates that, when it comes to cyber-weaponry, the risk of proliferation is not merely theoretical but very real. In fact, it seems to be happening all the time.

I find it particularly amusing – and those in charge of those agencies should probably find it embarrassing – that the NSA and GCHQ go around presenting themselves as national technical authorities in assurance; they provide advice to others on how not to get hacked; they keep asserting that they can be trusted to operate extremely dangerous spying infrastructures and to handle, in secret, extremely dangerous zero-day exploits. Yet they seem to be routinely hacked and have their secret documents leaked. Instead of chasing whistleblowers and journalists, policy makers should probably take note that there is not a high enough level of assurance to secure cyber-weaponry, and it is certainly not to be found within those agencies.

In fact the risk of proliferation is at the very heart of cyber attack, and integral to it, even without hacking or leaking from inside government. Many of us quietly laughed at the bureaucratic nightmare discussed in the recent CIA leak, describing the difficulty of classifying cyber attack techniques while at the same time deploying them on target systems. As the press release summarizes:

To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet. If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet. Consequently the CIA has secretly made most of its cyber spying/war code unclassified.

This illustrates very clearly a key dynamic in hacking: once a hacker uses an exploit against an adversary system, there is a very real risk the exploit is captured by the monitoring and intrusion detection systems of the target, and then weaponized to hack other computers at low cost. This is very well established and researched, and such “honey pot” infrastructures have been used in the academic and commercial community for some time to detect and study potentially new attacks. This is not the preserve of sophisticated defenders: the explanation of how honeypots work is on Wikipedia! The Flame malware, and Stuxnet before it, were in fact found in the wild.

In that respect cyber-war is not like war at all. The weapons you use will be turned against you immediately, and your effective use of weapons relies on your very own infrastructures being utterly vulnerable to them.

What “Cyber” doctrine?

The constant leaks and hacks, leading to proliferation of exploits and hacking tools from the heart of government, as well as through operations, should deeply inform policy makers when making choices about “cyber” doctrines. First, it is probably time to ditch the awkward term “Cyber”.

Continue reading What the CIA hack and leak teaches us about the bankruptcy of current “Cyber” doctrines

Microsoft Ireland: winning the battle for privacy but losing the war

On Thursday, Microsoft won an important federal appeals court case against the US government. The case centres on a warrant issued in December 2013, requiring Microsoft to disclose emails and other records for a particular msn.com email address related to a narcotics investigation. It transpired that these emails were stored in a Microsoft datacenter in Ireland, but the US government argued that, since Microsoft is a US company and can easily copy the data into the US, a US warrant would suffice. Microsoft argued that the proper way for the US government to obtain the data is through the Mutual Legal Assistance Treaty (MLAT) between the US and Ireland, where an Irish court would decide, according to Irish law, whether the data should be handed over to US authorities. Part of the US government’s objection to this approach was that the MLAT process is sometimes very slow, although the Irish government has committed to consider any such request “expeditiously”.

The appeal court decision is an important victory for Microsoft (following two lower courts ruling against them) because they sell their European datacenters as giving their European customers confidence that their data will be subject to the more stringent European privacy laws. Microsoft’s case was understandably supported by other technology companies in the same position, as well as civil liberties organisations such as the Electronic Frontier Foundation in the US and the Open Rights Group in the UK. However, I have mixed opinions about the outcome: while probably the right decision in this case, the wider consequences could be detrimental to privacy.

Both sides of the case wanted to set a precedent (if not legally, at least in practice). The US government wanted US law to apply to data held by US companies, wherever in the world the data resides. Microsoft wanted the location of the data to imply which legal regime applied, and so their customers could be confident that their own country’s laws will be respected, provided Microsoft have a datacenter in their own country (or at least one with compatible laws). My concern is that this ruling will give false assurance to customers of US companies, because in other circumstances a different decision could quite easily be taken.

We know about this case because Microsoft chose to challenge it in court, and were able to do so. This is the first time Microsoft has challenged a US warrant for data stored in their Irish datacenter, despite the datacenter having been in operation for three years prior to the case. Had the email address been associated with a more serious crime, or the demand for emails been accompanied by a gagging order, it might not have been challenged. Microsoft and other technology companies may still choose to accept, or may even be forced to accept, the applicability of future US warrants to data they control, regardless of the court decision last week. One extreme way to compel compliance would be for the US to jail employees until its demands are met.

For this reason, I have argued that control over data is more important than where data resides. If a company does not have the technical capability to comply with an order, it is easier for them to defend their case, and so protects both the company’s customers and staff. Microsoft have taken precisely this approach for their new German datacenters, which will be operated by staff in Germany working for a German “data trustee” (Deutsche Telekom). In contrast to their Irish datacenter, Microsoft staff will be unable to access customer data, except with the permission of and oversight from the data trustee.

While the data trustee model resists information being obtained through improper legal means, a malicious employee could still break rules for personal gain, or the systems designed to process legal requests could be hacked into. With modern security techniques it is possible to do better. End-to-end encryption for instant messaging is one such example, because (if designed properly) the communications provider does not have access to messages they carry. A more sophisticated approach is “distributed consensus”, where a decision is only taken if a majority of participants agree. The consensus process is automated and enforced through cryptography, ensuring that rules are respected even if some participants are malicious. Critical decisions in the Tor network and in Bitcoin are taken this way. More generally, there is a growing recognition that purely legal or procedural mechanisms are insufficient to protect privacy. This is one of the common threads present in much of the research presented at the Privacy Enhancing Technologies Symposium, being held this week in Darmstadt: recognising that there will always be imperfections in software, people and procedures and showing that nevertheless individual’s privacy can still be protected.
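To make the distributed-consensus idea concrete, here is a deliberately simplified sketch. It is not how Tor directory authorities or Bitcoin actually reach consensus (both use cryptographic protocols, and the validator logic below is invented for the example); it only shows the core property that a minority of malicious participants cannot force a decision through.

```python
def decide(action, participants, threshold=None):
    """Approve an action only if a majority of participants agree.

    Each participant independently checks the action against its own
    copy of the rules, so a compromised minority cannot override them.
    """
    if threshold is None:
        threshold = len(participants) // 2 + 1
    approvals = sum(1 for check in participants if check(action))
    return approvals >= threshold

# Three independent validators, one of which has been compromised.
honest = lambda action: action.get("warrant_verified", False)
compromised = lambda action: True
validators = [honest, honest, compromised]

print(decide({"warrant_verified": False}, validators))  # False: majority refuses
print(decide({"warrant_verified": True}, validators))   # True: rules satisfied
```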

Exceptional access provisions in the Investigatory Powers Bill

The Investigatory Powers Bill, being debated in Parliament this week, proposes the first wide-scale update in 15 years to the surveillance powers of the UK law-enforcement and intelligence agencies.

The Bill has several goals: to consolidate some existing surveillance powers currently either scattered throughout other legislation or not even publicly disclosed, to create a wide range of new surveillance powers, and to change the process of authorisation and oversight surrounding the use of surveillance powers. The Bill is complex and, at 245 pages long, makes scrutiny challenging.

The Bill has had its first and second readings in the House of Commons, and has been examined by relevant committees in the Commons. The Bill will now be debated in the ‘report stage’, where MPs will have the chance to propose amendments following committee scrutiny. After this it will progress to a third reading, and then to the House of Lords for further debate, followed by final agreement by both Houses.

So far, four committee reports have been published examining the draft Bill, from the Intelligence and Security Committee of Parliament, the joint House of Lords/House of Commons committee specifically set up to examine the draft Bill, the House of Commons Science and Technology committee (to which I served as technical advisor) and the Joint Committee on Human Rights.

These committees were faced with the difficult task of meeting an accelerated timetable for the Bill, with the government aiming to have it become law by the end of 2016. The reason for the haste is that the Bill would re-instate and extend the ability of the government to compel companies to collect data about their users, even without there being any suspicion of wrongdoing, known as “data retention”. This power was previously set out in the EU Data Retention Directive, but in 2014 the European Court of Justice found it to be unlawful.

Emergency legislation passed to temporarily permit the government to continue their activities will expire in December 2016 (but may be repealed earlier if an appeal to the European Court of Justice succeeds).

The four committees that examined the Bill together made 130 recommendations, but the government has changed the Bill only slightly since the draft was published, and the Public Bill committee accepted only a few minor amendments.

Many questions remain about whether the powers granted by the Bill are justifiable and subject to adequate oversight, but where insights from computer security research are particularly relevant is on the powers to grant law enforcement the ability to bypass normal security mechanisms, sometimes termed “exceptional access”.

Continue reading Exceptional access provisions in the Investigatory Powers Bill

Insecure by design: protocols for encrypted phone calls

The MIKEY-SAKKE protocol is being promoted by the UK government as a better way to secure phone calls. The reality is that MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance, through the introduction of a backdoor based on mandatory key escrow. This weakness has implications which go further than just the security of phone calls.

The current state of security for phone calls leaves a lot to be desired. Land-line calls are almost entirely unencrypted, and cellphone calls are also unencrypted except for the radio link between the handset and the phone network. While the latest cryptography standards for cellphones (3G and 4G) are reasonably strong, it is possible to force a phone to fall back to older standards with easy-to-break cryptography, if any. The vast majority of phones will not reveal to their user whether such an attack is under way.

The only reason that eavesdropping on land-line calls is not commonplace is that getting access to the closed phone networks is not as easy as getting access to the more open Internet, and cellphone cryptography designers relied on the equipment necessary to intercept the radio link being affordable only by well-funded government intelligence agencies, not by criminals or those engaged in corporate espionage. That might have been true in the past, but it is certainly no longer the case, with the necessary equipment now available for $1,500. Governments, companies and individuals are increasingly looking for better security.

A second driver for better phone call encryption is the convergence of Internet and phone networks. The LTE (Long-Term Evolution) 4G cellphone standard – under development by the 3rd Generation Partnership Project (3GPP) – carries voice calls over IP packets, and desktop phones in companies are increasingly carrying voice over IP (VoIP) too. Because voice calls may travel over the Internet, whatever security was offered by the closed phone networks is gone and so other security mechanisms are needed.

Like Internet data encryption, voice encryption can broadly be categorised as either link encryption, where each intermediary may encrypt data before passing it onto the next, or end-to-end encryption, where communications are encrypted such that only the legitimate end-points can have access to the unencrypted communication. End-to-end encryption is preferable for security because it avoids intermediaries being able to eavesdrop on communications and gives the end-points assurance that communications will indeed be encrypted all the way to their other communication partner.
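The difference is easiest to see in code. The toy sketch below assumes a symmetric key already shared between the two endpoints (real systems establish one with a key-agreement protocol): under end-to-end encryption the intermediary can only forward opaque ciphertext, whereas a link-encrypting intermediary would decrypt and re-encrypt at each hop, seeing the plaintext in between.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# End-to-end: only the two endpoints ever hold the key.
e2e_key = Fernet.generate_key()
caller, callee = Fernet(e2e_key), Fernet(e2e_key)

ciphertext = caller.encrypt(b"voice frame 0001")

def intermediary(blob):
    # A link-encrypting node would hold its own key, decrypt the blob here,
    # and re-encrypt it for the next hop, gaining access to the plaintext.
    # An end-to-end intermediary can only forward the opaque ciphertext.
    return blob

assert callee.decrypt(intermediary(ciphertext)) == b"voice frame 0001"
```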

Current cellphone encryption standards are link encryption: the phone encrypts calls between it and the phone network using cryptographic keys stored on the Subscriber Identity Module (SIM). Within the phone network, encryption may also be present, but the network provider still has access to unencrypted data, so even ignoring the vulnerability to fall-back attacks on the radio link, the network providers and their suppliers are weak points that are tempting for attackers to compromise. Recent examples of such attacks include the compromise of the phone networks of Vodafone in Greece (2004) and Belgacom in Belgium (2012), and the SIM card supplier Gemalto in France (2010). The identity of the Vodafone Greece hacker remains unknown (though the NSA is suspected), but the attacks against Belgacom and Gemalto were carried out by the UK signals intelligence agency – GCHQ – and only publicly revealed through the Snowden leaks, so it is quite possible there are other attacks which remain hidden.

Email is typically only secured by link encryption, if at all, with HTTPS encrypting access to most webmail and Transport Layer Security (TLS) sometimes encrypting other communication protocols that carry email (SMTP, IMAP and POP). Again, the fact that intermediaries have access to plaintext creates a vulnerability, as demonstrated by the 2009 hack of Google’s Gmail likely originating from China. End-to-end email encryption is possible using the OpenPGP or S/MIME protocols but their use is not common, primarily due to their poor usability, which in turn is at least partially a result of having to stay compatible with older insecure email standards.

In contrast, instant messaging applications had more opportunity to start with a clean slate (because there is no expectation of compatibility among different networks) and so this is where much innovation in terms of end-to-end security has taken place. Secure voice communication, however, has had less attention than instant messaging, so in the remainder of this article we shall examine what should be expected of a secure voice communication system, and in particular see how one of the latest and upcoming protocols, MIKEY-SAKKE, which comes with UK government backing, meets these criteria.

MIKEY-SAKKE and Secure Chorus

MIKEY-SAKKE is the security protocol behind the Secure Chorus voice (and also video) encryption standard, commissioned and designed by GCHQ through their information security arm, CESG. GCHQ have announced that they will only certify voice encryption products through their Commercial Product Assurance (CPA) security evaluation scheme if the product implements MIKEY-SAKKE and Secure Chorus. As a result, MIKEY-SAKKE has a monopoly over the vast majority of classified UK government voice communication, and so companies developing secure voice communication systems must implement it in order to gain access to this market. GCHQ can also set requirements for which products are used in the public sector, as well as by companies operating critical national infrastructure.

UK government standards are also influential in guiding purchase decisions outside of government, and we are already seeing MIKEY-SAKKE marketed commercially as “government-grade security”, capitalising on its approval for use in the UK government. For this reason, and also because GCHQ have provided implementers with a free open source library to make it easier and cheaper to deploy Secure Chorus, we can expect wide use of MIKEY-SAKKE in industry and possibly among the public. It is therefore important to consider whether MIKEY-SAKKE is appropriate for wide-scale use. For the reasons outlined in the remainder of this article, the answer is no – MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance through key escrow, not to provide effective security.
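The escrow property is structural. In identity-based schemes such as SAKKE, each user’s private key is derived by the network operator’s key management server (KMS) from a master secret, so whoever holds the master secret can recreate any user’s key at any time. The toy derivation below (a plain HMAC standing in for the actual pairing-based SAKKE mathematics, with invented identities) illustrates that property only, not the real protocol:

```python
import hashlib
import hmac

# Held by the network operator's key management server (KMS).
MASTER_SECRET = b"kms master secret"

def private_key_for(identity):
    """Derive a user's private key from their public identity.

    The user needs the KMS to compute this once, but the KMS can
    recompute it, and hence decrypt that user's calls, forever after:
    escrow is built into the architecture rather than bolted on.
    """
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

issued_to_alice = private_key_for("tel:+441234567890")  # sent to Alice's phone
recomputed_by_kms = private_key_for("tel:+441234567890")
assert issued_to_alice == recomputed_by_kms
```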

Continue reading Insecure by design: protocols for encrypted phone calls

ACE-CSR opening event 2015/16: talks on malware, location privacy and wiretap law

The opening event for the UCL Academic Centre of Excellence for Cyber Security Research in the 2015–2016 academic term featured three speakers: Earl Barr, whose work on approximating program equivalence has won several ACM distinguished paper awards; Mirco Musolesi from the Department of Geography, whose background includes a degree in computer science and an interest in analysing myriad types of data while protecting privacy; and Susan Landau, a professor at Worcester Polytechnic Institute, a visiting professor at UCL, and an expert on cyber security policy whose books include Privacy On the Line: the Politics of Wiretapping and Encryption (with Whitfield Diffie) and Surveillance or Security? The Risks Posed by New Wiretapping Technologies.

Detecting malware and IP theft through program similarity

Earl Barr is a member of the software systems engineering group and the Centre for Research on Evolution, Search, and Testing. His talk outlined his work using program similarity to determine whether two arbitrary programs have the same behaviour in two areas relevant to cyber security: malware and intellectual property theft in binaries (that is, code reused in violation of its licence).

Barr began by outlining his work on detecting malware, comparing the problem to that facing airport security personnel trying to find a terrorist among millions of passengers. The work begins with profiling: collect two zoos, one of malware and one of benign programs, and then ask whether the program under consideration is more likely to belong to the benign zoo or the malware zoo.

Rather than study the structure of the binary, Barr works by viewing the program as strings of 0s and 1s, which may not coincide with the program’s instructions, and using information theory to create a measure of dissimilarity, the normalised compression distance (NCD). The NCD approximates a distance based on Kolmogorov complexity, a mathematical measure of the length of the shortest description of an object, by substituting a real compression algorithm for the uncomputable ideal one; compression also has the advantage of ignoring the details of the instruction set architecture for which the binary is written.
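The NCD is simple to compute given any real compressor; the sketch below uses zlib as a stand-in for whichever compressor was used in the actual work, with synthetic byte strings in place of real binaries:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: near 0 for similar inputs, near 1 for unrelated ones."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

prog_a = b"\x55\x48\x89\xe5\x90" * 200                  # stand-in for one binary
prog_b = b"\x55\x48\x89\xe5\x90" * 190 + b"\xcc" * 50   # mostly the same content
prog_c = bytes(range(256)) * 4                          # unrelated content

print(ncd(prog_a, prog_b))  # small: the two programs share most of their content
print(ncd(prog_a, prog_c))  # larger: little shared structure to exploit
```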

Using these techniques to analyse a malware zoo collected from sources such as Virus Watch, Barr was able to achieve a 95.7% accuracy rate. He believes that although this technique isn’t suitable for contemporary desktop anti-virus software, it opens a new front in the malware detection arms race. Still, Barr is aware that malware writers will rapidly develop countermeasures and his group is already investigating counter-countermeasures.

Malware writers have three avenues for blocking detection: injecting new content that looks benign; encryption; and obfuscation. Adding new content threatens the malware’s viability: raising the NCD by 50% requires doubling the size of the malware. Encryption can be used against the malware writer: applying a language model across the program reveals a distinctive saw-toothed pattern of regions with low surprise and low entropy alternating with regions of high surprise and high entropy (that is, regions with ciphertext). Obfuscation is still under study: the group is using three obfuscation engines available for Java and applying them repeatedly to Java malware. Measuring the NCD after each application shows that after 100 iterations the NCD approaches 1 (that is, the two items being compared are dissimilar), but that two of the three engines make errors after 200 applications. Unfortunately for malware writers, this technique also causes the program to grow in size. The cost of obfuscation to malware writers may therefore be greater than that imposed upon white hats.
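The observation behind the encrypted-region detection is easy to reproduce with byte-level Shannon entropy (a cruder stand-in for the language model used in the actual work): ciphertext is close to uniformly random, so a sliding window over a program’s bytes spikes wherever an encrypted payload sits.

```python
import math
import os
from collections import Counter

def window_entropy(data, size=1024, step=256):
    """Yield (offset, Shannon entropy in bits per byte) over sliding windows."""
    for start in range(0, max(1, len(data) - size + 1), step):
        window = data[start:start + size]
        n = len(window)
        yield start, -sum((c / n) * math.log2(c / n) for c in Counter(window).values())

# Synthetic "program": repetitive code-like bytes around an encrypted blob.
program = b"mov eax, ebx; " * 300 + os.urandom(4096) + b"ret; " * 600
for offset, h in window_entropy(program):
    if h > 7.0:  # near-random bytes: likely ciphertext (or compressed data)
        print(f"high-entropy region at offset {offset}: {h:.2f} bits/byte")
```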

Continue reading ACE-CSR opening event 2015/16: talks on malware, location privacy and wiretap law

An Analysis of Reshipping Mule Scams

Credit cards are a popular target for cybercriminals. Miscreants infect victim computers with malware that reports back to its command-and-control servers any credit card information that the user enters, or they compromise large retail stores, stealing their customers’ credit card information. After obtaining credit card details from their victims, cybercriminals face the problem of monetising that information. As we recently covered on this blog, cybercriminals monetise stolen credit cards by cloning them and using very clever tricks to bypass the Chip and PIN verification mechanisms. This way they are able to use the counterfeit credit card in a physical store, purchase expensive items such as cigarettes, and re-sell them for a profit.

Another possible way for cybercriminals to monetise stolen credit cards is by purchasing goods from online stores. To this end, they need more information than is contained on the credit card alone: for those of you who are familiar with online shopping, some merchants also require a billing address to allow the purchase (a so-called “card not present” transaction). This additional information is often available to the criminal – it might, for example, have been retrieved together with the credit card credentials as part of a data breach against an online retailer. When purchasing goods online, cybercriminals face the issue of shipping: if they shipped the stolen goods to their home address, this would make it easy for law enforcement to find and arrest them. For this reason, miscreants need intermediaries in the shipping process.

In our recent paper, which was presented at the ACM Conference on Computer and Communications Security (CCS), we analyse a criminal scheme designed to help miscreants who wish to monetise stolen credit cards as we described: a cybercriminal (called an operator) recruits unsuspecting citizens with the promise of a rewarding work-from-home job. This job involves receiving packages at home and re-shipping them to a different address, provided by the operator. By accepting the job, people unknowingly become part of a criminal operation: the packages that they receive at their home contain stolen goods, and the shipping destinations are often overseas, typically in Russia. These shipping agents are commonly known as reshipping mules (or drops for stuff in the underground community). The operator then rents the shipping mules as a service to cybercriminals wanting to ship stolen goods abroad. The cybercriminals taking advantage of such services are known as stuffers in the underground community. As the price for the service, the stuffer pays the operator a commission for each package reshipped through the service.


In collaboration with the FBI and the United States Postal Inspection Service (USPIS), we conducted a study of such reshipping scam sites. This study involved data from seven different reshipping sites, and provides the research community with invaluable insights into how these operations are run. We observed that the vast majority of the re-shipped packages end up in the Moscow, Russia area, and that the goods purchased with stolen credit cards span multiple categories, from expensive electronics such as Apple products, to designer clothes, to DSLR cameras and even weapon accessories. Given the amount of goods shipped by the reshipping mule sites that we analysed, the annual revenue generated by such operations can range between 1.8 and 7.3 million US dollars. The overall losses are much higher though: the online merchant loses an expensive item from its inventory and typically has to refund the owner of the stolen credit card. In addition, the rogue goods typically travel labelled as “second hand goods” and therefore customs duties are also evaded. Once the items purchased with stolen credit cards reach their destination, they will be sold on the black market by cybercriminals.

Studying the management of the mules led us to some surprising findings. When applying for the job, people are usually required to send the operator copies of their ID cards and passport. After they are hired, mules are promised payment at the end of their first month of employment. However, from our data it is clear that mules are usually never paid. After their first month expires, they are never contacted again by the operator, who simply moves on and hires new mules. In other words, the mules become victims of this scam themselves, never seeing a penny. Moreover, because they sent copies of their documents to the criminals, mules can potentially become victims of identity theft.

Our study is the first to shed light on these monetisation schemes linked to credit card fraud. We believe the insights in this paper can give law enforcement and researchers a better understanding of the cybercriminal ecosystem and allow them to develop more effective mitigation techniques against these problems.

Just how sophisticated will card fraud techniques become?

In late 2009, my colleagues and I discovered a serious vulnerability in EMV, the most widely used standard for smart card payments, known as “Chip and PIN” in the UK. We showed that it was possible for criminals to use a stolen credit or debit card without knowing the PIN, by tricking the terminal into thinking that any PIN is correct. We gave the banking industry advance notice of our discovery in early December 2009, to give them time to fix the problem before we published our research. After this period expired (two months, in this case), we published our paper, as well as explaining our results to the public on BBC Newsnight. We demonstrated that this vulnerability was real using a proof-of-concept system built from equipment we had available (an off-the-shelf laptop and card reader, an FPGA development board, and a hand-made card emulator).

No-PIN vulnerability demonstration
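In protocol terms, the attack inserts a “wedge” between the genuine card and the terminal. The wedge intercepts the terminal’s VERIFY (PIN check) command and answers success itself, without consulting the card: the card believes no PIN verification was attempted, while the terminal believes the PIN was correct. A conceptual sketch, with the APDU handling heavily simplified:

```python
INS_VERIFY = 0x20          # ISO 7816-4 instruction byte for VERIFY (PIN check)
SW_SUCCESS = b"\x90\x00"   # status word meaning "command completed successfully"

def wedge(apdu: bytes, forward_to_card) -> bytes:
    """Man-in-the-middle between the terminal and the genuine stolen card."""
    if apdu[1] == INS_VERIFY:
        # Never forward the PIN. The terminal sees "PIN OK", while the card,
        # never having been asked for a PIN, proceeds as if another form of
        # cardholder verification (e.g. signature) were in use.
        return SW_SUCCESS
    # All other commands pass through to the genuine card unchanged.
    return forward_to_card(apdu)
```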

After the programme aired, the response from the banking industry dismissed the possibility that the vulnerability would be successfully exploited by criminals. The banking trade body, the UK Cards Association, said:

“We believe that this complicated method will never present a real threat to our customers’ cards. … Neither the banking industry nor the police have any evidence of criminals having the capability to deploy such sophisticated attacks.”

Similarly, EMVCo, who develop the EMV standards, said:

“It is EMVCo’s view that when the full payment process is taken into account, suitable countermeasures to the attack described in the recent Cambridge Report are already available.”

It was therefore interesting to see that in May 2011, criminals were caught having stolen cards in France and then exploited a variant of this vulnerability to buy over €500,000 worth of goods in Belgium (which were then re-sold). At the time, not many details were available, but it seemed that the techniques the criminals used were much more sophisticated than our proof-of-concept demonstration.

We now know more about what actually happened, as well as the banks’ response, thanks to a paper by the researchers who performed the forensic analysis that formed part of the criminal investigation of this case. It shows just how sophisticated criminals could be, given sufficient motivation, contrary to the expectations in the original banking industry response.

Continue reading Just how sophisticated will card fraud techniques become?

Gianluca Stringhini – Cyber criminal operations and developing systems to defend against them

Gianluca Stringhini’s research focuses on studying cyber criminal operations and developing systems to defend against them.

Such operations tend to follow a common pattern. First, the criminal operator lures a user into visiting a website and tries to infect their computer with malware. Once infected, the computer is joined to a botnet. From there, it is instructed to perform malicious activities on the criminal’s behalf. Stringhini, whose UCL appointment is shared between the Department of Computer Science and the Department of Security and Crime Science, has studied all three of these stages.

Stringhini, who is from Genoa, developed his interest in computer security at college: “I was doing the things that all college students are doing, hacking, and breaking into systems. I was always interested in understanding how computers work and how one could break them. I started playing in hacking competitions.”

At the beginning, these competitions were just for fun, but those efforts became more serious when he arrived in 2008 at UC Santa Barbara, which featured one of the world’s best hacking teams, a perennial top finisher in Defcon’s Capture the Flag competition. It was at Santa Barbara that his interest in cyber crime developed, particularly in botnets and the complexity and skill of the operations that created them. He picked the US after Christopher Kruegel, whom he knew by email, invited him to Santa Barbara for an internship. He liked it, so he stayed and did a PhD studying the way criminals use online services such as social networks.

“Basically, the idea is that if you have an account that’s used by a cyber criminal it will be used differently than one used by a real person because they will have a different goal,” he says. “And so you can develop systems that learn about these differences and detect accounts that are misused.” Even if the attacker tries to make their behaviour closely resemble the user’s own, ultimately spreading malicious content isn’t something normal users intend to do, and the difference is detectable.
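In machine-learning terms, this amounts to extracting per-account behavioural features and training a classifier on labelled examples of misused and legitimate accounts. The sketch below is illustrative only: the features and numbers are invented for the example, not taken from Stringhini’s papers.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [messages per day, fraction containing links, distinct recipients per day]
X_train = [
    [5.0, 0.10, 3.0],    # legitimate account
    [4.2, 0.05, 2.5],    # legitimate account
    [80.0, 0.90, 60.0],  # misused account: mass link-spamming
    [55.0, 0.80, 45.0],  # misused account
]
y_train = [0, 0, 1, 1]   # 0 = legitimate, 1 = misused

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(model.predict([[70.0, 0.85, 50.0]]))  # [1]: flagged as misused
```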

This idea and Stringhini’s resulting PhD research led to his most significant papers to date.

Continue reading Gianluca Stringhini – Cyber criminal operations and developing systems to defend against them