Are Payment Card Contracts Unfair?

While US bank customers are almost completely protected against fraudulent transactions, in Europe banks are entitled to refuse to reimburse victims of fraud under certain circumstances. The EU Payment Services Directive (PSD) is supposed to protect customers, but if the bank can show that the customer has been “grossly negligent” in following the terms and conditions associated with their account, then the PSD permits the bank to pass the cost of any fraud on to the customer. The bank doesn’t have to show how the fraud happened, just that the most likely explanation is that the customer failed to follow one of the rules set out by the bank on how to protect the account. To be certain of obtaining a refund, a customer must be able to show that he or she complied with every security-related clause of the terms and conditions, or that the fraud was a result of a flaw in the bank’s security.

The bank terms and conditions, and how customers comply with them, are therefore of critical importance for consumer protection. We set out to answer the question: are these terms and conditions fair, taking into account how customers use their banking facilities? We focussed on ATM payments, and in particular how customers manage PINs, because ATM fraud losses are paid by the banks rather than retailers, so banks have more incentive to pass losses on to the customer. In our paper – “Are Payment Card Contracts Unfair?” – published at Financial Cryptography 2016, we show that customers have too many PINs to remember them unaided, and that it is therefore unrealistic to expect customers to comply with all the rules banks set: to choose unguessable PINs, not write them down, and not use them elsewhere (even with different banks). We find that, as a result of these unrealistic expectations, customers do indeed make use of coping mechanisms which reduce security and violate terms and conditions, putting them in a weak position should they be the victim of fraud.

We surveyed 241 UK bank customers and found that 19% of customers have four or more PINs, and that 48% of PINs are used at most once a month. As a result of interference (one memory being confused with another) and forgetting over time (a memory that is not exercised frequently will be lost), it is infeasible for typical customers to remember all their bank PINs unaided. It is therefore inevitable that customers forget PINs (a quarter of our participants had forgotten a 4-digit PIN at least once) and take steps to help them recall PINs. Of our participants, 33% recorded their PIN (most commonly in a mobile phone, notebook or diary) and 23% re-used their PIN elsewhere (most commonly to unlock their mobile phone). Both of these coping mechanisms would leave customers at risk of being found liable for fraud.

Customers also use the same PIN on several cards to reduce the burden of remembering PINs – 16% of our participants stated they used this technique, with the same PIN being used on up to 9 cards. Because each card allows the criminal 6 guesses at a PIN (3 on the card itself, and 3 at an ATM), this gives criminals an excellent opportunity to guess PINs and again leaves the customer responsible for the losses. Such attacks are made easier by the fact that customers can change their PIN to one which is easier to remember, but probably also easier for criminals to guess (13% of our participants used a mnemonic, most commonly deriving the PIN from a specific date). Bonneau et al. studied in more detail exactly how bank customers select PINs.
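
A rough calculation shows how quickly PIN reuse compounds the risk. The sketch below assumes PINs are chosen uniformly at random, which understates the real danger given how skewed actual PIN choices are (per Bonneau et al.):

```python
# Back-of-the-envelope estimate (uniform-PIN assumption) of how PIN
# reuse multiplies a thief's chances: each stolen card buys 6 guesses
# at the one shared PIN.

GUESSES_PER_CARD = 6   # 3 attempts on the chip itself + 3 at an ATM
PIN_SPACE = 10_000     # 4-digit PINs, assumed equally likely

for cards in (1, 4, 9):
    p = min(1.0, cards * GUESSES_PER_CARD / PIN_SPACE)
    print(f"{cards} card(s) sharing a PIN: {p:.2%} chance of a correct guess")
```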

Finally we found that PINs are regularly shared with other people, most commonly with a spouse or partner (32% of our participants). Again this violates bank terms and conditions and so puts customers at risk of being held liable for fraud.

Holding customers liable for not being able to follow unrealistic, vague and contradictory advice is grossly unfair to fraud victims. The Payment Services Directive is being revised, and in our submission to the consultation by the European Banking Authority we ask that banks only be permitted to pass fraud losses on to customers if they use authentication mechanisms which are feasible to use without undue effort, given the context of how people actually use banking facilities in normal life. Alternatively, regulators could adopt the tried and tested US model of strong consumer protection, and allow banks to manage risks through fraud detection. The increased trust from this approach might increase transaction volumes and profit for the industry overall.


“Are Payment Card Contracts Unfair?” by Steven J. Murdoch, Ingolf Becker, Ruba Abu-Salma, Ross Anderson, Nicholas Bohm, Alice Hutchings, M. Angela Sasse, and Gianluca Stringhini will be presented at Financial Cryptography and Data Security, Barbados, 22–26 February 2016.

Insecure by design: protocols for encrypted phone calls

The MIKEY-SAKKE protocol is being promoted by the UK government as a better way to secure phone calls. The reality is that MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance, through the introduction of a backdoor based around mandatory key escrow. This weakness has implications which go further than just the security of phone calls.

The current state of security for phone calls leaves a lot to be desired. Land-line calls are almost entirely unencrypted, and cellphone calls are also unencrypted except for the radio link between the handset and the phone network. While the latest cryptography standards for cellphones (3G and 4G) are reasonably strong, it is possible to force a phone to fall back to older standards with easy-to-break cryptography, if any. The vast majority of phones will not reveal to their user whether such an attack is under way.

The only reason that eavesdropping on land-line calls is not commonplace is that getting access to the closed phone networks is not as easy as getting access to the more open Internet, and cellphone cryptography designers relied on the equipment necessary to intercept the radio link being affordable only by well-funded government intelligence agencies, not by criminals or for corporate espionage. That might have been true in the past, but it is certainly no longer the case, with the necessary equipment now available for $1,500. Governments, companies and individuals are increasingly looking for better security.

A second driver for better phone call encryption is the convergence of Internet and phone networks. The LTE (Long-Term Evolution) 4G cellphone standard – developed by the 3rd Generation Partnership Project (3GPP) – carries voice calls over IP packets, and desktop phones in companies are increasingly carrying voice over IP (VoIP) too. Because voice calls may travel over the Internet, whatever security was offered by the closed phone networks is gone, and so other security mechanisms are needed.

Like Internet data encryption, voice encryption can broadly be categorised as either link encryption, where each intermediary may encrypt data before passing it on to the next, or end-to-end encryption, where communications are encrypted such that only the legitimate end-points have access to the unencrypted communication. End-to-end encryption is preferable for security because it prevents intermediaries from eavesdropping on communications and gives the end-points assurance that communications will indeed be encrypted all the way to their other communication partner.
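
As a minimal sketch of what end-to-end means in practice (using the PyNaCl library here purely as an example; any authenticated public-key scheme would do), only the two endpoints ever hold private keys, and every intermediary merely relays opaque ciphertext:

```python
from nacl.public import PrivateKey, Box

# Alice and Bob each generate a private key that never leaves their device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's public key; the network carries only ciphertext.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"hello, Bob")

# Only Bob's private key (with Alice's public key, for authentication)
# can recover the plaintext; no intermediary can.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"hello, Bob"
```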

Current cellphone encryption standards are link encryption: the phone encrypts calls between itself and the phone network using cryptographic keys stored on the Subscriber Identity Module (SIM). Within the phone network, encryption may also be present, but the network provider still has access to unencrypted data. So even ignoring the vulnerability to fall-back attacks on the radio link, the network providers and their suppliers are weak points that are tempting for attackers to compromise. Recent examples of such attacks include the compromise of the phone networks of Vodafone in Greece (2004) and Belgacom in Belgium (2012), and the SIM card supplier Gemalto in France (2010). The identity of the Vodafone Greece hacker remains unknown (though the NSA is suspected), but the attacks against Belgacom and Gemalto were carried out by the UK signals intelligence agency – GCHQ – and only publicly revealed through the Snowden leaks, so it is quite possible there are other attacks which remain hidden.

Email is typically only secured by link encryption, if at all, with HTTPS encrypting access to most webmail and Transport Layer Security (TLS) sometimes encrypting other communication protocols that carry email (SMTP, IMAP and POP). Again, the fact that intermediaries have access to plaintext creates a vulnerability, as demonstrated by the 2009 hack of Google’s Gmail likely originating from China. End-to-end email encryption is possible using the OpenPGP or S/MIME protocols but their use is not common, primarily due to their poor usability, which in turn is at least partially a result of having to stay compatible with older insecure email standards.

In contrast, instant messaging applications have had more opportunity to start with a clean slate (because there is no expectation of compatibility among different networks), and so this is where much innovation in end-to-end security has taken place. Secure voice communication, however, has had less attention than instant messaging, so in the remainder of the article we shall examine what should be expected of a secure voice communication system, and in particular see how one of the latest and upcoming protocols, MIKEY-SAKKE, which comes with UK government backing, meets these criteria.

MIKEY-SAKKE and Secure Chorus

MIKEY-SAKKE is the security protocol behind the Secure Chorus voice (and also video) encryption standard, commissioned and designed by GCHQ through their information security arm, CESG. GCHQ have announced that they will only certify voice encryption products through their Commercial Product Assurance (CPA) security evaluation scheme if the product implements MIKEY-SAKKE and Secure Chorus. As a result, MIKEY-SAKKE has a monopoly over the vast majority of classified UK government voice communication, and companies developing secure voice communication systems must implement it in order to gain access to this market. GCHQ can also set requirements for which products are used in the public sector, as well as by companies operating critical national infrastructure.

UK government standards are also influential in guiding purchase decisions outside of government, and we are already seeing MIKEY-SAKKE marketed commercially as “government-grade security”, capitalising on its approval for use in the UK government. For this reason, and also because GCHQ have provided implementers with a free open-source library to make it easier and cheaper to deploy Secure Chorus, we can expect wide use of MIKEY-SAKKE in industry and possibly among the public. It is therefore important to consider whether MIKEY-SAKKE is appropriate for wide-scale use. For the reasons outlined in the remainder of this article, the answer is no – MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance through key escrow, not to provide effective security.
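
The escrow problem is structural. In an identity-based scheme such as SAKKE, a key management server (KMS) derives every user's key material from a single master secret, so whoever holds that secret can recreate any user's key. The toy below (a symmetric stand-in for illustration, not the actual SAKKE mathematics) captures the property:

```python
import hmac, hashlib

MASTER_SECRET = b"held-by-the-KMS"  # hypothetical master secret

def derive_user_key(identity: str) -> bytes:
    """KMS derives each user's key from the master secret + identity."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

alice_key = derive_user_key("alice@example.com")

# Alice's calls are encrypted under alice_key, but the KMS (or anyone
# who compromises or compels it) can re-derive that key at any time:
assert derive_user_key("alice@example.com") == alice_key
```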

Continue reading Insecure by design: protocols for encrypted phone calls

How Tor’s privacy was (momentarily) broken, and the questions it raises

Just how secure is Tor, one of the most widely used internet privacy tools? Court documents released from the Silk Road 2.0 trial suggest that a “university-based research institute” provided information that broke Tor’s privacy protections, helping identify the operator of the illicit online marketplace.

Silk Road and its successor Silk Road 2.0 were run as a Tor hidden service, an anonymised website accessible only over the Tor network, which protects the identity of those running the site and those using it. The same technology is used to protect the privacy of visitors to other websites, including those of journalists reporting on mafia activity, search engines and social networks, so the security of Tor is of critical importance to many.

How Tor’s privacy shield works

Almost 97% of Tor traffic comes from those using Tor to anonymise their use of standard websites outside the network. To do so, a path is created through the Tor network via three computers (nodes) selected at random: a first node entering the network, a middle node, and a final node from which the communication exits the Tor network and passes to the destination website. The first node knows the user’s address, the last node knows the site being accessed, but no node knows both.

The remaining 3% of Tor traffic is to hidden services. These websites use “.onion” addresses stored in a hidden service directory. The user first requests information on how to contact the hidden service website, then both the user and the website make the three-hop path through the Tor network to a rendezvous point which joins the two connections and allows both parties to communicate.

In both cases, if a malicious operator simultaneously controls both the first and last nodes of a connection through the Tor network, it is possible to link the incoming and outgoing traffic and potentially identify the user. To prevent this, the Tor network was designed from the outset to have sufficient diversity in terms of who runs nodes and where they are located – and the way that nodes are selected avoids choosing closely related nodes, so as to reduce the likelihood of a user’s privacy being compromised.

How Tor works (source: EFF)

This type of design is known as distributed trust: compromising any single computer should not be enough to break the security the system offers (although compromising a large proportion of the network is still a problem). Distributed trust systems protect not only the users, but also the operators; because the operators cannot break the users’ anonymity – they do not have the “keys” themselves – they are less likely to be targeted by attackers.
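
To get a feel for the residual risk, here is a minimal simulation. It assumes nodes are picked uniformly at random and ignores Tor's real-world refinements, such as bandwidth weighting and long-lived entry guards, which exist precisely to reduce this exposure:

```python
import random

def compromised_circuit_rate(c: float, trials: int = 100_000) -> float:
    """Fraction of circuits where the adversary holds both entry and exit."""
    hits = 0
    for _ in range(trials):
        entry_bad = random.random() < c   # adversary runs the entry node
        exit_bad = random.random() < c    # adversary runs the exit node
        hits += entry_bad and exit_bad
    return hits / trials

for c in (0.01, 0.05, 0.10):
    print(f"adversary runs {c:.0%} of nodes: "
          f"~{compromised_circuit_rate(c):.2%} of circuits exposed (≈ c²)")
```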

Unpeeling the onion skin

With about 2m daily users, Tor is by far the most widely used privacy system and is considered one of the most secure, so research that demonstrates the existence of a vulnerability is important. Most research examines how to increase the likelihood of an attacker controlling both the first and last node in a connection, or how to link incoming traffic to outgoing.

When the 2014 programme for the annual Black Hat conference was announced, it included a talk by a team of researchers from CERT, a Carnegie Mellon University research institute, claiming to have found a means to compromise Tor. But the talk was cancelled and, unusually, the researchers did not give the Tor Project advance notice of the vulnerability so that it could be examined and fixed where necessary.

This decision was particularly strange given that CERT acts as the worldwide coordinator for ensuring that software vendors are notified of vulnerabilities in their products, so the vendors can fix them before criminals exploit them. However, the CERT researchers gave enough hints that Tor developers were able to investigate what had happened. When they examined the network, they found someone was indeed attacking Tor users using a technique that matched CERT’s description.

The multiple node attack

The attack turned on a means to tamper with a user’s traffic as they looked up the .onion address in the hidden service directory, or in the hidden service’s traffic as it uploaded the information to the directory.

Continue reading How Tor’s privacy was (momentarily) broken, and the questions it raises

Forced authorisation chip and PIN scam hitting high-end retailers

Chip and PIN was designed to prevent fraud, but it also created a new opportunity for criminals that is taking retailers by surprise. Known as “forced authorisation”, the fraud requires no special equipment, and when it works, it works big: in one transaction a jeweller’s shop lost £20,500. This type of fraud is already a problem in the UK, and now that US retailers have made it through the first Black Friday since the Chip and PIN deadline, criminals there will be looking into what new fraud techniques are available.

The fraud works when the retailer has a one-piece Chip and PIN terminal that’s passed between the customer and retailer during the course of the transaction. This type of terminal is common, particularly in smaller shops and restaurants, as it is cheaper than a terminal with a separate PIN pad (at least until a fraud happens).

The way forced authorisation fraud works is that the retailer sets up the terminal for a transaction by inserting the customer’s card and entering the amount, then hands the terminal over to the customer so they can type in the PIN. But the criminal has used a stolen or counterfeit card, and due to the high value of the transaction the terminal performs a “referral” — asking the retailer to call the bank to perform additional checks such as the customer answering a security question. If the security checks pass, the bank will give the retailer an authorisation code to enter into the terminal.

The problem is that when the terminal asks for these security checks, it’s still in the hands of the criminal, and it’s the criminal who follows the steps that the retailer should have. Since there’s no phone conversation with the bank, the criminal doesn’t know the correct authorisation code. But what surprises retailers is that the criminal can type in anything at this stage and the transaction will go through. The criminal might also be able to bypass other security features; for example, they could override the checking of the PIN by following the steps the retailer would take if the customer had forgotten the PIN.
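
A toy sketch of the terminal's decision logic makes the gap clear. The threshold and format check below are hypothetical (real terminals vary by vendor and acquirer), but the essential point stands: the code typed in at referral time is not verified against the bank until settlement:

```python
def referral_flow(amount_pence: int, code_entered: str) -> str:
    """Hypothetical one-piece terminal logic during a referral."""
    REFERRAL_THRESHOLD = 10_000  # hypothetical floor limit (£100)
    if amount_pence < REFERRAL_THRESHOLD:
        return "approved online"
    # High value: the retailer is *supposed* to phone the bank for a
    # code, but the terminal cannot tell who typed what while it sits
    # in the customer's hands.
    if code_entered.isdigit() and len(code_entered) >= 4:
        return "accepted (code only checked at end-of-day settlement)"
    return "declined"

print(referral_flow(2_050_000, "1234"))  # criminal types anything plausible
```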

By the time the terminal is passed back to the retailer, it looks like the transaction was completed successfully. The receipt will differ only very subtly from that of a normal transaction, if at all. The criminal walks off with the goods and it’s only at the end of the day that the authorisation code is checked by the bank. By that time, the criminal is long gone. Because some of the security checks the bank asked for weren’t completed, the retailer doesn’t get the money.

Continue reading Forced authorisation chip and PIN scam hitting high-end retailers

Just how sophisticated will card fraud techniques become?

In late 2009, my colleagues and I discovered a serious vulnerability in EMV, the most widely used standard for smart card payments, known as “Chip and PIN” in the UK. We showed that it was possible for criminals to use a stolen credit or debit card without knowing the PIN, by tricking the terminal into thinking that any PIN is correct. We gave the banking industry advance notice of our discovery in early December 2009, to give them time to fix the problem before we published our research. After this period expired (two months, in this case) we published our paper, as well as explaining our results to the public on BBC Newsnight. We demonstrated that this vulnerability was real using a proof-of-concept system built from equipment we had available (off-the-shelf laptop and card reader, FPGA development board, and hand-made card emulator).

No-PIN vulnerability demonstration
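
In essence, a man-in-the-middle device sits between the genuine card and the terminal. The sketch below (a simplified illustration; the hardware relay and the rest of the EMV exchange are omitted) shows the core trick: the EMV VERIFY command (instruction byte 0x20) carries the PIN check, and answering it directly with the success code leaves the terminal believing the PIN was correct while the card believes no PIN verification took place:

```python
VERIFY_INS = 0x20                 # EMV VERIFY command instruction byte
SW_SUCCESS = bytes([0x90, 0x00])  # ISO 7816 status word for "success"

def mitm_relay(apdu: bytes, forward_to_card) -> bytes:
    """Relay terminal commands to the card, except PIN verification."""
    if len(apdu) >= 2 and apdu[1] == VERIFY_INS:
        # Swallow the VERIFY: the card never sees a PIN attempt, yet
        # the terminal is told the PIN was accepted.
        return SW_SUCCESS
    return forward_to_card(apdu)  # everything else passes through

# Example: a VERIFY APDU never reaches the card and still "succeeds".
print(mitm_relay(bytes([0x00, 0x20, 0x00, 0x80, 0x08]), lambda a: b"").hex())
```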

After the programme aired, the response from the banking industry dismissed the possibility that the vulnerability would be successfully exploited by criminals. The banking trade body, the UK Cards Association, said:

“We believe that this complicated method will never present a real threat to our customers’ cards. … Neither the banking industry nor the police have any evidence of criminals having the capability to deploy such sophisticated attacks.”

Similarly, EMVCo, who develop the EMV standards, said:

“It is EMVCo’s view that when the full payment process is taken into account, suitable countermeasures to the attack described in the recent Cambridge Report are already available.”

It was therefore interesting to see that in May 2011, criminals were caught who had stolen cards in France and then exploited a variant of this vulnerability to buy over €500,000 worth of goods in Belgium (which were then re-sold). At the time, not many details were available, but it seemed that the techniques the criminals used were much more sophisticated than our proof-of-concept demonstration.

We now know more about what actually happened, as well as the banks’ response, thanks to a paper by the researchers who performed the forensic analysis that formed part of the criminal investigation of this case. It shows just how sophisticated criminals could be, given sufficient motivation, contrary to the expectations in the original banking industry response.

Continue reading Just how sophisticated will card fraud techniques become?

What are the social costs of contactless fraud?

Contactless payments are in the news again: in the UK the spending limit has been increased from £20 to £30 per transaction, and in Australia the Victoria Police has argued that contactless payments are to blame for an extra 100 cases of credit card fraud per week. These frauds are ones where multiple transactions are put through, keeping each under the AUS $100 (about £45) limit. UK news coverage has instead focussed on the potential for cross-channel fraud, where card details are skimmed from contactless cards and then used for fraudulent online purchases. In a demonstration, Which? skimmed volunteers’ cards at a distance, then bought a £3,000 TV with the card numbers and expiry dates recorded.

The media have been presenting contactless payments as insecure; the response from the banking industry has been to point out that customers are not liable for the fraudulent transactions. Both are in some ways correct, but in other ways miss the point.

The law in the UK (Payment Services Regulations (PSR) 2009, Regulation 62) does indeed say that customers are entitled to a refund for fraudulent transactions. However, a bank will only give one if it is convinced the customer did not authorise the transaction and was not negligent. In my experience, a customer who is unable to clearly, concisely and confidently explain why they are entitled to a refund runs a high risk of not getting one. This will disproportionately disadvantage the more vulnerable members of society.

Continue reading What are the social costs of contactless fraud?

Banks undermine chip and PIN security because they see profits rise faster than fraud

The Chip and PIN card payment system has been mandatory in the UK since 2006, but only now is it being slowly introduced in the US. In western Europe more than 96% of card transactions in the last quarter of 2014 used chipped credit or debit cards, compared to just 0.03% in the US.

Yet at the same time, in the UK and elsewhere a new generation of Chip and PIN cards have arrived that allow contactless payments – transactions that don’t require a PIN code. Why would card issuers offer a means to circumvent the security Chip and PIN offers?

Chip and Problems

Chip and PIN is supposed to reduce two main types of fraud. Counterfeit fraud, where a fake card is manufactured based on stolen card data, cost the UK £47.8m in 2014 according to figures just released by Financial Fraud Action. The cryptographic key embedded in chip cards tackles counterfeit fraud by allowing the card to prove its identity. Extracting this key should be very difficult, while copying the details embedded in a card’s magnetic stripe from one card to another is simple.
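
The following toy model (illustrative key handling and MAC construction only; real EMV cards compute an application cryptogram with different algorithms and transaction data) shows why the chip resists counterfeiting while the magnetic stripe does not:

```python
import hashlib, hmac, os

card_secret = os.urandom(16)  # stays inside the chip's tamper-resistant store

def chip_cryptogram(amount_pence: int, nonce: bytes) -> bytes:
    """The chip proves key possession with a fresh MAC per transaction."""
    msg = amount_pence.to_bytes(8, "big") + nonce
    return hmac.new(card_secret, msg, hashlib.sha256).digest()

# The stripe is static data: a copy is indistinguishable from the original.
magstripe = b"PAN;EXPIRY;SERVICE-CODE"

nonce = os.urandom(8)  # unpredictable number supplied per transaction
print(chip_cryptogram(1_999, nonce).hex())  # differs for every transaction
```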

The second type of fraud is where a genuine card is used, but by the wrong person. Chip and PIN makes this more difficult by requiring users to enter a PIN code, one (hopefully) not known to the criminal who took the card. Financial Fraud Action separates this into those cards stolen before reaching their owner (at a cost of £10.1m in 2014) and after (£59.7m).

Unfortunately Chip and PIN doesn’t work as well as was hoped. My research has shown how it’s possible to trick cards into accepting the wrong PIN and to produce cloned cards that terminals won’t detect as fake. Nevertheless, the widespread introduction of Chip and PIN has succeeded in forcing criminals to change tactics: £331.5m of UK card fraud in 2014 (69% of the total) came through telephone, internet and mail order purchases (known as “cardholder not present” fraud) that don’t involve the chip at all. That’s why there’s some surprise over the introduction of less secure contactless cards.

Continue reading Banks undermine chip and PIN security because they see profits rise faster than fraud

Why Bentham’s Gaze?

Why is this blog called “Bentham’s Gaze”? Jeremy Bentham (1748–1832) was a philosopher, jurist and social reformer. Although he took no direct role in the creation of UCL (despite the myth), Bentham can be considered its spiritual founder, with his ideas being embodied in the institution. Notably, UCL went a long way towards fulfilling Bentham’s desire of widening access to education, by being the first English university to admit students regardless of class, race or religion, and to welcome women on equal terms with men.

Bentham’s Gaze refers not just to his vision of education but also to the Panopticon – a design proposed for a prison where all inmates in the circular building are potentially under continual observation from a central inspection house. Importantly, inmates would not be able to tell whether they were actively being observed and so the hope was that good behaviour would be encouraged without the high cost of actually monitoring everyone. Although no prison was created exactly to Bentham’s design, some (e.g. Presidio Modelo in Cuba) have notable similarities and pervasive CCTV can be seen as a modern instantiation of the same principles.

Finally, the more corporeal aspect of the blog name is that UCL hosts Bentham’s Auto-Icon – a case containing his preserved skeleton with a wax head, seated in a chair, and dressed in his own clothes. The Auto-Icon’s construction was specified in Bentham’s will, and since 1850 it has been cared for by UCL. His head was also preserved, but was judged unsuitable for public display and so is stored by UCL Museums. Many of the staff and students at UCL walk in view of Bentham while crossing the campus.

You too can now enjoy Bentham’s Gaze thanks to the UCL PanoptiCam – a webcam attached to the top of the Auto-Icon, as you can see below from my photo of it (and its photo of me). Footage from the camera is both on Twitter and YouTube, with highlights and discussion on @Panopticam.

UCL Panopticam

View from PanoptiCam (2015-02-19)

Tor: the last bastion of online anonymity, but is it still secure after Silk Road?

The Silk Road trial has concluded, with Ross Ulbricht found guilty of running the anonymous online marketplace for illegal goods. But questions remain over how the FBI found its way through Tor, the software that allows anonymous, untraceable use of the web, to gather the evidence against him.

The development of anonymising software such as Tor and Bitcoin has forced law enforcement to develop the expertise needed to identify those using them. But if anything, what we know about the FBI’s case suggests it was tip-offs, inside men, confessions, and Ulbricht’s own errors that were responsible for his conviction.

This is the main problem with these systems: breaking or circumventing anonymity software is hard, but it’s easy to build up evidence against an individual once you can target surveillance, and wait for them to slip up.

The problem

A design decision in the early days of the internet led to a problem: every message sent is tagged with the numerical Internet Protocol (IP) addresses that identify the source and destination computers. The network address indicates how and where to route the message, but there is no equivalent indicating the identity of the sender or intended recipient.

This conflation of addressing and identity is bad for privacy. Any internet traffic you send or receive will have your IP address attached to it. Typically a computer will only have one public IP address at a time, which means your online activity can be linked together using that address. Whether you like it or not, marketers, criminals and investigators use this sort of profiling without consent all the time. IP addresses are allocated geographically and on a per-organisation basis, so it’s even possible to pinpoint a user’s location with surprising accuracy.
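
A few lines of standard-library Python are enough to watch the mechanism at work; any server learns a client's address the moment it accepts the connection:

```python
import socket

# A server learns the client's IP address from the network layer itself,
# before any application data is exchanged.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # any free port on localhost
server.listen(1)
port = server.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, addr = server.accept()
print(f"server sees client at {addr[0]}:{addr[1]}")  # the linkable identifier

conn.close(); client.close(); server.close()
```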

This conflation of addressing and identity is also bad for security. The routing protocols which establish the best route between two points on the internet are not secure, and have been exploited by attackers to take control of (hijack) IP addresses they don’t legitimately own. Such attackers then have access to network traffic destined for the hijacked IP addresses, and also to anything the legitimate owner of the IP addresses should have access to.

Continue reading Tor: the last bastion of online anonymity, but is it still secure after Silk Road?