Steven Murdoch – Privacy and Financial Security

Probably not too many academic researchers can say this: some of Steven Murdoch’s research leads have arrived in unmarked envelopes. Murdoch, who has moved to UCL from the University of Cambridge, works primarily in the areas of privacy and financial security, including a rare specialty you might call “crypto for the masses”. It’s the financial security aspect that produces the plain brown envelopes, and also what may be his most satisfying work: “Trying to help individuals when they’re having trouble with huge organisations”.

Murdoch’s work has a twist: “Usability is a security requirement,” he says. As a result, besides writing research papers and appearing as an expert witness, his past includes a successful start-up. Cronto, which developed a usable authentication device, was acquired by VASCO, a market leader in authentication, and its technology is now used by banks such as Commerzbank and Rabobank.

Developing the Cronto product was, he says, an iterative process that relied on real-world testing: “In research into privacy, if you build an unusable system, two things will go wrong,” he says. “One, people won’t use it, so there’s a smaller crowd to hide in.” This issue affects anonymising technologies such as Mixmaster and Mixminion. “In theory they have better security than Tor but no one is using them.” And two, he says, “People make mistakes.” A non-expert user of PGP, for example, can’t always accurately identify which parts of the message are signed and which aren’t.

The start-up experience taught Murdoch how difficult it is to get an idea from research prototype to product, not least because what works in a small case study may not work when deployed at scale. “Selling privacy remains difficult,” he says, noting that Cronto had an easier time than some of its forerunners since the business model called for sales to large institutions. The biggest challenge, he says, was not consumer acceptance but making a convincing case that the predicted threats would materialise and that a small company could deliver an acceptable solution.


Do you know what you’re paying for? How contactless cards are still vulnerable to relay attack

Contactless card payments are fast and convenient, but convenience comes at a price: they are vulnerable to fraud. Some of these vulnerabilities are unique to contactless payment cards, and others are shared with the Chip and PIN cards – those that must be inserted into a card reader – upon which they’re based. Both are vulnerable to what’s called a relay attack. The risk for contactless cards, however, is far higher because no PIN is required to complete the transaction. Consequently, the card payments industry has been working on ways to solve this problem.

The relay attack is also known as the “chess grandmaster attack”, by analogy to the ruse in which someone who doesn’t know how to play chess can beat an expert: the player simultaneously challenges two grandmasters to an online game of chess, and uses the moves chosen by the first grandmaster in the game against the second grandmaster, and vice versa. By relaying the opponents’ moves between the games, the player appears to be a formidable opponent to both grandmasters, and will win (or at least force a draw) in one match.

Similarly, in a relay attack the fraudster’s fake card doesn’t know how to respond properly to the payment terminal because, unlike a genuine card, it doesn’t contain the cryptographic key known only to the card and the bank that verifies the card is genuine. But like the fake chess grandmaster, the fraudster can relay the communication of the genuine card in place of the fake card.

For example, the victim’s card (Alice, in the diagram below) would be in a fake or hacked card payment terminal (Bob) and the criminal would use the fake card (Carol) to attempt a purchase in a genuine terminal (Dave). The bank would challenge the fake card to prove its identity; this challenge is relayed to the genuine card in the hacked terminal, and the genuine card’s response is relayed back on behalf of the fake card to the bank for verification. The end result is that the terminal used for the real purchase sees the fake card as genuine, and the victim later finds an unexpected and expensive purchase on their statement.

A rigged payment terminal capable of performing the relay attack can be made from off-the-shelf components
The relay attack, where the cards and terminals can be at any distance from each other
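
To make the message flow concrete, here is a minimal Python sketch of the relay – a toy model of my own for illustration, not the real EMV protocol. The card secret, the HMAC-based challenge-response and the class names are all assumptions made for the example, but the structure shows why the bank’s check succeeds even though the fake card holds no key.

import hmac, hashlib, os

CARD_SECRET = os.urandom(16)  # toy stand-in: known only to the genuine card and the bank

class GenuineCard:
    def respond(self, challenge: bytes) -> bytes:
        # Only the genuine card can compute this, because only it holds the key.
        return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

class RiggedTerminal:
    """Fake terminal holding the victim's card (Alice's card, terminal 'Bob')."""
    def __init__(self, victims_card: GenuineCard):
        self.victims_card = victims_card
    def relay(self, challenge: bytes) -> bytes:
        return self.victims_card.respond(challenge)

class FakeCard:
    """Criminal's card ('Carol'), presented at the genuine terminal ('Dave')."""
    def __init__(self, radio_link: RiggedTerminal):
        self.radio_link = radio_link
    def respond(self, challenge: bytes) -> bytes:
        # The fake card cannot answer itself; it forwards the challenge over the radio link.
        return self.radio_link.relay(challenge)

def bank_verifies(card, challenge: bytes) -> bool:
    expected = hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(card.respond(challenge), expected)

victim_card = GenuineCard()
fake_card = FakeCard(RiggedTerminal(victim_card))
print(bank_verifies(fake_card, os.urandom(8)))  # True: the relayed response passes the check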

Demonstrating the grandmaster attack

With my colleague Saar Drimer at Cambridge, I first demonstrated that this vulnerability was real, showing on television how the attack could work in Britain in 2007 and in the Netherlands in 2009.

In our scenario, the victim put their card in a fake terminal thinking they were buying a coffee, when in fact their card details were relayed by a radio link to another shop, where the criminal used a fake card to buy something far more expensive. The fake terminal showed the victim only the price of a cup of coffee, but when the bank statement arrived later the victim got an unpleasant surprise.

At the time, the banking industry agreed that the vulnerability was real, but argued that as it was difficult to carry out in practice it was not a serious risk. It’s true that, to avoid suspicion, the fraudulent purchase must take place within a few tens of seconds of the victim putting their card into the fake terminal. But this restriction only applies to the Chip and PIN contact cards available at the time. The same vulnerability applies to today’s contactless cards, except that now the fraudster need only be physically near the victim at the time – contactless cards can communicate at a distance, even while the card is in the victim’s pocket or bag.


International Comparison of Bank Fraud Reimbursement: Customer Perceptions and Contractual Terms

Terms and Conditions (T&Cs) are long, convoluted, and very rarely actually read by customers. Yet when customers are subject to fraud, the content of the T&Cs, along with national regulations, matters. The ability to revoke fraudulent payments and reimburse victims of fraud is one of the main selling points of traditional payment systems, but to be reimbursed a fraud victim may need to demonstrate that they have followed the security practices set out in their contract with the bank.

Security advice in banking terms and conditions varies greatly across the world. Our study’s scope included Europe (Cyprus, Denmark, Germany, Greece, Italy, Malta, and the United Kingdom), the United States, Africa (Algeria, Kenya, Nigeria, and South Africa), the Middle East (Bahrain, Egypt, Iraq, Jordan, Kuwait, Lebanon, Oman, Palestine, Qatar, Saudi Arabia, UAE and Yemen), and East Asia (Singapore). Of the 30 banks’ terms and conditions studied, 26 give more or less specific advice on how you may store your PIN. The advice ranges from “Never writing the Customer’s password or security details down in a way that someone else could easily understand” (Arab Banking Corp, Algeria) and “If the Customer makes a written record of any PIN Code or security procedure, the Customer must make reasonable effort to disguise it and must not keep it with the card for which it is to be used” (National Bank of Kenya) to “any record of the PIN is kept separate from the card and in a safe place” (Nedbank, South Africa).

Half of the T&Cs studied give advice on choosing and changing one’s PIN. Some banks ask customers to choose a new PIN immediately upon receiving one from the bank; others include no provision for customers to change their PIN at all. Some banks give specific advice on how to choose a PIN:

When selecting a substitute ATM-PIN, the Customer shall refrain from selecting any series of consecutive or same or similar numbers or any series of numbers which may easily be ascertainable or identifiable with the Customer…

OCBC, Singapore

Only 5 banks give specific advice about whether you are allowed to re-use your PIN on other payment cards or elsewhere. There is also disagreement about what to do with the PIN advice slip, with 7 banks asking the customer to destroy it.

Some banks also include advice on Internet security. In the UK, HSBC, for example, demands that customers

always access Internet banking by typing the address into the web browser and use antivirus, antispyware and a personal firewall. If accessing Internet banking from a computer connected to a LAN or a public Internet access device or access point, they must first ensure that nobody else can observe, copy or access their account. They cannot use any software, such as browsers or password managers, to record passwords or other security details, apart from a service provided by the bank. Finally, all security measures recommended by the manufacturer of the device being used to access Internet banking must be followed, such as using a PIN to access a mobile device.

HSBC, UK

Over half of banks tell customers to use firewalls and anti-virus software. Some even recommend specific commercial software, or tell customers how to find some:

It is also possible to obtain free anti-virus protection. A search for ‘free anti-virus’ on Google will provide a list of the most popular.

Commercial International Bank, Egypt

In the second part of our paper, we investigate customers’ perceptions of banking T&Cs in three countries: Germany, the United States and the United Kingdom. We present the participants with two real-life scenarios in which individuals are subject to fraud, and ask them to decide on the outcome. We then present the participants with sections of T&Cs representative of their country and ask them to re-evaluate the outcome of the two scenarios.

Question                            DE      UK      US
Scenario 1: Card Loss               41.5%   81.5%   76.8%
Scenario 1: Card Loss after T&Cs    70.7%   66.7%   96.4%
Scenario 2: Phishing                31.7%   37.0%   35.7%
Scenario 2: Phishing after T&Cs     43.9%   46.3%   42.9%

The table above lists the percentage of participants that say that the money should be returned for each of the scenarios. We find that in all but one case, the participants are more likely to have the protagonist reimbursed after reading the terms and conditions. This is noteworthy – our participants are generally reassured by what they read in the T&Cs.

Further, we assess the participants’ comprehension of the T&Cs. Only 35% of participants fully understand the sections, but the regional variations are large: 45% of participants in the US fully understand the T&Cs, but only 22% do so in Germany. This may indeed be related to the differences in consumer protection laws between the countries: in the US, Federal regulations give consumers much stronger protections. In Germany and the UK (and indeed, throughout Europe under the EU’s Payment Services Directive), whether a victim of fraud is reimbursed depends on whether they have been grossly negligent – a term that is not clearly defined and confused our participants throughout.

 

International Comparison of Bank Fraud Reimbursement: Customer Perceptions and Contractual Terms by Ingolf Becker, Alice Hutchings, Ruba Abu-Salma, Ross Anderson, Nicholas Bohm, Steven J. Murdoch, M. Angela Sasse and Gianluca Stringhini will be presented at the Workshop on the Economics of Information Security (WEIS), Berkeley, CA USA, 13–14 June 2016.

Biometrics for payments

HSBC and First Direct recently announced that they are introducing fingerprint and voice recognition authentication for customers of online and telephone banking. In my own research, I first found nearly 20 years ago that people who have a multitude of passwords and PINs cannot manage them as security experts want them to. As the number of digital devices and services we use has increased rapidly, managing dozens of login details has become a headache for most people. We recently reported that most bank customers juggle multiple PINs, and are unable to follow the rules that banks set in their contracts. Our research also found that many people dislike the 2-factor token solutions that are currently used by many UK banks.

Passwords as most people use them today are not particularly secure. Attackers can easily attempt to collect information on individuals, using leaks of password files not properly protected by some websites, “phishing” scams or malware planted on people’s computers. Reusing a banking password on other websites – something that many of us do because we cannot remember dozens of different passwords – is also a significant security risk.

The introduction of fingerprint recognition on smartphones – such as the iPhone – has delighted many users fed up with entering their PINs dozens of times a day. So the announcement that customers of HSBC and other banks will be able to use the fingerprint sensor on their smartphones for banking means that millions of consumers will finally be able to end their battle with passwords and PINs and use biometrics instead. Other services people access from their smartphones are likely to follow suit. And given the negative impact that cumbersome authentication via passwords and PINs has on staff productivity and morale in many organisations, we can expect to see biometrics deployed in work contexts, too.

But while biometrics – unlike passwords – do not require mental gymnastics from users, they present different usability challenges. Matching the biometric to the modality of interaction – for example, voice recognition for phone-based interactions – makes authentication an easy task, but it will work considerably better in quiet environments than in noisy ones, such as a train station or anywhere many people are talking in the background. As many smartphone users have learnt, fingerprint sensors have a hard time recognising cold and wet fingers. And – as we report in a paper presented at IEEE Identity, Security and Behavior Analysis last week – privacy concerns mean some users ‘don’t like putting their face on the Internet’. Biometrics can’t come soon enough for most users, but there is still a lot of design and testing work to be done to make biometrics work for different interaction, physical and social contexts.

Are Payment Card Contracts Unfair?

While US bank customers are almost completely protected against fraudulent transactions, in Europe banks are entitled to refuse to reimburse victims of fraud under certain circumstances. The EU Payment Services Directive (PSD) is supposed to protect customers but if the bank can show that the customer has been “grossly negligent” in following the terms and conditions associated with their account then the PSD permits the bank to pass the cost of any fraud on to the customer. The bank doesn’t have to show how the fraud happened, just that the most likely explanation for the fraud is that the customer failed to follow one of the rules set out by the bank on how to protect the account. To be certain of obtaining a refund, a customer must be able to show that he or she complied with every security-related clause of the terms and conditions, or show that the fraud was a result of a flaw in the bank’s security.

The bank terms and conditions, and how customers comply with them, are therefore of critical importance for consumer protection. We set out to answer the question: are these terms and conditions fair, taking into account how customers use their banking facilities? We focussed on ATM payments, and in particular on how customers manage PINs, because ATM fraud losses are paid for by the banks rather than retailers, so there is more incentive for the bank to pass losses on to the customer. In our paper – “Are Payment Card Contracts Unfair?” – published at Financial Cryptography 2016, we show that customers have too many PINs to remember them unaided, and it is therefore unrealistic to expect customers to comply with all the rules banks set: to choose unguessable PINs, not write them down, and not use them elsewhere (even with different banks). We find that, as a result of these unrealistic expectations, customers do indeed make use of coping mechanisms which reduce security and violate terms and conditions, putting them in a weak position should they be the victim of fraud.

We surveyed 241 UK bank customers and found that 19% of customers have four or more PINs and 48% of PINs are used at most once a month. As a result of interference (one memory being confused with another) and forgetting over time (if a memory is not exercised frequently it will be lost), it is infeasible for typical customers to remember all their bank PINs unaided. It is therefore inevitable that customers forget PINs (a quarter of our participants had forgotten a 4-digit PIN at least once) and take steps to help them recall PINs. Of our participants, 33% recorded their PIN (most commonly in a mobile phone, notebook or diary) and 23% re-used their PIN elsewhere (most commonly to unlock their mobile phone). Both of these coping mechanisms would leave customers at risk of being found liable for fraud.

Customers also use the same PIN on several cards to reduce the burden of remembering PINs – 16% of our participants stated they used this technique, with the same PIN being used on up to 9 cards. Because each card allows the criminal 6 guesses at the PIN (3 on the card itself and 3 at an ATM), this gives criminals an excellent opportunity to guess PINs, again leaving the customer responsible for the losses. Such attacks are made easier by the fact that customers can change their PIN to one which is easier to remember, but also probably easier for criminals to guess (13% of our participants used a mnemonic, most commonly deriving the PIN from a specific date). Bonneau et al. studied in more detail exactly how bank customers select PINs.
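
A back-of-the-envelope calculation shows why this matters. The sketch below is my own illustration rather than a figure from the paper: the skewed-PIN parameters are assumptions, but it captures how the 6 attempts allowed per card accumulate into 6×n distinct guesses against one PIN shared across n cards.

def success_uniform(n_cards: int) -> float:
    """Chance of guessing a uniformly random 4-digit PIN, given 6 tries per card
    and the same PIN on every card (each card's retry counter is independent)."""
    guesses = 6 * n_cards
    return min(1.0, guesses / 10_000)

def success_skewed(n_cards: int, head_mass: float = 0.25, head_size: int = 20) -> float:
    """Same, but assuming (purely for illustration) that the 20 most popular
    user-chosen PINs together account for 25% of all PINs, spread evenly over
    that head, with the remaining probability mass uniform over the other PINs."""
    guesses = 6 * n_cards
    covered_head = min(guesses, head_size) / head_size * head_mass
    spare = max(0, guesses - head_size)
    return min(1.0, covered_head + spare * (1 - head_mass) / (10_000 - head_size))

for n in (1, 4, 9):
    print(f"{n} card(s): random PIN {success_uniform(n):.2%}, "
          f"skewed PIN (assumed distribution) {success_skewed(n):.2%}")

The exact numbers depend entirely on the assumed distribution; the point is simply that more cards sharing one PIN means more total guesses against that PIN, and a guessable PIN makes matters far worse.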

Finally we found that PINs are regularly shared with other people, most commonly with a spouse or partner (32% of our participants). Again this violates bank terms and conditions and so puts customers at risk of being held liable for fraud.

Holding customers liable for not being able to follow unrealistic, vague and contradictory advice is grossly unfair to fraud victims. The Payment Services Directive is being revised, and in our submission to the consultation by the European Banking Authority we ask that banks only be permitted to pass fraud losses on to customers if they use authentication mechanisms which are feasible to use without undue effort, given the context of how people actually use banking facilities in normal life. Alternatively, regulators could adopt the tried and tested US model of strong consumer protection, and allow banks to manage risks through fraud detection. The increased trust from this approach might increase transaction volumes and profit for the industry overall.

 

“Are Payment Card Contracts Unfair?” by Steven J. Murdoch, Ingolf Becker, Ruba Abu-Salma, Ross Anderson, Nicholas Bohm, Alice Hutchings, M. Angela Sasse, and Gianluca Stringhini will be presented at Financial Cryptography and Data Security, Barbados, 22–26 February 2016.

Insecure by design: protocols for encrypted phone calls

The MIKEY-SAKKE protocol is being promoted by the UK government as a better way to secure phone calls. The reality is that MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance, through the introduction of a backdoor based on mandatory key escrow. This weakness has implications which go further than just the security of phone calls.

The current state of security for phone calls leaves a lot to be desired. Land-line calls are almost entirely unencrypted, and cellphone calls are also unencrypted except for the radio link between the handset and the phone network. While the latest cryptography standards for cellphones (3G and 4G) are reasonably strong, it is possible to force a phone to fall back to older standards with easy-to-break cryptography, if any. The vast majority of phones will not reveal to their user whether such an attack is under way.

The only reason that eavesdropping on land-line calls is not commonplace is that getting access to the closed phone networks is not as easy as getting access to the more open Internet, and cellphone cryptography designers relied on the equipment necessary to intercept the radio link being affordable only by well-funded government intelligence agencies, not by criminals or those carrying out corporate espionage. That might have been true in the past, but it is certainly no longer the case, with the necessary equipment now available for $1,500. Governments, companies and individuals are increasingly looking for better security.

A second driver for better phone call encryption is the convergence of Internet and phone networks. The LTE (Long-Term Evolution) 4G cellphone standard – under development by the 3rd Generation Partnership Project (3GPP) – carries voice calls over IP packets, and desktop phones in companies are increasingly carrying voice over IP (VoIP) too. Because voice calls may travel over the Internet, whatever security was offered by the closed phone networks is gone and so other security mechanisms are needed.

Like Internet data encryption, voice encryption can broadly be categorised as either link encryption, where each intermediary may encrypt data before passing it on to the next, or end-to-end encryption, where communications are encrypted such that only the legitimate end-points have access to the unencrypted communication. End-to-end encryption is preferable for security because it prevents intermediaries from eavesdropping on communications and gives the end-points assurance that communications will indeed be encrypted all the way to their other communication partner.
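
The difference can be seen in a small sketch of my own, using the Fernet cipher from the Python cryptography package as a stand-in rather than any standardised voice protocol: with link encryption each hop decrypts and re-encrypts, so every intermediary sees the plaintext, whereas with end-to-end encryption the intermediaries only ever forward ciphertext.

from cryptography.fernet import Fernet  # third-party: pip install cryptography

endpoint_key = Fernet.generate_key()                   # shared only by the two endpoints
hop_keys = [Fernet.generate_key() for _ in range(2)]   # one key per network link

message = b"hello"

def link_encrypted_path(plaintext: bytes) -> bytes:
    """Link encryption: protected on each wire, but exposed at every hop."""
    data = plaintext
    for key in hop_keys:
        ciphertext = Fernet(key).encrypt(data)   # encrypted for this link only
        data = Fernet(key).decrypt(ciphertext)   # the hop recovers the plaintext
    return data  # every intermediary saw the plaintext

def end_to_end_path(plaintext: bytes) -> bytes:
    """End-to-end encryption: hops forward ciphertext they cannot read."""
    ciphertext = Fernet(endpoint_key).encrypt(plaintext)
    for _ in hop_keys:
        pass  # each hop just passes the ciphertext along unchanged
    return Fernet(endpoint_key).decrypt(ciphertext)

assert link_encrypted_path(message) == end_to_end_path(message) == message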

Current cellphone encryption standards are link encryption: the phone encrypts calls between it and the phone network using cryptographic keys stored on the Subscriber Identity Module (SIM). Within the phone network, encryption may also be present, but the network provider still has access to unencrypted data, so even ignoring the vulnerability to fall-back attacks on the radio link, the network providers and their suppliers are weak points that are tempting for attackers to compromise. Recent examples of such attacks include the compromise of the phone networks of Vodafone in Greece (2004) and Belgacom in Belgium (2012), and of the SIM card supplier Gemalto in France (2010). The identity of the Vodafone Greece hacker remains unknown (though the NSA is suspected), but the attacks against Belgacom and Gemalto were carried out by the UK signals intelligence agency – GCHQ – and only publicly revealed through the Snowden leaks, so it is quite possible there are other attacks which remain hidden.

Email is typically only secured by link encryption, if at all, with HTTPS encrypting access to most webmail and Transport Layer Security (TLS) sometimes encrypting other communication protocols that carry email (SMTP, IMAP and POP). Again, the fact that intermediaries have access to plaintext creates a vulnerability, as demonstrated by the 2009 hack of Google’s Gmail likely originating from China. End-to-end email encryption is possible using the OpenPGP or S/MIME protocols but their use is not common, primarily due to their poor usability, which in turn is at least partially a result of having to stay compatible with older insecure email standards.

In contrast, instant messaging applications had more opportunity to start with a clean slate (because there is no expectation of compatibility among different networks), and so this is where much of the innovation in end-to-end security has taken place. Secure voice communication, however, has had less attention than instant messaging, so in the remainder of the article we shall examine what should be expected of a secure voice communication system, and in particular see how one of the latest protocols, MIKEY-SAKKE, which comes with UK government backing, meets these criteria.

MIKEY-SAKKE and Secure Chorus

MIKEY-SAKKE is the security protocol behind the Secure Chorus voice (and also video) encryption standard, commissioned and designed by GCHQ through their information security arm, CESG. GCHQ have announced that they will only certify voice encryption products through their Commercial Product Assurance (CPA) security evaluation scheme if the product implements MIKEY-SAKKE and Secure Chorus. As a result, MIKEY-SAKKE has a monopoly over the vast majority of classified UK government voice communication, and so companies developing secure voice communication systems must implement it in order to gain access to this market. GCHQ can also set requirements for what products are used in the public sector, as well as by companies operating critical national infrastructure.

UK government standards are also influential in guiding purchase decisions outside of government, and we are already seeing MIKEY-SAKKE marketed commercially as “government-grade security”, capitalising on its approval for use in the UK government. For this reason, and also because GCHQ have provided implementers with a free open source library to make it easier and cheaper to deploy Secure Chorus, we can expect wide use of MIKEY-SAKKE in industry and possibly among the public. It is therefore important to consider whether MIKEY-SAKKE is appropriate for wide-scale use. For the reasons outlined in the remainder of this article, the answer is no – MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance through key escrow, not to provide effective security.
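
The heart of the escrow problem can be shown with a deliberately simplified sketch – my own simplification, not the actual SAKKE identity-based mathematics. When every user’s private key is derived from a master secret held by the key management server, whoever holds or compromises that master secret can re-derive any user’s key and decrypt recorded calls, without either endpoint being able to detect it.

import hmac, hashlib

MASTER_SECRET = b"held-only-by-the-key-management-server"  # the escrow point

def derive_private_key(identity: str) -> bytes:
    """The key management server derives each user's private key from their public identity."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

# Key issued to Bob when he registers with the service.
bob_key = derive_private_key("bob@example.org")

# Anyone who later obtains (or already operates) the master secret can simply
# re-derive exactly the same key from Bob's identity: no interaction with Bob,
# nothing visible to either endpoint of his calls.
assert derive_private_key("bob@example.org") == bob_key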


New EU Innovative Training Network project “Privacy & Us”

Last week, “Privacy & Us” — an Innovative Training Network (ITN) project funded by the EU’s Marie Skłodowska-Curie actions — held its kick-off meeting in Munich. Hosted by Uniscon, one of the project partners, at the modern Wissenschaftszentrum campus, principal investigators from seven different countries set out the plan for the next 48 months.

Privacy & Us really stands for “Privacy and Usability”. The project aims to conduct privacy research and, over the next 3 years, train thirteen Early Stage Researchers (ESRs) — i.e., PhD students — to reason about, design, and develop innovative solutions to privacy research challenges, not only from a technical point of view but also from the “human side”.

The project involves nine “beneficiaries”: Karlstads Universitet (Sweden), Goethe Universität Frankfurt (Germany), Tel Aviv University (Israel), Unabhängiges Landeszentrum für Datenschutz (Germany), Uniscon (Germany), University College London (UK), USECON (Austria), VASCO Innovation Center (UK), and Wirtschaftsuniversität Wien (Austria), as well as seven partner organizations: the Austrian Data Protection Authority (Austria), Preslmayr Rechtsanwälte OG (Austria), Friedrich-Alexander University Erlangen (Germany), University of Bonn (Germany), the Bavarian Data Protection Authority (Germany), EveryWare Technologies (Italy), and Sentor MSS AB (Sweden).

The people behind the Privacy & Us project at the kick-off meeting in Munich, December 2015

The Innovative Training Networks are interdisciplinary and multidisciplinary in nature and promote, by design, a collaborative approach to research training. Funding is extremely competitive, with an acceptance rate as low as 6%, and quite generous for the ESRs, who often enjoy higher-than-usual salaries (exact numbers depend on the hosting country), plus a 600 EUR/month mobility allowance and a 500 EUR/month family allowance.

The students will start in August 2016 and will be trained to face both current and future challenges in the area of privacy and usability, spending a minimum of six months in secondment to another partner organization, and participating in several training and development activities.

Three studentships will be hosted at UCL, under the supervision of Dr Emiliano De Cristofaro, Prof. Angela Sasse, Prof. Ann Blandford, and Dr Steven Murdoch. Specifically, one project will investigate how to securely and efficiently store genomic data, design and implement privacy-preserving genomic testing, and support user-centered design of secure personal genomic applications. The second project will aim to better understand and support individuals’ decision-making around healthcare data disclosure, weighing up the personal and societal costs and benefits of disclosure. The third (with the VASCO Innovation Centre) will explore techniques for privacy-preserving authentication, extending these to develop and evaluate innovative solutions for secure and usable authentication that respect user privacy.
