International Comparison of Bank Fraud Reimbursement: Customer Perceptions and Contractual Terms

Terms and Conditions (T&C) are long, convoluted, and very rarely actually read by customers. Yet when customers are subject to fraud, the content of the T&Cs, along with national regulations, matters. The ability to revoke fraudulent payments and reimburse victims of fraud is one of the main selling points of traditional payment systems, but to be reimbursed a fraud victim may need to demonstrate that they have followed the security practices set out in their contract with the bank.

Security advice in banking terms and conditions varies greatly across the world. Our study’s scope included Europe (Cyprus, Denmark, Germany, Greece, Italy, Malta, and the United Kingdom), the United States, Africa (Algeria, Kenya, Nigeria, and South Africa), the Middle East (Bahrain, Egypt, Iraq, Jordan, Kuwait, Lebanon, Oman, Palestine, Qatar, Saudi Arabia, UAE and Yemen), and East Asia (Singapore). Out of the 30 banks’ terms and conditions studied, 26 give more or less specific advice on how you may store your PIN. The advice ranges from “Never writing the Customer’s password or security details down in a way that someone else could easily understand” (Arab Banking Corp, Algeria) and “If the Customer makes a written record of any PIN Code or security procedure, the Customer must make reasonable effort to disguise it and must not keep it with the card for which it is to be used” (National Bank of Kenya) to “any record of the PIN is kept separate from the card and in a safe place” (Nedbank, South Africa).

Half of the T&Cs studied give advice on choosing and changing one’s PIN. Some banks ask customers to choose a new PIN immediately upon receiving one from the bank, while others don’t include any provision for customers to change their PIN at all. Some banks give specific advice on how to choose a PIN:

When selecting a substitute ATM-PIN, the Customer shall refrain from selecting any series of consecutive or same or similar numbers or any series of numbers which may easily be ascertainable or identifiable with the Customer…

OCBC, Singapore

Only 5 banks give specific advice about whether you are allowed to re-use your PIN on other payment cards or elsewhere. There is also disagreement about what to do with the PIN advice slip, with 7 banks asking the customer to destroy it.

Some banks also include advice on Internet security. In the UK, HSBC for example demands that customers

always access Internet banking by typing the address into the web browser and use antivirus, antispyware and a personal firewall. If accessing Internet banking from a computer connected to a LAN or a public Internet access device or access point, they must first ensure that nobody else can observe, copy or access their account. They cannot use any software, such as browsers or password managers, to record passwords or other security details, apart from a service provided by the bank. Finally, all security measures recommended by the manufacturer of the device being used to access Internet banking must be followed, such as using a PIN to access a mobile device.

HSBC, UK

Over half of banks tell customers to use firewalls and anti-virus software. Some even recommend specific commercial software, or tell customers how to find some:

It is also possible to obtain free anti-virus protection. A search for ‘free anti-virus’ on Google will provide a list of the most popular.

Commercial International Bank, Egypt

In the second part of our paper, we investigate customers’ perception of banking T&Cs in three countries: Germany, the United States and the United Kingdom. We present the participants with two real-life scenarios where individuals are subject to fraud, and ask them to decide on the outcome. We then present the participants with sections of T&Cs representative of their country and ask them to re-evaluate the outcome of the two scenarios.

Question                            DE      UK      US
Scenario 1: Card Loss               41.5%   81.5%   76.8%
Scenario 1: Card Loss after T&Cs    70.7%   66.7%   96.4%
Scenario 2: Phishing                31.7%   37.0%   35.7%
Scenario 2: Phishing after T&Cs     43.9%   46.3%   42.9%

The table above lists the percentage of participants that say that the money should be returned for each of the scenarios. We find that in all but one case, the participants are more likely to have the protagonist reimbursed after reading the terms and conditions. This is noteworthy – our participants are generally reassured by what they read in the T&Cs.

Further, we assess the participants’ comprehension of the T&Cs. Only 35% of participants fully understand the sections, but the regional variations are large: 45% of participants in the US fully understand the T&Cs, but only 22% do so in Germany. This may well be related to the differences in consumer protection laws between the countries: in the US, Federal regulations give consumers much stronger protections. In Germany and the UK (and indeed, throughout Europe under the EU’s Payment Services Directive), whether a victim of fraud is reimbursed depends on whether they have been grossly negligent – a term that is not clearly defined and confused our participants throughout.

 

International Comparison of Bank Fraud Reimbursement: Customer Perceptions and Contractual Terms by Ingolf Becker, Alice Hutchings, Ruba Abu-Salma, Ross Anderson, Nicholas Bohm, Steven J. Murdoch, M. Angela Sasse and Gianluca Stringhini will be presented at the Workshop on the Economics of Information Security (WEIS), Berkeley, CA USA, 13–14 June 2016.

Come work with us!

I’m very pleased to announce that — along with George Danezis and Tomaso Aste, head of our Financial Computing group — I’ve been awarded a grant to continue our work on distributed ledgers (aka “blockchain-like things”) for the next three years.

Our group has already done a lot of research in this space, including George’s and my recent paper on centrally banked cryptocurrencies (at NDSS 2016) and Jens’ paper (along with Markulf Kohlweiss, a frequent UCL collaborator) on efficient ring signatures and applications to Zerocoin-style cryptocurrencies (at Eurocrypt 2015).  It’s great to have this opportunity to further investigate the challenges in this space and develop our vision for the future of these technologies, so big thanks to the EPSRC!

Anyway, the point of this post is to advertise, as part of this grant, three positions for postdoctoral researchers.  We are also seeking collaboration with any industrial partners investigating the potential usage of distributed ledgers, and in particular ones looking at the application of these ledgers across the following settings (or with a whole new setting in mind!):

  • Identity management. How can identities be stored, shared, and issued in a way that preserves privacy, prevents theft and fraud, and allows for informal forms of identity in places where no formal ones exist?
  • Supply chain transparency. How can supply chain information be stored in a way that proves integrity, preserves the privacy of individual actors, and can be presented to the end customer in a productive way?
  • Financial settlement. How can banking information be stored in a way that allows banks to easily perform gross settlement, reduces the burden on a central bank, and enables auditability of the proper functioning of the system?
  • Administration of benefits. How can benefits be administered to and used by disadvantaged populations in a way that preserves privacy, provides useful visibility into their spending, and protects against potential abuses of the system?

We expect the postdoctoral researchers to work with us and with each other on the many exciting problems in this space, which are spread across cryptography, computer and network security, behavioural economics, distributed systems, usable security, human-computer interaction, and software engineering (just to name a few!).  I encourage anyone interested to reach out to me (Sarah) to discuss this further, whether or not they’ve already done research on the particular topic of distributed ledgers.

That’s all for now, but please get in touch with me if you have any questions, and in the years to come I hope to invite many people to come work with us in London and to announce the various outcomes of this exciting project!

Biometrics for payments

HSBC and First Direct recently announced that they are introducing fingerprint and voice recognition authentication for customers of online and telephone banking. In my own research, I first found nearly 20 years ago that people who have a multitude of passwords and PINs cannot manage them as security experts want them to. As the number of digital devices and services we use has increased rapidly, managing dozens of login details has become a headache for most people. We recently reported that most bank customers juggle multiple PINs, and are unable to follow the rules that banks set in their contracts. Our research also found that many people dislike the 2-factor token solutions that are currently used by many UK banks.

Passwords as most people use them today are not particularly secure. Attackers can easily attempt to collect information on individuals, using leaks of password files not properly protected by some websites, “phishing” scams or malware planted on people’s computers. Reusing a banking password on other websites – something that many of us do because we cannot remember dozens of different passwords – is also a significant security risk.

The introduction of fingerprint recognition on smartphones – such as the iPhone – has delighted many users fed up with entering their PINs dozens of times a day. So the announcement that HSBC and other banks will be able to use the fingerprint sensor on their smartphones for banking means that millions of consumers will finally be able to end their battle with passwords and PINs and use biometrics instead. Other services people access from their smartphones are likely to follow suit. And given the negative impact that cumbersome authentication via passwords and PINs has on staff productivity and morale in many organisations, we can expect to see biometrics deployed in work contexts, too.

But while biometrics – unlike passwords – do not require mental gymnastics from users, they come with different usability challenges. Matching the biometric to the modality of interaction – e.g. voice recognition for phone-based interactions – makes authentication an easy task, but it works considerably better in quiet environments than in noisy ones, such as train stations or places with many people talking in the background. As many smartphone users have learnt, fingerprint sensors have a hard time recognising cold and wet fingers. And – as we report in a paper presented at IEEE Identity, Security and Behavior Analysis last week – privacy concerns mean some users ‘don’t like putting their face on the Internet’. Biometrics can’t come soon enough for most users, but there is still a lot of design and testing work to be done to make biometrics work for different interaction, physical and social contexts.

“Do you see what I see?” ask Tor users, as a large number of websites reject them but accept non-Tor users

If you use an anonymity network such as Tor on a regular basis, you are probably familiar with various annoyances in your web browsing experience, ranging from pages saying “Access denied” to having to solve CAPTCHAs before continuing. Interestingly, these hurdles disappear if the same website is accessed without Tor. The growing trend of websites extending this kind of “differential treatment” to anonymous users undermines Tor’s overall utility, and adds a new dimension to the traditional threats to Tor (attacks on user privacy, or governments blocking access to Tor). There is plenty of anecdotal evidence about Tor users experiencing difficulties in browsing the web, for example the user-reported catalog of services blocking Tor. However, we don’t have sufficient detail about the problem to answer deeper questions like: how prevalent is differential treatment of Tor on the web; are there any centralized players with Tor-unfriendly policies that have a magnified effect on the browsing experience of Tor users; can we identify patterns in where these Tor-unfriendly websites are hosted (or located), and so forth.

Today we present our paper on this topic: “Do You See What I See? Differential Treatment of Anonymous Users” at the Network and Distributed System Security Symposium (NDSS). Together with researchers from the University of Cambridge, University College London, University of California, Berkeley and the International Computer Science Institute (Berkeley), we conducted comprehensive network measurements to shed light on websites that block Tor. At the network layer, we scanned the entire IPv4 address space on port 80 from Tor exit nodes. At the application layer, we fetched the homepage of the 1,000 most popular websites (according to Alexa) from all Tor exit nodes. We compared these measurements with a baseline from non-Tor control measurements, and uncovered significant evidence of Tor blocking. We estimate that at least 1.3 million IP addresses that would otherwise allow a TCP handshake on port 80 block the handshake if it originates from a Tor exit node. We also show that at least 3.67% of the most popular 1,000 websites block Tor users at the application layer.
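The core of the comparison can be sketched in a few lines. This is a minimal illustration of the decision logic, not the paper’s actual measurement pipeline: the status codes treated as block signals and the function name are our assumptions.

```python
from typing import Optional

# A site shows "differential treatment" if the non-Tor control fetch
# succeeds while the same fetch through a Tor exit either fails at the
# TCP layer (no response at all) or returns a block page.
BLOCK_SIGNALS = {403, 429}  # e.g. "Access denied" pages and CAPTCHA rate limits

def differential_treatment(control_status: int,
                           tor_status: Optional[int]) -> bool:
    control_ok = control_status == 200
    tor_blocked = tor_status is None or tor_status in BLOCK_SIGNALS
    return control_ok and tor_blocked

assert differential_treatment(200, 403)      # blocked at the application layer
assert differential_treatment(200, None)     # TCP handshake dropped
assert not differential_treatment(200, 200)  # treated equally
```

In practice a measurement pipeline must also filter out sites that are flaky for everyone, which is why the paper compares against repeated non-Tor control measurements rather than a single fetch.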


Are Payment Card Contracts Unfair?

While US bank customers are almost completely protected against fraudulent transactions, in Europe banks are entitled to refuse to reimburse victims of fraud under certain circumstances. The EU Payment Services Directive (PSD) is supposed to protect customers but if the bank can show that the customer has been “grossly negligent” in following the terms and conditions associated with their account then the PSD permits the bank to pass the cost of any fraud on to the customer. The bank doesn’t have to show how the fraud happened, just that the most likely explanation for the fraud is that the customer failed to follow one of the rules set out by the bank on how to protect the account. To be certain of obtaining a refund, a customer must be able to show that he or she complied with every security-related clause of the terms and conditions, or show that the fraud was a result of a flaw in the bank’s security.

The bank terms and conditions, and how customers comply with them, are therefore of critical importance for consumer protection. We set out to answer the question: are these terms and conditions fair, taking into account how customers use their banking facilities? We focussed on ATM payments and in particular how customers manage PINs because ATM fraud losses are paid for by the banks and not retailers, so there is more incentive for the bank to pass losses on to the customer. In our paper – “Are Payment Card Contracts Unfair?” – published at Financial Cryptography 2016 we show that customers have too many PINs to remember them unaided and therefore it is unrealistic to expect customers to comply with all the rules banks set: to choose unguessable PINs, not write them down, and not use them elsewhere (even with different banks). We find that, as a result of these unrealistic expectations, customers do indeed make use of coping mechanisms which reduce security and violate terms and conditions, which puts them in a weak position should they be the victim of fraud.

We surveyed 241 UK bank customers and found that 19% of customers have four or more PINs and 48% of PINs are used at most once a month. As a result of interference (one memory being confused with another) and forgetting over time (if a memory is not exercised frequently it will be lost) it is infeasible for typical customers to remember all their bank PINs unaided. It is therefore inevitable that customers forget PINs (a quarter of our participants had forgotten a 4-digit PIN at least once) and take steps to help them recall PINs. Of our participants, 33% recorded their PIN (most commonly in a mobile phone, notebook or diary) and 23% re-used their PIN elsewhere (most commonly to unlock their mobile phone). Both of these coping mechanisms would leave customers at risk of being found liable for fraud.

Customers also use the same PIN on several cards to reduce the burden of remembering PINs – 16% of our participants stated they used this technique, with the same PIN being used on up to 9 cards. Because each card allows the criminal 6 guesses at a PIN (3 on the card itself, and 3 at an ATM) this gives criminals an excellent opportunity to guess PINs and again leaves the customer responsible for the losses. Such attacks are made easier by the fact that customers can change their PIN to one which is easier to remember, but also probably easier for criminals to guess (13% of our participants used a mnemonic, most commonly deriving the PIN from a specific date). Bonneau et al. studied in more detail exactly how bank customers select PINs.
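The arithmetic behind this risk can be made concrete. The following back-of-the-envelope sketch (our illustration, not a calculation from the paper) assumes uniformly random PINs, which is the best case for the customer – real PIN choices are heavily skewed, as Bonneau et al. showed, so actual guessing odds are worse.

```python
# Chance that a criminal hits a 4-digit PIN, given 6 attempts per card
# (3 on the card itself, 3 at an ATM) and the same PIN reused across
# several stolen cards. Assumes a uniformly random PIN; real-world PIN
# distributions are skewed, so this is a lower bound on the risk.
def guess_probability(cards_sharing_pin: int, attempts_per_card: int = 6,
                      pin_space: int = 10_000) -> float:
    return min(1.0, cards_sharing_pin * attempts_per_card / pin_space)

single_card = guess_probability(1)  # 6 guesses out of 10,000 PINs
nine_cards = guess_probability(9)   # 54 guesses: nine times the exposure
assert abs(nine_cards - 9 * single_card) < 1e-12
```

Even in this best case, reusing one PIN across nine cards multiplies the criminal’s chances ninefold compared with a single card.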

Finally we found that PINs are regularly shared with other people, most commonly with a spouse or partner (32% of our participants). Again this violates bank terms and conditions and so puts customers at risk of being held liable for fraud.

Holding customers liable for not being able to follow unrealistic, vague and contradictory advice is grossly unfair to fraud victims. The Payment Services Directive is being revised, and in our submission to the consultation by the European Banking Authority we ask that banks only be permitted to pass fraud losses on to customers if they use authentication mechanisms which are feasible to use without undue effort, given the context of how people actually use banking facilities in normal life. Alternatively, regulators could adopt the tried and tested US model of strong consumer protection, and allow banks to manage risks through fraud detection. The increased trust from this approach might increase transaction volumes and profit for the industry overall.

 

“Are Payment Card Contracts Unfair?” by Steven J. Murdoch, Ingolf Becker, Ruba Abu-Salma, Ross Anderson, Nicholas Bohm, Alice Hutchings, M. Angela Sasse, and Gianluca Stringhini will be presented at Financial Cryptography and Data Security, Barbados, 22–26 February 2016.

Insecure by design: protocols for encrypted phone calls

The MIKEY-SAKKE protocol is being promoted by the UK government as a better way to secure phone calls. The reality is that MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance, through the introduction of a backdoor based on mandatory key-escrow. This weakness has implications which go further than just the security of phone calls.

The current state of security for phone calls leaves a lot to be desired. Land-line calls are almost entirely unencrypted, and cellphone calls are also unencrypted except for the radio link between the handset and the phone network. While the latest cryptography standards for cellphones (3G and 4G) are reasonably strong, it is possible to force a phone to fall back to older standards with easy-to-break cryptography, if any. The vast majority of phones will not reveal to their user whether such an attack is under way.

The only reason that eavesdropping on land-line calls is not commonplace is that getting access to the closed phone networks is not as easy as getting access to the more open Internet, and cellphone cryptography designers relied on the equipment necessary to intercept the radio link being affordable only by well-funded government intelligence agencies, not by criminals or those engaged in corporate espionage. That might have been true in the past, but it is certainly no longer the case, with the necessary equipment now available for $1,500. Governments, companies and individuals are increasingly looking for better security.

A second driver for better phone call encryption is the convergence of Internet and phone networks. The LTE (Long-Term Evolution) 4G cellphone standard – under development by the 3rd Generation Partnership Project (3GPP) – carries voice calls over IP packets, and desktop phones in companies are increasingly carrying voice over IP (VoIP) too. Because voice calls may travel over the Internet, whatever security was offered by the closed phone networks is gone and so other security mechanisms are needed.

Like Internet data encryption, voice encryption can broadly be categorised as either link encryption, where each intermediary may encrypt data before passing it onto the next, or end-to-end encryption, where communications are encrypted such that only the legitimate end-points can have access to the unencrypted communication. End-to-end encryption is preferable for security because it avoids intermediaries being able to eavesdrop on communications and gives the end-points assurance that communications will indeed be encrypted all the way to their other communication partner.
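The difference between the two models can be shown in a toy sketch. This is purely illustrative – the XOR “cipher” and hop keys below stand in for real cryptography, and nothing here reflects any actual protocol:

```python
# Toy contrast between link and end-to-end encryption. XOR with a single
# byte is NOT real encryption; it just makes the key relationships visible.
def enc(key: int, data: bytes) -> bytes:
    return bytes(b ^ key for b in data)

dec = enc  # XOR is its own inverse

msg = b"hello"

# Link encryption: the relay holds the hop keys, so it decrypts the
# message, sees the plaintext, then re-encrypts for the next hop.
hop1 = enc(0x21, msg)
at_relay = dec(0x21, hop1)      # the intermediary has full access here
hop2 = enc(0x42, at_relay)
assert dec(0x42, hop2) == msg

# End-to-end encryption: only the endpoints share the key; a relay
# merely forwards ciphertext it cannot read.
e2e = enc(0x7A, msg)
assert dec(0x7A, e2e) == msg
```

The `at_relay` line is the crux: under link encryption every intermediary is a point where plaintext exists, and therefore a point worth attacking.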

Current cellphone encryption standards are link encryption: the phone encrypts calls between it and the phone network using cryptographic keys stored on the Subscriber Identity Module (SIM). Within the phone network, encryption may also be present but the network provider still has access to unencrypted data, so even ignoring the vulnerability to fall-back attacks on the radio link, the network providers and their suppliers are weak points that are tempting for attackers to compromise. Recent examples of such attacks include the compromise of the phone networks of Vodafone in Greece (2004) and Belgacom in Belgium (2012), and the SIM card supplier Gemalto in France (2010). The identity of the Vodafone Greece hacker remains unknown (though the NSA is suspected) but the attacks against Belgacom and Gemalto were carried out by the UK signals intelligence agency – GCHQ – and only publicly revealed from the Snowden leaks, so it is quite possible there are other attacks which remain hidden.

Email is typically only secured by link encryption, if at all, with HTTPS encrypting access to most webmail and Transport Layer Security (TLS) sometimes encrypting other communication protocols that carry email (SMTP, IMAP and POP). Again, the fact that intermediaries have access to plaintext creates a vulnerability, as demonstrated by the 2009 hack of Google’s Gmail likely originating from China. End-to-end email encryption is possible using the OpenPGP or S/MIME protocols but their use is not common, primarily due to their poor usability, which in turn is at least partially a result of having to stay compatible with older insecure email standards.

In contrast, instant messaging applications had more opportunity to start with a clean slate (because there is no expectation of compatibility among different networks) and so this is where much innovation in terms of end-to-end security has taken place. Secure voice communication, however, has had less attention than instant messaging, so in the remainder of the article we shall examine what should be expected of a secure voice communication system, and in particular see how one of the latest and upcoming protocols, MIKEY-SAKKE, which comes with UK government backing, meets these criteria.

MIKEY-SAKKE and Secure Chorus

MIKEY-SAKKE is the security protocol behind the Secure Chorus voice (and also video) encryption standard, commissioned and designed by GCHQ through their information security arm, CESG. GCHQ have announced that they will only certify voice encryption products through their Commercial Product Assurance (CPA) security evaluation scheme if the product implements MIKEY-SAKKE and Secure Chorus. As a result, MIKEY-SAKKE has a monopoly over the vast majority of classified UK government voice communication, and so companies developing secure voice communication systems must implement it in order to gain access to this market. GCHQ can also set requirements for what products are used in the public sector, as well as by companies operating critical national infrastructure.

UK government standards are also influential in guiding purchase decisions outside of government, and we are already seeing MIKEY-SAKKE marketed commercially as “government-grade security” and capitalising on its approval for use in the UK government. For this reason, and also because GCHQ have provided implementers a free open source library to make it easier and cheaper to deploy Secure Chorus, we can expect wide use of MIKEY-SAKKE in industry and possibly among the public. It is therefore important to consider whether MIKEY-SAKKE is appropriate for wide-scale use. For the reasons outlined in the remainder of this article, the answer is no – MIKEY-SAKKE is designed to offer minimal security while allowing undetectable mass surveillance through key-escrow, not to provide effective security.
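The escrow problem stems from MIKEY-SAKKE being identity-based: every user’s private key is derived by the network operator’s key management server from a master secret and the user’s identity. The toy sketch below uses HMAC key derivation, not the actual SAKKE pairing-based scheme, purely to make the escrow property visible; the identifiers are hypothetical.

```python
import hashlib
import hmac

# Held by the operator's key management server. In an identity-based
# scheme, this one secret determines every user's private key.
MASTER = b"network-operator-master-secret"

def user_key(identity: str) -> bytes:
    # A user's key is a deterministic function of the master secret and
    # their identity (e.g. a phone number), so the operator can re-derive
    # it at any time -- past, present, or future -- without the user knowing.
    return hmac.new(MASTER, identity.encode(), hashlib.sha256).digest()

alice = user_key("tel:+441234567890")
# The escrow property: anyone holding MASTER reproduces Alice's key exactly,
# and can therefore decrypt all of her calls, undetectably.
assert user_key("tel:+441234567890") == alice
```

This determinism is convenient for key management, but it means a single compromise (or compelled disclosure) of the master secret exposes every call ever made on the system.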


New EU Innovative Training Network project “Privacy & Us”

Last week, “Privacy & Us” — an Innovative Training Network (ITN) project funded by the EU’s Marie Skłodowska-Curie actions — held its kick-off meeting in Munich. Hosted by Uniscon, one of the project partners, at the modern Wissenschaftszentrum campus, principal investigators from seven different countries set out the plan for the next 48 months.

Privacy & Us really stands for “Privacy and Usability” and aims to conduct privacy research and, over the next 3 years, train thirteen Early Stage Researchers (ESRs) — i.e., PhD students — to reason about, design, and develop innovative solutions to privacy research challenges, not only from a technical point of view but also from the “human side”.

The project involves nine “beneficiaries”: Karlstads Universitet (Sweden), Goethe Universitaet Frankfurt (Germany), Tel Aviv University (Israel), Unabhängiges Landeszentrum für Datenschutz (Germany), Uniscon (Germany), University College London (UK), USECON (Austria), VASCO Innovation Center (UK), and Wirtschaftsuniversität Wien (Austria), as well as seven partner organizations: the Austrian Data Protection Authority (Austria), Preslmayr Rechtsanwälte OG (Austria), Friedrich-Alexander University Erlangen (Germany), University of Bonn (Germany), the Bavarian Data Protection Authority (Germany), EveryWare Technologies (Italy), and Sentor MSS AB (Sweden).

The people behind Privacy & Us project at the kick-off meeting in Munich, December 2015

The Innovative Training Networks are interdisciplinary and multidisciplinary in nature and promote, by design, a collaborative approach to research training. Funding is extremely competitive, with an acceptance rate as low as 6%, and quite generous for the ESRs, who often enjoy higher-than-usual salaries (exact numbers depend on the hosting country), plus a 600 EUR/month mobility allowance and a 500 EUR/month family allowance.

The students will start in August 2016 and will be trained to face both current and future challenges in the area of privacy and usability, spending a minimum of six months in secondment to another partner organization, and participating in several training and development activities.

Three studentships will be hosted at UCL, under the supervision of Dr Emiliano De Cristofaro, Prof. Angela Sasse, Prof. Ann Blandford, and Dr Steven Murdoch. Specifically, one project will investigate how to securely and efficiently store genomic data, design and implement privacy-preserving genomic testing, and support user-centred design of secure personal genomic applications. The second project will aim to better understand and support individuals’ decision-making around healthcare data disclosure, weighing up the personal and societal costs and benefits of disclosure, and the third (with the VASCO Innovation Centre) will explore techniques for privacy-preserving authentication, extending these to develop and evaluate innovative solutions for secure and usable authentication that respect user privacy.


Forced authorisation chip and PIN scam hitting high-end retailers

Chip and PIN was designed to prevent fraud, but it also created a new opportunity for criminals that is taking retailers by surprise. Known as “forced authorisation”, committing the fraud requires no special equipment and when it works, it works big: in one transaction a jeweller’s store lost £20,500. This type of fraud is already a problem in the UK, and now that US retailers have made it through the first Black Friday since the Chip and PIN deadline, criminals there will be looking into what new fraud techniques are available.

The fraud works when the retailer has a one-piece Chip and PIN terminal that’s passed between the customer and retailer during the course of the transaction. This type of terminal is common, particularly in smaller shops and restaurants. They’re a cheaper option compared to terminals with a separate PIN pad (at least until a fraud happens).

The way forced authorisation fraud works is that the retailer sets up the terminal for a transaction by inserting the customer’s card and entering the amount, then hands the terminal over to the customer so they can type in the PIN. But the criminal has used a stolen or counterfeit card, and due to the high value of the transaction the terminal performs a “referral” — asking the retailer to call the bank to perform additional checks such as the customer answering a security question. If the security checks pass, the bank will give the retailer an authorisation code to enter into the terminal.

The problem is that when the terminal asks for these security checks, it’s still in the hands of the criminal, and it’s the criminal who follows the steps that the retailer should have taken. Since there’s no phone conversation with the bank, the criminal doesn’t know the correct authorisation code. But what surprises retailers is that the criminal can type in anything at this stage and the transaction will go through. The criminal might also be able to bypass other security features; for example, they could override the checking of the PIN by following the steps the retailer would take if the customer had forgotten the PIN.

By the time the terminal is passed back to the retailer, it looks like the transaction was completed successfully. The receipt will differ only very subtly from that of a normal transaction, if at all. The criminal walks off with the goods and it’s only at the end of the day that the authorisation code is checked by the bank. By that time, the criminal is long gone. Because some of the security checks the bank asked for weren’t completed, the retailer doesn’t get the money.
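The flaw described above boils down to a check performed at the wrong time. The sketch below is our illustration of that timing gap, not real EMV terminal code: the six-character alphanumeric code format and the function names are assumptions.

```python
# Toy model of forced authorisation: the terminal only checks that an
# authorisation code LOOKS plausible, deferring actual verification to
# end-of-day settlement with the bank.
def terminal_accepts(code: str) -> bool:
    return len(code) == 6 and code.isalnum()  # format check only

def settlement_accepts(entered: str, issued_by_bank: str) -> bool:
    return entered == issued_by_bank  # only checked hours later

# The criminal types anything with the right shape; the sale appears
# to complete and the goods walk out of the shop.
assert terminal_accepts("ABC123")
# At settlement the code doesn't match what the bank would have issued,
# so the transaction is rejected and the retailer bears the loss.
assert not settlement_accepts("ABC123", "Z97Q41")
```

Because the terminal cannot verify the code itself, the only real defence is procedural: the retailer, not the customer, must hold the terminal during a referral.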
