Confirmation of Payee is coming, but will it protect bank customers from fraud?

The Payment Systems Regulator (PSR) has just announced that the UK’s six largest banks must check whether the name of the recipient of a transfer matches the name the sender intended. This new feature should close a security loophole in online payments: contrary to customers’ expectations, and unlike cheques, the name of the recipient of a transfer is currently ignored. This improved security should make some fraud more difficult, but banks must be prevented from exploiting the change to unfairly shift liability for the remaining crime onto the victims.

The PSR’s target is for checks to be fully implemented by March 2020, somewhat later than its initial promise to Parliament of September 2018 and its subsequent target of July 2019. The new measure, known as Confirmation of Payee, also only covers the six largest banking groups, but these account for around 90% of transfers. Its goal is to defend against criminals who trick victims into transferring funds under the false pretence that the money is going to the victim’s new account, when it is really going to the criminal. The losses from such fraud, known as push payment scams, are often life-changing, resulting in misery for the victims.

Checks on the recipient name will make this particular scam harder, so while they are unlikely to prevent all types of push payment scams, they will hopefully force criminals to adopt strategies that are easier to prevent. The risk that consumer representatives and regulators will need to watch out for is that these new security measures could result in victims being unfairly held liable. This scenario is, unfortunately, likely because the voluntary consumer protection code for push payment scams excuses the bank from liability if it showed the customer a Confirmation of Payee warning.

Warning fatigue and misaligned incentives

In my response to the consultation over this consumer protection code, I raised the issue of “warning fatigue” – that customers will be shown many irrelevant warnings while they do online banking, and this reduces the likelihood that they will notice important ones. Even Confirmation of Payee warnings will frequently be wrong, such as when the recipient’s bank account is under a different name from the one the sender expects. If the two names are very dissimilar, the sender won’t be given more details, but if the name entered is close to the name in the bank’s records, the sender should be told what the correct one is and asked to compare.
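The PSR has not specified a matching algorithm, and banks’ actual implementations are proprietary. Purely to illustrate the three outcomes described above – a match, a close match where the registered name is shown for comparison, and a dissimilar name where no details are revealed – here is a minimal sketch using Python’s standard difflib. The function name and both thresholds are invented for illustration:

```python
from difflib import SequenceMatcher

def confirmation_of_payee(entered_name: str, account_name: str,
                          match_threshold: float = 0.95,
                          close_threshold: float = 0.7):
    """Illustrative three-way Confirmation of Payee outcome.

    Thresholds and the similarity measure are made up for this
    sketch; real schemes use proprietary matching rules.
    """
    ratio = SequenceMatcher(None, entered_name.lower(),
                            account_name.lower()).ratio()
    if ratio >= match_threshold:
        return ("match", None)
    if ratio >= close_threshold:
        # Close match: show the sender the registered name to compare
        return ("close_match", account_name)
    # Very dissimilar: reveal nothing about the real account name
    return ("no_match", None)
```

With these made-up thresholds, “Jon Smith” entered against an account registered to “John Smith” would trigger the close-match path and show the sender the registered name, while a completely different name would be rejected without revealing anything.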

Continue reading Confirmation of Payee is coming, but will it protect bank customers from fraud?

UCL’s Centre for Doctoral Training in Cybersecurity

It has become increasingly apparent that the world’s cybersecurity challenges will not be resolved by specialists working in isolation.

Indeed, it has become clear that the challenges that arise from the integration of emerging technologies into existing social, commercial, legal and political systems will not be resolved by specialists working in isolation. Rather, these complex problems require the efforts of people who can cross disciplinary boundaries, communicate beyond their own fields, and comprehend the context in which others operate. Computer science, information security, encryption, criminology, psychology, international relations, public policy, philosophy of science, legal studies, and economics combine to form the ecosystem within which cybersecurity problems and solutions are found, but training people to think and work across these boundaries has proven difficult.

UCL is delighted to have been awarded funding by the UK’s Engineering and Physical Sciences Research Council (EPSRC) to establish a Centre for Doctoral Training (CDT) in Cybersecurity that will help to establish a cadre of leaders in security with the breadth of perspective and depth of skills required to handle the complex challenges in security faced by our society. The CDT is led by Prof Madeline Carr (Co-Director; UCL Science, Technology, and Public Policy), Prof Shane Johnson (Co-Director; UCL Security and Crime Science), and Prof David Pym (Director; UCL Programming Principles, Logic, and Verification (PPLV) and Information Security).

The CDT is an exciting collaboration that brings together research teams in three of UCL’s departments – Computer Science, Security and Crime Science, and Science, Technology, Engineering, and Public Policy – in order to increase the capacity of the UK to respond to future information and cybersecurity challenges. Through an interdisciplinary approach, the CDT will train cohorts of highly skilled experts drawn from across the spectrum of the engineering and social sciences, able to become the next generation of UK leaders in industry and government, public policy, and scientific research. The CDT will equip them with a broad understanding of all sub-fields of cybersecurity, as well as specialized knowledge and transferable skills to be able to operate professionally in business, academic, and policy circles.

The CDT will admit candidates with a strong background in STEM (CS, Mathematics, Engineering, Physics) or Social Sciences (Psychology, Sociology, International Relations, Public Policy, Crime Science, Economics, and Management), either recent graduates or mid-career. Each will be trained in research and innovation skills in the multidisciplinary facets of cybersecurity (computing, crime science, management, and public policy) and will then specialise within a discipline, gaining industrial experience through joint industrial projects and internships.

For more information, including directions for applications, please visit the cybersecurity CDT website.

Hiring Research Assistants and PhD students

We’re happy to announce that we have several open positions!

Privacy & machine learning

Emiliano De Cristofaro has at least one post-doc position in privacy and machine learning. The researcher will work with him and others in UCL’s InfoSec group. For a sample of our recent work in the field, please see Emiliano’s publications on this topic.

Please email jobs@emilianodc.com with questions or apply directly before 25 July 2019.

Note that we would be keen to hear both from PhD students looking for part-time research work and from people looking for longer-term full-time post-doctoral positions.

Web measurements

Multiple positions are available in the context of a project based at the Alan Turing Institute on cyberbullying and cyberhate, led by Emiliano De Cristofaro and Gareth Tyson. The project will primarily focus on measurements research, i.e., gathering and analysing various types of social datasets.

For a sample of our recent work in this space, please see Emiliano’s publications on this topic.

Again, we would be keen to hear both from PhD students looking for part-time research work and from people looking for longer-term full-time post-doctoral positions.

Please email edecristofaro@turing.ac.uk if you have questions.

PhD positions with Philipp Jovanovic

Members of the InfoSec group are always looking for talented PhD students to join their team. If you would like to investigate opportunities, please do check their website for details of their research interests and contact instructions. We are particularly happy to announce that Philipp Jovanovic will join our group as an Associate Professor starting in January 2020, and he is inviting applications for PhD students.

Philipp’s research interests broadly cover applied cryptography, privacy, and decentralised systems. His current work focuses on building scalable, privacy-preserving, decentralised protocols (such as ByzCoin, RandHound, OmniLedger, or Calypso). He has also worked on a wide variety of other security-related topics in the past, including design and analysis of symmetric cryptographic primitives, side-channel attacks and countermeasures, and the security analysis of systems deployed in the real world such as the Transport Layer Security (TLS) protocol or the Open Smart Grid Protocol (OSGP).

For an overview of his work, please visit Philipp’s website.

If you’re interested in working with Philipp as a PhD student, please email philipp@jovanovic.io.

New CDT in cybersecurity

We have several PhD positions funded through the new Centre for Doctoral Training in Cybersecurity (CDT). Please see the article about the CDT for more details and instructions to apply.

Will dispute resolution be Libra’s Achilles’ heel?

Facebook’s new cryptocurrency, Libra, has the ambitious goal of being the “financial infrastructure that empowers billions of people”. This aspiration will only be achievable if the user experience (UX) of Libra and associated technologies is competitive with existing payment channels. Now, Facebook has an excellent track record of building high-quality websites and mobile applications, but good UX goes further than just having an aesthetically pleasing and fast user interface. We can already see aspects of Libra’s design that will have consequences for the experience of its users making payments.

For example, the basket of assets underlying the Libra currency should ensure that its value is not too volatile in terms of the currencies represented within the reserve, so easing international payments. However, Libra’s value will fluctuate against every other currency, creating a challenge for domestic payments. People won’t be paid their salary in Libra any time soon, nor will rents be denominated in Libra. If the public is expected to hold significant value in Libra, fluctuations in the currency markets could make the difference between someone being able to pay their rent or not – a decidedly unwelcome user experience.

Whether the public will consider the advantages of Libra worth the exposure to the foibles of market fluctuations is an open question, but in this post, I’m mostly going to discuss the consequences of another design decision baked into Libra: that transactions are irrevocable. Once a transaction is accepted by the validator network, the user may proceed “knowing that the transaction can never be changed or reversed”. This is a common design decision within cryptocurrencies because it ensures that companies, governments and regulators should be unable to revoke payments they dislike. When coupled with anonymity or decentralisation, to prevent blacklisted transactions being blocked beforehand, irrevocability creates a censorship-resistant payment system.

Mitigating the cost of irrevocable transactions

Libra isn’t decentralised, nor is it anonymous, so it is unlikely to be particularly resistant to censorship over matters where there is an international consensus. Irrevocability does, however, make fraud easier because once stolen funds are gone, they cannot be reinstated, even if the fraud is identified. Other cryptocurrencies share Libra’s irrevocability (at least in theory), but they are designed for technically sophisticated users, and their risk of theft can be balanced against the potentially substantial gains (and losses) that can be made from volatile cryptocurrencies. While irrevocability is common within cryptocurrencies, it is not within the broader payments industry. Exposing billions of people to the risk of their Libra holdings being stolen, without the potential for recourse, isn’t good UX. I’ve argued that irrevocable transactions protect the interests of financial institutions over those of the public, and are the wrong default for payments. Eventually, public pressure and regulatory intervention forced UK banks to revoke fraudulent transactions, and they now take on the risk of being unable to do so, rather than passing it on to the victims. The same argument applies to Libra, and if fraud becomes common, it will face the same pressures as UK banks did.

Continue reading Will dispute resolution be Libra’s Achilles’ heel?

Thoughts on the Libra blockchain: too centralised, not private, and won’t help the unbanked

Facebook recently announced a new project, Libra, whose mission is to be “a simple global currency and financial infrastructure that empowers billions of people”. The announcement has predictably been met with scepticism by organisations like Privacy International, regulators in the U.S. and Europe, and the media at large. This is wholly justified given the look of the project’s website, which features claims of poverty reduction, job creation, and more generally empowering billions of people, wrapped in a dubious marketing package.

To start off, there is the (at least for now) permissioned aspect of the system. One appealing aspect of cryptocurrencies is their potential for decentralisation and censorship resistance. It wasn’t uncommon to see the story of PayPal freezing WikiLeaks’ account in the first few slides of a cryptocurrency talk, motivating its purpose. Now, PayPal and other well-known providers of payment services are the ones operating nodes in Libra.

There is some valid criticism to be made about the permissioned aspect of a system that describes itself as a public good when other cryptocurrencies are permissionless. Those permissionless systems are, however, essentially centralised in practice, with inefficient, energy-wasting mechanisms like Proof-of-Work requiring large investments from any party wishing to contribute.

There is a roadmap towards decentralisation, but it is vague. Achieving decentralisation, whether at the network or governance level, hasn’t been done even in a priori decentralised cryptocurrencies. In this sense, Libra hasn’t really done worse so far. It already involves more members than there are important Bitcoin or Ethereum miners, for example, and they are also more diverse. However, this is more of a fault in existing cryptocurrencies rather than a quality of Libra.

Continue reading Thoughts on the Libra blockchain: too centralised, not private, and won’t help the unbanked

Digital Exclusion and Fraud – the Dark Side of Payments Authentication

Today, the Which? consumer rights organisation released the results from its study of how people are excluded from financial services as a result of banks changing their rules to mandate that customers use new technology. The research particularly focuses on banks now requiring that customers register a mobile phone number and be able to receive security codes in SMS messages while doing online banking or shopping. Not only does this change result in digital exclusion – customers without mobile phones or good network coverage will struggle to make payments – but as I discuss in this post, it’s also bad for security.

SMS-based security codes are being introduced to help banks meet their September 2019 deadline to comply with the Strong Customer Authentication requirements of the EU Payment Services Directive 2. These rules state that before making a payment from a customer’s account, the bank must independently verify that the customer really intended to make this payment. UK banks almost universally have decided to meet their obligation by sending a security code in an SMS message to the customer’s mobile phone and asking the customer to type this code into their web browser.
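Each bank’s implementation is proprietary, but the general pattern – issuing a short random code bound to a single payment and checking it within a validity window – can be sketched as follows. All names, the code length, and the five-minute validity period are assumptions for illustration; real Strong Customer Authentication systems must additionally “dynamically link” the code to the amount and the payee:

```python
import hmac
import secrets
import time
from hashlib import sha256

def issue_code(payment_id: str, validity_s: int = 300):
    """Generate a 6-digit one-time code tied to one payment.

    A sketch only: the code is sent to the customer's phone, while
    the server keeps a keyed digest rather than the code itself.
    """
    code = f"{secrets.randbelow(10**6):06d}"
    record = {
        "payment_id": payment_id,
        "digest": hmac.new(code.encode(), payment_id.encode(),
                           sha256).hexdigest(),
        "expires": time.time() + validity_s,
    }
    return code, record  # code goes via SMS; record stays server-side

def verify_code(entered: str, record: dict) -> bool:
    """Check the code the customer typed, rejecting expired codes."""
    if time.time() > record["expires"]:
        return False
    expected = hmac.new(entered.encode(), record["payment_id"].encode(),
                        sha256).hexdigest()
    return hmac.compare_digest(expected, record["digest"])
```

The constant-time comparison and the expiry check are the two details most easily got wrong; everything else here is bookkeeping.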

The problem that Which? identified is that some customers don’t have mobile phones, some that do have mobile phones don’t trust their bank with the number, and even those who are willing to share their mobile phone number with the bank might not have network coverage when they need to make a payment. A survey of Which? members found that nearly 1 in 5 said they would struggle to receive the security code they need to perform online banking transactions or online card payments. Remote locations have poorer network coverage than average and it is these areas that are likely to be disproportionately affected by the ongoing bank branch closure programmes.

Outsourcing security

The aspect of this scenario that I’m particularly interested in is why banks chose SMS messages as a security technology in the first place, rather than, say, sending out dedicated authentication devices to their customers or providing a smartphone app. SMS has the advantage that customers don’t need to install an app or suffer the inconvenience of carrying around an extra authentication device. The bank also saves the cost of setting up new infrastructure, other than hooking up its payment systems to the phone network. However, SMS has disadvantages – not only does it exclude customers in areas of poor network coverage, but it also effectively outsources security from the bank to the phone networks.

Continue reading Digital Exclusion and Fraud – the Dark Side of Payments Authentication

Efficient Cryptographic Arguments and Proofs – Or How I Became a Fractional Monetary Unit

In 2008, unfortunate investors found their life savings in Bernie Madoff’s hedge fund swindled away in a $65 billion Ponzi scheme. Imagine yourself back in time, with an opportunity to invest in his fund, which had for years delivered stable returns, pondering Madoff’s assurance that the fund was solvent and doing well. Unfortunately, neither Madoff nor any other hedge fund manager would take kindly to your suggestion of opening their books to demonstrate the veracity of the claim. And even if you somehow got access to all the internal data, it might take an inordinate effort to go through the documents.

Modern day computers share your predicament. When a computer receives the result of a computation from another machine, it can be critical whether the data is correct or not. If the computer had feelings, it would wish for the data to come with evidence of correctness attached. But the sender may not wish to reveal confidential or private information used in the computation. And even if the sender is willing to share everything, the cost of recomputation can be prohibitive.

In 1985, Goldwasser, Micali and Rackoff proposed zero-knowledge proofs as a means to give privacy-preserving evidence. Zero-knowledge proofs are convincing only if the statement they prove is true, e.g. a computation is correct; yet reveal no information except for the veracity of the statement. Their seminal work shows verification is possible without having to sacrifice privacy.

In the following three decades, cryptographers have worked tirelessly at reducing the cost of zero-knowledge proofs. Six years ago, we began the ERC-funded project Efficient Cryptographic Arguments and Proofs, aimed at improving the efficiency of zero-knowledge proofs. In September 2018 the project came to its conclusion and, throwing usual academic modesty aside, we have made remarkable progress: several of our proof systems are provably optimal (up to a constant multiplicative factor).

As described in an earlier post, we improved the efficiency of generalised Sigma-protocols, reducing both the number of rounds in which the prover and verifier interact and the communication, with a proof size around 7 kB even for large and complex statements. Our proof techniques have been optimised and implemented in the Bulletproof system, which is now seeing widespread adoption.
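Our actual proof systems are far more sophisticated, but the flavour of a Sigma-protocol can be seen in the classic Schnorr proof of knowledge of a discrete logarithm, here made non-interactive via the Fiat–Shamir transform. This toy sketch uses a deliberately tiny group, so it offers no real security – it only demonstrates the commit/challenge/response structure:

```python
import hashlib
import secrets

# Toy group parameters (far too small for real security):
# p is a safe prime and g generates the subgroup of prime order q.
p, q, g = 2039, 1019, 4

def prove(x: int):
    """Prove knowledge of x such that h = g^x mod p, without revealing x."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)              # prover's random nonce
    a = pow(g, r, p)                      # commitment
    c = int.from_bytes(hashlib.sha256(f"{g},{h},{a}".encode()).digest(),
                       "big") % q         # challenge via Fiat-Shamir
    z = (r + c * x) % q                   # response
    return h, (a, z)

def verify(h: int, proof) -> bool:
    """Check g^z == a * h^c mod p, recomputing the challenge c."""
    a, z = proof
    c = int.from_bytes(hashlib.sha256(f"{g},{h},{a}".encode()).digest(),
                       "big") % q
    return pow(g, z, p) == (a * pow(h, c, p)) % p
```

The verifier learns nothing about x beyond the truth of the statement, because z is masked by the fresh random nonce r; correctness follows from g^z = g^(r+cx) = a·h^c.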

We also developed highly efficient pairing-based non-interactive zero-knowledge proofs (aka zk-SNARKs). Here the communication cost is even lower in practice, enabling proofs to be just a few hundred bytes regardless of the size of the statement being proved. Their compactness and ease of verification make them useful in privacy-preserving cryptocurrencies and blockchain compression.

Continue reading Efficient Cryptographic Arguments and Proofs – Or How I Became a Fractional Monetary Unit

How Accidental Data Breaches can be Facilitated by Windows 10 and macOS Mojave

Inadequate user interface designs in Windows 10 and macOS Mojave can cause accidental data breaches through inconsistent language, insecure default options, and unclear or incomprehensible information. Users could accidentally leak sensitive personal data. Data controllers in companies might be unknowingly non-compliant with the GDPR’s legal obligations for data erasure.

At the upcoming Annual Privacy Forum 2019 in Rome, I will be presenting the results of a recent study conducted with my colleague Mark Warner, exploring the inadequate design of user interfaces (UI) as a contributing factor in accidental data breaches from USB memory sticks. The paper titled “Fight to be Forgotten: Exploring the Efficacy of Data Erasure in Popular Operating Systems” will be published in the conference proceedings at a later date but the accepted version is available now.

Privacy and security risks from decommissioned memory chips

The process of decommissioning memory chips (e.g. USB sticks, hard drives, and memory cards) can create risks for data protection. Researchers have repeatedly found sensitive data on devices they acquired from second-hand markets. Sometimes this data was from the previous owners, other times from third parties. In some cases, highly sensitive data from vulnerable people were found, e.g. Jones et al. found videos of children at a high school in the UK on a second-hand USB stick.

Data found this way had frequently been deleted but not erased, creating the risk that any tech-savvy future owner could access it using legally available, free-to-download software (e.g., FTK Imager Lite 3.4.3.3). Findings from these studies also indicate that the previous owners intended to erase these files and prevent future access by unauthorised individuals, but failed to do so sufficiently. Moreover, these risks likely extend from the second-hand market to recycled memory chips – a practice encouraged under Directive 2012/19/EU on ‘waste electrical and electronic equipment’.
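To illustrate the difference between deleting and erasing: deletion typically removes only the filesystem’s reference to a file, leaving its contents recoverable, whereas erasure overwrites the contents themselves. A minimal overwrite-before-unlink sketch is below; note that on flash media such as USB sticks, wear-levelling means an in-place overwrite may never reach the physical cells holding old copies of the data, so this is not a reliable erasure method for exactly the devices discussed above:

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place before unlinking it.

    A sketch of erasure rather than mere deletion.  Caveat: on flash
    media (USB sticks, SSDs) wear-levelling can remap writes, leaving
    stale copies of the data in cells this loop never touches.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random overwrite
            f.flush()
            os.fsync(f.fileno())                # push the write to the device
    os.remove(path)                             # finally drop the reference
```

By contrast, a plain `os.remove` (or dragging a file to the Recycle Bin) performs only the last step, which is precisely the gap the forensic tools mentioned above exploit.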

The implications for data security and data protection are substantial. End-users and companies alike could accidentally cause breaches of sensitive personal data of themselves or their customers. The protection of personal data is enshrined in Article 8 of the Charter of Fundamental Rights of the European Union, and the General Data Protection Regulation (GDPR) lays down rules and regulation for the protection of this fundamental right. For example, data processors could find themselves inadvertently in violation of Article 17 GDPR Right to Erasure (‘right to be forgotten’) despite their best intentions if they failed to erase a customer’s personal data – independent of whether that data was breached or not.

Seemingly minor design choices, the potential for major implications

The indication that people might fail to properly erase files from storage, despite their apparent intention to do so, is a strong sign of system failure. We have known for more than twenty years that unintentional failure of users at a task is often caused by the way in which these mechanisms are implemented, and by users’ lack of knowledge. In our case, these mechanisms are – for most users – the UI of Windows and macOS. When investigating these mechanisms, we found seemingly minor design choices that might facilitate unintentional data breaches. A few examples are shown below and are expanded upon in the full publication of our work.

Continue reading How Accidental Data Breaches can be Facilitated by Windows 10 and macOS Mojave

Science “of” or “for” security?

The choice of preposition – science of security versus science for security – marks an important difference in mental orientation. This post grew out of a conversation last year with Roy Maxion, Angela Sasse and David Pym. Clarifying this small preposition will help us set expectations, understand goals, and ultimately give appropriately targeted advice on how to do better security research.

These small words (for vs. of) unpack into some big differences. Science for security seems to mean taking any scientific discipline or results and using that to make decisions about information security. Thus, “for” is agnostic as to whether there is any work within security that looks like science. Like the trend for evidence-based medicine, science for security would advocate for evidence-based security decisions. This view is advocated by RISCS here in the UK and is probably consistent with approaches like the New School of Information Security.

Science for security does not say security is not science. More accurately, it seems not to care: the view is agnostic about whether security is a science. The point seems to be that adapting other sciences for use by security is difficult enough, and that applying the methods of other sciences to security-relevant problems is what matters. There are many examples of this approach, in different flavours. We can see at least three: porting concepts, re-situating approaches, and borrowing methods. We’re adapting the first two from Morgan (2014).

Porting concepts

Economics of infosec is its own discipline (WEIS). The way Anderson (2001) applies economics is to take established principles in economics to shed light on established difficulties in infosec.

Re-situating approaches

This is when some other science understands something, and we generalise from that instance and try to make a concrete application to security. We might argue that program verification takes this approach, re-situating understanding from mathematics and logic. Studies on keystroke dynamics also re-situate the understanding of human psychology and physical forensics.

Borrowing methods

We might study a security phenomenon according to the methods of an established discipline. Usable security largely applies psychology- and sociology-based methods, for example. Of course, there are specific challenges that might arise in studying a new area such as security (Krol et al., 2016), but the approach is science for security because the challenges result in minor tweaks to the method of the home discipline.

Continue reading Science “of” or “for” security?

The Government published its draft domestic abuse bill, but risks ignoring the growing threat of tech abuse

Dr Leonie Tanczer, who leads UCL’s “Gender and IoT” research team, reflects on the release of the draft Domestic Abuse Bill and points out that in its current form, it misses emphasis on emerging forms of technology-facilitated abuse.

On the 21st of January, the UK Government published its long-awaited Domestic Abuse Bill. The 196-page document focuses on a wide range of issues, from providing a first statutory definition of domestic abuse to the recognition of economic abuse as well as controlling and coercive non-physical behaviour. In recent years, abuse facilitated through information and communication technologies (ICT) has been growing. Efforts to mitigate these forms of abuse (e.g. social media abuse or cyberstalking) are already underway, but we expect new forms of “technology-facilitated abuse” (“tech abuse”) to become more commonplace amongst abusive perpetrators.

We are currently seeing an explosion in the number of Internet-connected devices on the market, from gadgets like Amazon’s Alexa and Google’s Home hub, to “smart” home heating, lighting, and security systems as well as wearable devices such as smartwatches. What these products have in common is their networked capability, and many also include features such as remote, video, and voice control as well as GPS location tracking. While these capabilities are intended to make modern life easier, they also create new means to facilitate psychological, physical, sexual, economic, and emotional abuse as well as controlling and manipulating behaviour.

Although so-called “Internet of Things” (IoT) usage is not yet widespread (there were 7.5 billion total connections worldwide in 2017), GSMA expects there to be 25 billion devices globally by 2025. Sadly, we have already started to see examples of these technologies being misused. An investigation last year by the New York Times showed how perpetrators of domestic abuse could use apps on their smartphones to remotely control household appliances like air conditioning or digital locks in order to monitor and frighten their victims. In 2018, we saw a husband convicted of stalking after spying on his estranged wife by hacking into their wall-mounted iPad.

The risk of being a victim of tech abuse falls predominantly on women, and especially migrant women. This is a result of men still being primarily in charge of the purchase and maintenance of technical systems, as well as women and girls being disproportionately affected by domestic abuse.

The absence of ‘tech abuse’ in the draft bill

While the four objectives of the draft Bill (promote awareness, protect and support, transform the justice process, improve performance) are to be welcomed, the absence of sufficient reference to the growing rise of tech abuse is a significant omission and missed opportunity.

Continue reading The Government published its draft domestic abuse bill, but risks ignoring the growing threat of tech abuse