Thinking about fake news – As a security incident?

In Tristan and David’s Philosophy, Politics and Economics of Security and Privacy class, Jono gave a little information about incident response.  As a result, we have been thinking about the recent furor over fake news. There are some big questions circling this topic, and we’re going to try to focus on a part we have some competence in: what an understanding of fake news as a security incident can contribute to the wider debate. Our goal here is mostly to highlight some lessons from security research that should be applicable, so we can help constrain the solution space. Ultimately, any solution will need to engage with wider civil society.

The lessons we will argue for in the following are:

  • Solutions need to support the elector’s primary task. Education to avoid cognitive biases is not a short- or medium-term solution.
  • Focus on aligning the incentives of the media companies and the voters. Reduce the return on investment for the adversary.
  • Any blocking should be strategically useful, and not merely reactionary.

First, we want a more specific term, as well as a less charged one. Fake news includes politically or financially motivated stories presented as factual reports on the world that are fictional in material ways, and usually are intended to stir strong feelings. This definition is hardly complete. Furthermore, similar to the term “post-truth” as discussed by Jasanoff and Simmet, the term “fake news” makes several value judgements we’d like to avoid. “Fake news” carries a strong suggestion that we, the speakers, know what is true and what isn’t, and it also indicates some condescension by the speaker towards anyone who believes an item of fake news. We want to avoid such insults. Instead, let’s say we want to focus on the following hypothetical security policy: democratic elections should be free from foreign interference.

Making this policy operational hangs on the term “interference.” This is hard. Ultimately, the will of an elector in a free and fair election needs to be respected. This makes it particularly challenging to agree on constraints on what information an elector has access to. In practice, no elector is omniscient, so some constraints de facto exist. But weighing in on this issue is outside our competence. Let’s assume for now that public policy will provide an assessment of “interference” eventually. The UK recently announced that a “dedicated national security communications unit” would be charged with “combating disinformation by state actors and others.” In France, Emmanuel Macron plans legislation to fight interference from foreign sources during elections. Various social media platforms have likewise announced attempted fixes, which means they have some functional definition of what “interference” they’re seeking to remove. Unfortunately, “none of the tech giants claim to be ready” for the November 2018 elections in the US.

Interference in elections is a type of information warfare. An appropriate security policy needs to assess the threat environment and the capabilities of the adversaries. In particular, the Russian Federation has been assessed as a highly motivated and well-resourced actor in this space. We should note that Russia, in turn, assesses the intent and capability of the USA similarly. Tools and tactics within information warfare, particularly disinformation campaigns, help define “interference” within our security policy.

In this context, what can the security research community recommend? Well, the main targets of a disinformation campaign are ordinary citizens. They are targetable largely due to inherent cognitive biases in the way humans process and reason about information. In security terms, we could see these biases as vulnerabilities in the system. Classically, we have two options to secure the system: patch the vulnerability, or prevent the adversary from exploiting it by controlling or filtering the attack before it reaches the target.

Patching in this case would mean teaching people to avoid cognitive biases in their day-to-day reasoning. Psychology tells us this is hard. Intelligence analysts train for months or years for this. And the research in usable security has affirmed time and time again that the users are not the enemy. That is, the system must alleviate the burden on the user’s attention and not interfere with their primary task, or else the user will subvert or avoid the protections put in place. Any changes in user culture are slow. This leads us to lesson 1 on preventing disinformation campaigns for election interference: solutions need to support the elector’s primary task. Education to avoid cognitive biases is not a short-term or medium-term solution.

Controlling the attack vectors is more promising, although filtering them is not. A key aspect of any information security policy is aligning the economic incentives of the actors. Economics is one of the main reasons why infosec is hard. It may not be easy to reorganize the incentives in the advertising and news distribution media space. However, as long as organizations profit from more clicks on an article no matter the content, there will be an incentive to drive traffic that is ultimately at cross-purposes with our security goal. Such misaligned incentives often swamp any technical security solutions. And any adversary with an economic incentive to attack usually will. Thus our second lesson: focus on aligning the incentives of the media companies and the voters; reduce the return on investment for the adversary. Exactly how to do these things will require future work.

There are huge issues about human rights and free speech for blocking access to information. However, the technical aspects of blacklisting are worth understanding before even attempting such human-rights debates. Blacklists of internet resources, such as domain names, IP addresses, or web pages, are useful. But they’re not a final solution. Whether blacklists move at the speed of national legislatures or are updated every five minutes, their main impact is to cause the adversary to move around. Blacklists alone are not enough. We would need to look for suspiciously mobile resources (i.e. fast-flux), and eventually whitelist resources. Blacklists such as those implemented by Facebook in response to Congress are helpful. But we should carefully consider how they drive the disinformation campaigns into a place we are better able to counteract them, and be sure we don’t make such campaigns harder to find instead. Lesson 3 is therefore that any blocking should be strategically useful, and not merely reactionary.
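To make the “suspiciously mobile resources” point concrete, here is a minimal sketch (in Python, with a placeholder domain and thresholds of our own choosing, not a production detector) of one crude fast-flux heuristic: resolve a name repeatedly and flag it if the set of IP addresses behind it churns unusually quickly.

```python
# Crude fast-flux heuristic: repeatedly resolve a domain and count how many
# distinct IP addresses it returns over a short observation window.
# The domain, number of rounds, pause and threshold are illustrative
# assumptions only; real detectors also look at TTLs, ASNs and geography.
import socket
import time
from typing import Set


def distinct_ips(domain: str, rounds: int = 5, pause: float = 10.0) -> Set[str]:
    """Resolve `domain` several times and collect the distinct A-record IPs."""
    seen: Set[str] = set()
    for _ in range(rounds):
        try:
            for info in socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP):
                seen.add(info[4][0])
        except socket.gaierror:
            pass  # transient resolution failure; ignore in this sketch
        time.sleep(pause)
    return seen


def looks_fast_flux(domain: str, threshold: int = 10) -> bool:
    """Flag a domain whose IP footprint churns unusually quickly."""
    return len(distinct_ips(domain)) >= threshold


if __name__ == "__main__":
    suspect = "example.com"  # placeholder, not a real suspect
    print(suspect, "suspiciously mobile?", looks_fast_flux(suspect))
```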

We’d be happy for further comments on fake news, disinformation campaigns that interfere with elections, lessons we’ve missed, disagreements about the value of security research to this topic, and other comments you might have! This is a wide open topic, and we’re still sounding it all out.

Liability for push payment fraud pushed onto the victims

This morning, BBC Rip Off Britain focused on push payment fraud, featuring an interview with me (starts at 34:20). The distinction between push and pull payments should be a matter for payment system geeks, and certainly isn’t at the front of customers’ minds when they make a payment. However, there’s a big difference when there’s fraud – for online pull payments (credit and debit card)  the bank will give the victim the money back in many situations; for online push payments (Faster Payment System and Standing Orders) the full liability falls on the party least able to protect themselves – the customer.

The banking industry doesn’t keep good statistics about push payment fraud, but it appears to be increasing, with Which receiving reports from over 650 victims in the first two weeks of November 2016, with losses totalling over £5.5 million. Today’s programme puts a human face to these statistics, by presenting the case of Jane and Steven Caldwell who were defrauded of over £100,000 from their Nationwide and NatWest accounts.

They were called up at the weekend by someone who said he was working for NatWest. To verify that this was the case, Jane used three methods. Firstly, she checked caller-ID to confirm that the number was indeed the bank’s own customer helpline – it was. Secondly, she confirmed that the caller had access to Jane’s transaction history – he did. Thirdly, she called the bank’s customer helpline, and the caller knew this was happening despite the original call being muted.

Convinced by these checks, Jane transferred funds from her own accounts to another in her own name, having been told by the caller that this was necessary to protect against fraud. Unfortunately, the caller was a scammer. Experts featured on the programme suspect that caller-ID was spoofed (quite easy, due to lack of end-to-end security for phone calls), and that malware on Jane’s laptop allowed the scammer to see transaction history on her screen, as well as to listen to and see her call to the genuine customer helpline through the computer’s microphone and webcam. The bank didn’t check that the name Jane gave (her own) matched that of the recipient account, so the scammer had full access to the transferred funds, which he quickly moved to other accounts. Only Nationwide was able to recover any money – £24,000 – leaving Jane and Steven over £75,000 out of pocket.
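The missing check — that the name the payer gives matches the name on the receiving account — is easy to illustrate with simple fuzzy string matching. This is only a sketch of the idea, with made-up names and a made-up threshold; real “Confirmation of Payee” style schemes are considerably more involved.

```python
# Sketch of a payee-name check of the kind the bank did not perform: compare
# the name the payer supplies against the name held on the recipient account
# and warn on anything short of a close match. Threshold and names are
# illustrative only.
from difflib import SequenceMatcher


def normalise(name: str) -> str:
    return " ".join(name.lower().split())


def name_match_score(supplied: str, on_account: str) -> float:
    """Similarity score in [0, 1] between the supplied and registered names."""
    return SequenceMatcher(None, normalise(supplied), normalise(on_account)).ratio()


def check_payee(supplied: str, on_account: str, threshold: float = 0.85) -> str:
    if name_match_score(supplied, on_account) >= threshold:
        return "match"
    return "warning: supplied name does not match the account holder"


# Hypothetical example: the payer believes the money is going to an account
# in her own name, but the receiving account is registered to someone else.
print(check_payee("Jane Caldwell", "Jane Caldwell"))  # match
print(check_payee("Jane Caldwell", "A N Other Ltd"))  # warning
```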

Neither bank offered Jane and Steven a refund, because they classed the transaction as “authorised” and so falling within one of the exceptions to the EU Payment Services Directive requirement to refund victims of fraud (the other exception being if the bank believed the customer acted either with gross negligence or fraudulently). The banks argued that their records showed that the customer’s authentication device was used and hence the transaction was “authorised”. In the original draft of the Payment Services Directive this argument would not be sufficient, but as a result of concerted lobbying by Barclays and other UK banks for their records to be considered conclusive, the word “necessarily” was inserted into Article 72, thereby removing this important consumer protection.

“Where a payment service user denies having authorised an executed payment transaction, the use of a payment instrument recorded by the payment service provider, including the payment initiation service provider as appropriate, shall in itself not necessarily be sufficient to prove either that the payment transaction was authorised by the payer or that the payer acted fraudulently or failed with intent or gross negligence to fulfil one or more of the obligations under Article 69.”

Clearly the fraudulent transactions do not meet any reasonable definition of “authorised” because Jane did not give her permission for funds to be transferred to the scammer. She carried out the transfer because the way that banks commonly authenticate themselves to customers they call (proving that they know your account details) was unreliable, because the recipient bank didn’t check the account name, because bank fraud-detection mechanisms didn’t catch the suspicious nature of the transactions, and because the bank’s authentication device is too confusing to use safely. When the security of the payment system is fully under control of the banks, why is the customer held liable when a person acting with reasonable care could easily do the same as Jane?

Another question is whether banks do enough to recover funds lost through scams such as this. The programme featured an interview with barrister Gideon Roseman who quickly obtained court orders allowing him to recover most of his funds lost through a similar scam. Interestingly a side-effect of the court orders was that he discovered that his bank, Barclays, waited more than 24 hours after learning about the fraud before they acted to stop the stolen money being transferred out. After being caught out, Barclays refunded Gideon the affected funds, but in cases where the victim isn’t a barrister specialising in exactly these sorts of disputes, do the banks do all they could to recover stolen money?

In order to give banks proper incentives to prevent push payment fraud where possible and to recover stolen funds in the remainder of cases, Which called for the Payment Systems Regulator to make banks liable for push payment fraud, just as they are for pull payments. I agree, and expect that if this were the case banks would implement innovative fraud prevention mechanisms against push payment fraud that we currently only see for credit and debit transactions. I also argued that in implementing the revised Payment Services Directive, the European Banking Authority should require banks to provide evidence that a customer was aware of the nature of the transaction and gave informed consent before they can hold the customer liable. Unfortunately, both the Payment Systems Regulator and the European Banking Authority conceded to the banking industry’s request to maintain the current poor state of consumer protection.

The programme concluded with security advice, as usual. Some was actively misleading, such as the claim by NatWest that banks will never ask customers to transfer money between their accounts for security reasons. My bank called me to transfer money from my current account to savings account, for precisely this reason (I called them back to confirm it really was them). Some advice was vague and not actionable (e.g. “be vigilant” – in response to a case where the victim was extremely cautious and still got caught out). Probably the most helpful recommendation is that if a bank supposedly calls you, wait 5 minutes and call them back using the number on a printed statement or card, preferably from a different phone. Alternatively stick to using cheques – they are slow and banks discourage their use (because they are expensive for them to process), but are much safer for the customer. However, such advice should not be considered an alternative to pushing liability back where it belongs – the banks – which will not only reduce fraud but also protect vulnerable customers.

The end of the billion-user Password:Impossible

XKCD: “Password Strength”

This week, the Wall Street Journal published an article by Robert McMillan containing an apology from Bill Burr, a man whose name is unknown to most but whose work has caused daily frustration and wasted time for probably hundreds of millions of people for nearly 15 years. Burr is the author of Appendix A of the 2003 Special Publication 800-63 from the US National Institute of Standards and Technology (NIST): eight pages that advised security administrators to require complex passwords including special characters, capital letters, and numbers, and dictated that they should be frequently changed.

“Much of what I did I now regret,” Burr told the Journal. In June, when NIST issued a completely rewritten document, it largely followed the same lines as the NCSC’s password guidance, published in 2015 and based on prior research and collaboration with the UK Research Institute in Science of Cyber Security (RISCS), led from UCL by Professor Angela Sasse. Yet even in 2003 there was evidence that Burr’s approach was the wrong one: in 1999, Sasse did the first work pointing out the user-unfriendliness of standard password policies in the paper Users Are Not the Enemy, written with Anne Adams.

How much did that error cost in lost productivity and user frustration? Why did it take the security industry and research community 15 years to listen to users and admit that the password policies they were pushing were not only wrong but actively harmful, inflicting pain on millions of users and costing organisations huge sums in lost productivity and administration? How many other badly designed security measures are still out there, the cyber equivalent of traffic congestion, causing the same scale of damage?

For decades, every password breach has led to the same response, which Einstein would readily have recognised as insanity: ridiculing users for using weak passwords, creating policies that were even more difficult to follow, and calling users “stupid” for devising coping strategies to manage the burden. As Sasse, Brostoff, and Weirich wrote in 2001 in their paper Transforming the ‘Weakest Link’, “…simply blaming users will not lead to more effective security systems”. In his 2009 paper So Long, and No Thanks for the Externalities, Cormac Herley (Microsoft Research) pointed out that it’s often quite rational for users to reject security advice that ignores the indirect costs of the effort required to implement it: “It makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain,” he wrote.

When GCHQ introduced the new password guidance, NCSC head Ciaran Martin noted the cognitive impossibility of following older policies, which he compared to trying to memorise a new 600-digit number every month. Part of the basis for Martin’s comments is found in more of Herley’s research. In Password Portfolios and the Finite-Effort User, Herley, Dinei Florencio, and Paul C. van Oorschot found that the cognitive load of managing 100 passwords while following the standard advice to use a unique random string for every password is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards – a cognitive impossibility. “No one does this”, Herley said in presenting his research at a RISCS meeting in 2014.
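A rough, back-of-the-envelope version of that comparison can be written down in a few lines. This is not the effort model used in the paper; the password length and alphabet are assumptions chosen purely to illustrate why the equivalents come out in the thousands of digits.

```python
# Back-of-the-envelope information-content comparison, not the paper's model.
# Assumptions: 100 passwords, each a random 7-character string drawn from the
# 94 printable ASCII characters.
import math

num_passwords = 100
password_length = 7
alphabet_size = 94

bits_total = num_passwords * password_length * math.log2(alphabet_size)

bits_per_pi_digit = math.log2(10)              # one decimal digit of pi
bits_per_deck = math.log2(math.factorial(52))  # ordering of one pack of cards

print(f"Total to memorise: ~{bits_total:.0f} bits")
print(f"Equivalent decimal digits of pi: ~{bits_total / bits_per_pi_digit:.0f}")
print(f"Equivalent shuffled packs of cards: ~{bits_total / bits_per_deck:.1f}")
```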

The first of the three questions we started with may be the easiest to answer. Sasse’s research has found that in numerous organisations each staff member may spend as much as 30 minutes a day on entering, creating, and recovering passwords, all of it lost productivity. The US company Imprivata claims its system can save clinicians up to 45 minutes per day just in authentication; in that use case, the wasted time represents not just lost profit but potentially lost lives.
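As a quick illustration of how 30 minutes a day compounds across an organisation (the staff count, working days and hourly cost below are made-up assumptions, not figures from the research):

```python
# Illustrative arithmetic only: how half an hour a day on passwords scales
# across an organisation. All inputs are assumptions.
minutes_per_day = 30
working_days_per_year = 225
staff = 10_000
hourly_cost_gbp = 25

hours_per_person = minutes_per_day / 60 * working_days_per_year
total_hours = hours_per_person * staff

print(f"~{hours_per_person:.0f} hours per person per year spent on passwords")
print(f"~{total_hours:,.0f} hours across {staff:,} staff, "
      f"roughly £{total_hours * hourly_cost_gbp:,.0f} at £{hourly_cost_gbp}/hour")
```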

Add the cost of disruption. In a 2014 NIST diary study, Sasse, with Michelle Steves, Dana Chisnell, Kat Krol, Mary Theofanos, and Hannah Wald, found that up to 40% of the time leading up to the “friction point” – that is, the interruption for authentication – is spent redoing the primary task before users can find their place and resume work. The study’s participants recorded on average 23 authentication events over the 24-hour period covered by the study, and in interviews they indicated their frustration with the number, frequency, and cognitive load of these tasks, which the study’s authors dubbed “authentication fatigue”. Dana Chisnell has summarised this study in a video clip.

The NIST study identified a more subtle, hidden opportunity cost of this disruption: staff reorganise their primary tasks to minimise exposure to authentication, typically by batching the tasks that require it. This is a similar strategy to deciding to confine dealing with phone calls to certain times of day, and it has similar consequences. While it optimises that particular staff member’s time, it delays any dependent business process that is designed in the expectation of a continuous flow from primary tasks. Batching delays result not only in extra costs, but may lose customers, since slow responses may cause them to go elsewhere. In addition, staff reported not pursuing ideas for improvement or innovation because they couldn’t face the necessary discussions with security staff.

Unworkable security induces staff to circumvent it and make errors – which in turn lead to breaches, which have their own financial and reputational costs. Less obvious is the cost of lost staff goodwill for organisations that rely on free overtime – such as US government departments and agencies. The NIST study showed that this goodwill is dropping: staff log in less frequently from home, and some had even returned their agency-approved laptops and were refusing to log in from home or while travelling.

It could all have been so different as the web grew up over the last 20 years or so, because the problems and costs of password policies are not new or newly discovered. Sasse’s original 1999 research study was not requested by security administrators but by BT’s accountants, who balked when the help desk costs of password problems were tripling every year with no end in sight. Yet security people have continued to insist that users must adapt to their requirements instead of the other way around, even when the basis for their ideas is shown to be long out of date. For example, in a 2006 blog posting Purdue University professor Gene Spafford explained that the “best practice” (which he calls “infosec folk wisdom”) of regular password changes came from non-networked military mainframes in the 1970s – a far cry from today’s conditions.

Herley lists numerous other security technologies that are as much of a plague as old-style password practices: certificate error warnings, all of which are false positives; security warnings generally; and ambiguous and non-actionable advice, such as advising users not to click on “suspicious” links or attachments or “never” reusing passwords across accounts.

All of these are either not actionable, or just too difficult to put into practice, and the struggle to eliminate them has yet to bear fruit. Must this same story continue for another 20 years?

 

This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

Top ten obstacles along distributed ledgers’ path to adoption

In January 2009, Bitcoin was released into the world by its pseudonymous founder, Satoshi Nakamoto. In the ensuing years, this cryptocurrency and its underlying technology, called the blockchain, have gone on a rollercoaster ride that few could have predicted at the time of its deployment. It’s been praised by governments around the world, and people have predicted that “the blockchain” will one day be like “the Internet.” It’s been banned by governments around the world, and people have declared it “adrift” and “dead.”

After years in which discussions focused entirely on Bitcoin, people began to realize the more abstract potential of the blockchain, and “next-generation” platforms such as Ethereum, Steem, and Zcash were launched. More established companies also realized the value in the more abstract properties of the blockchain — resilience, integrity, etc. — and repurposed it for their particular industries to create an even wider class of technologies called distributed ledgers, and to form industrial consortia such as R3 and Hyperledger. These more general distributed ledgers can look, to varying degrees, quite unlike blockchains, and have a somewhat clearer (or at least different) path to adoption given their association with established partners in industry.

Amidst many unknowns, what is increasingly clear is that, even if they might not end up quite like “the Internet,” distributed ledgers — in one form or another — are here to stay. Nevertheless, a long path remains from where we are now to widespread adoption and there are many important decisions to be made that will affect the security and usability of any final product. In what follows, we present the top ten obstacles along this path, and highlight in some cases both the problem and what we as a community can do (and have been doing) to address them. By necessity, many interesting aspects of distributed ledgers, both in terms of problems and solutions, have been omitted, and the focus is largely technical in nature.

10. Usability: why use distributed ledgers?

The problem, in short. What do end users actually want from distributed ledgers, if anything? In other words, distributed ledgers are being discussed as the solution to problems in many industries, but what is it that the full public verifiability (or accountability, immutability, etc.) of distributed ledgers really maps to in terms of what end users want?

9. Governance: who makes the rules?

The problem, in short. The beauty of distributed ledgers is that no one entity gets to control the decisions made by the network; in Bitcoin, e.g., coins are generated or transferred from one party to another only if a majority of the peers in the network agree on the validity of this action. While this process becomes threatened if any one peer becomes too powerful, there is a larger question looming over the operation of these decentralized networks: who gets to decide which actions are valid in the first place? The truth is that all these networks operate according to a defined set of rules, and that “who makes the rules matters at least as much as who enforces them.”
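As a toy illustration of the enforcement half of this — a transfer is accepted only if a majority of peers independently judge it valid — consider the sketch below. The validation rule is drastically simplified compared with any real blockchain; the point is only the counting.

```python
# Toy model: each peer checks a transfer against its own copy of the ledger,
# and the network accepts the transfer only on a strict majority of "valid"
# votes. The validation rule here is drastically simplified.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Transfer:
    sender: str
    recipient: str
    amount: int


def peer_validates(ledger: Dict[str, int], tx: Transfer) -> bool:
    """A single peer's (simplified) validity rule."""
    return tx.amount > 0 and ledger.get(tx.sender, 0) >= tx.amount


def network_accepts(peer_ledgers: List[Dict[str, int]], tx: Transfer) -> bool:
    votes = sum(peer_validates(ledger, tx) for ledger in peer_ledgers)
    return votes > len(peer_ledgers) // 2  # strict majority


# Three peers, one of which holds a stale or manipulated view of balances.
ledgers = [{"alice": 50}, {"alice": 50}, {"alice": 0}]
print(network_accepts(ledgers, Transfer("alice", "bob", 30)))  # True: 2 of 3 agree
```

The governance question is then, in effect, who gets to write peer_validates in the first place, and who can change it later.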

In this process of making the rules, even the most decentralized networks turn out to be heavily centralized, as recent issues in cryptocurrency governance demonstrate. These increasingly common collapses threaten to harm the value of these cryptocurrencies, and reveal the issues associated with ad-hoc forms of governance. Thus, the problem is not just that we don’t know how to govern these technologies, but that — somewhat ironically — we need more transparency around how these structures operate and who is responsible for which aspects of governance.

8. Meaningful comparisons: which is better?

The problem. Bitcoin was the first cryptocurrency to be based on the architecture we now refer to as the blockchain, but it certainly isn’t the last; there are now thousands of alternative cryptocurrencies out there, each with its own unique selling point. Ethereum offers a more expressive scripting language and maintains state, Litecoin allows for faster block creation than Bitcoin, and each new ICO (Initial Coin Offering) promises a shiny feature of its own. Looking beyond blockchains, there are numerous proposals for cryptocurrencies based on consensus protocols other than proof-of-work and proposals in non-currency-related settings, such as Certificate Transparency, R3 Corda, and Hyperledger Fabric, that still fit under the broad umbrella of distributed ledgers.


The politics of the NHS WannaCrypt ransomware outbreak

You know you live in 2017 when the top headline on national newspapers relates to a ransomware attack on the National Health Service, the UK Prime Minister comments on the matter, and the security researchers dealing with the outbreak are presented as heroic figures. As ever, The Register has the most detailed and sophisticated technical article on the matter. But also, strangely, the most informative in terms of public policy. As if somehow, in our days, technical sophistication is a prerequisite also for sophisticated political comment on those matters. Other news outlets present a caricature: the bad malware authors, the good security researchers and vendors working around the clock, the valiant government defenders, and a united humanity trying to beat the virus. I want to break that narrative open in this article, and discuss the actual political and social lessons we should be learning, in part to avoid similar disasters in the future.

First off, I am always surprised when such massive systemic outbreaks of malware are blamed squarely on the author(s) of the malware itself, and the blame game ends there. It is without doubt that the malware author has a great share of responsibility. I personally think it is immoral to deploy ransomware in the wild, deny people access to their data, and seek to benefit from this. It is also a crime in the UK and elsewhere.

However, it is strange that a single author, or a small group of authors, without any major resources can have such a deep and widespread effect on major technological infrastructures. The absurdity becomes clear if we transpose the situation into the world of traditional engineering. Imagine all skyscrapers in major cities had to be evacuated, because a couple of teenagers with rocks were trying to blackmail business owners to pay up, to protect their precious glass windows. The fragility of software and IT systems seems to have no parallel in any other large scale engineering infrastructure — and this is not inherent, but the result of very specific micro-political, geo-political and economic decisions.

Let’s take the WannaCrypt outbreak and look at the political and other social decisions that led to the disaster — besides the agency of the malware authors:

  • The disaster was possible in part, and foremost, because IT systems within the UK’s critical NHS infrastructure are outdated — and, for example, rely on Windows XP, which is no longer maintained by Microsoft. Well, actually this is not strictly true: Microsoft does make security updates for Windows XP, but does not provide them for free — instead, Microsoft expects organizations that are locked into the OS to pay up to get patches and stay safe. So two key questions need to be asked …
  • Why is the NHS not upgrading to a newer version of Windows, or any other modern operating system? The answer is simple: line of business applications (LOB: from health record management, specialist analysis and imaging software, to payroll) may not be compatible with new operating systems. On top of that, a number of modern medical devices, such as large X-ray scanners or heart monitors, come with embedded computers running Windows XP — and only Windows XP. There is no way of upgrading them. The MEDJACK cyber-attacks were leveraging this to rampage through hospitals in 2015.
  • Is having LOB software tying you to an outdated OS, or medical devices costing millions that are not upgradeable, a fact of nature? No. It is down to a combination of terrible and naive procurement processes in health organizations, which do not take into account the need for and costs of IT and security maintenance — and do not entrench it into the requirements and contracts for services, software and devices. It is also the result of the health software and devices industries being immature and unsophisticated as to the need to secure IT. They reap the benefits of IT to make money, but without expending much of it to provide quality and security. The tragic state of security of medical devices has built the illustrious career of my friend Prof. Kevin Fu, who has found systemic attacks against implanted heart devices that could kill you, noob security bugs in medical device software, and has written extensively on the poor strategy to tackle these problems. So today’s attacks were a disaster waiting to happen — and expect more unless we learn the right lessons.
  • So given the terrible state of IT that prevents upgrading the OS, why is the NHS not paying Microsoft to get security patches? That is because the government, and Jeremy Hunt in particular, back in 2014 decided not to pay the money necessary to keep receiving security updates for Windows XP, despite being aware of the absolute reliance of the NHS on the outdated software. So, in effect, a deliberate political decision was taken, at the highest level of government, to leave the NHS open to cyber attack. This is unlikely to be the last Windows XP security bug, so more are presumably to come.
  • Then there is the question of how the malware authors managed to get access to security bugs for Windows XP. How did they get the tools necessary to attack such a mature, and rather common, system, about 15 years after Windows XP was released, and only after it went out of maintenance? It turns out that the vulnerabilities they used were in fact hoarded by the NSA as a cyber weapon — which was lost or stolen by hackers or leakers, and released into the wild! (The tool was codenamed EternalBlue.) For many years, the computer security research community has been warning that stockpiling vulnerabilities in very common software for cyber-offense purposes is dangerous. When those cyber weapons are lost, leaked, or even just used, there is proliferation of the technology necessary to attack, which criminals and foreign states can turn against critical infrastructure. This blog commented on the matter as recently as 8 March 2017 in a post entitled “What the CIA hack and leak teaches us about the bankruptcy of current ‘Cyber’ doctrines”. This now feels like an unfortunately fulfilled prophecy, but the NHS attack was just the expected outcome of the US/UK and now commonplace doctrine around cyber — one that contributes to, and leverages, insecurity rather than security. Alternative public policy options exist, of course.

So, to summarize, besides the author of the malware, a number of other social and systemic factors contribute to making such cyber attacks possible: poor security standards in health informatics industries; poor procurement processes in health organizations; lack of liability on any of the software vendors (incl. Microsoft) for providing insecure software or devices; cost-cutting by the government on NHS cyber security with no constructive alternatives to mitigate risks; and finally the UK/US cyber-offense doctrine that inevitably leads to proliferation of cyber-weapons and their use on civilian critical infrastructures.

It is those systemic factors that need to change to avoid future failures. Bad people wishing to make money from ransomware, or other badness, will always exist. There is a discipline devoted to preventing this, and it is called security engineering. It is time industry and government started taking its advice seriously.

 

This was originally posted on Conspicuous Chatter, the blog of Prof. George Danezis.

Strong Customer Authentication in the Payment Services Directive 2

Within the European Union, banks have been regulated since 2007 by the Payment Services Directive. This directive sets out which types of institutions can offer payment services, and what rules they must follow. Importantly for customers, these rules include in what circumstances a fraud victim is entitled to a refund. In 2015 the European Parliament adopted a substantial revision to the directive, the Payment Services Directive 2 (PSD2), and it will soon be implemented by EU member states. One of the major changes in PSD2 is the requirement for banks to implement Strong Customer Authentication (SCA) for transactions, more commonly known as two-factor authentication – authentication codes based on two or more elements selected from something only the user knows, something only the user possesses, and something the user is. Moreover, the authentication codes must be linked to the recipient and amount of the transaction, which the customer must be made aware of.
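To make the “linked to the recipient and amount” requirement (often called dynamic linking) concrete, here is a minimal sketch in which the authentication code is an HMAC over the payee and the amount, so that tampering with either invalidates the code. Key management, code truncation and the channel by which the code reaches the customer are deliberately out of scope; the details below are our own assumptions, not the mechanism prescribed by the RTS.

```python
# Minimal sketch of dynamic linking: the authentication code is bound to the
# payee and the amount, so a code generated for one transaction cannot
# authorise a different one. Key handling and code delivery are out of scope
# and would differ substantially in any real deployment.
import hashlib
import hmac


def transaction_auth_code(key: bytes, payee_iban: str, amount_cents: int,
                          nonce: bytes) -> str:
    message = f"{payee_iban}|{amount_cents}|".encode() + nonce
    return hmac.new(key, message, hashlib.sha256).hexdigest()[:8]


def verify(key: bytes, payee_iban: str, amount_cents: int, nonce: bytes,
           presented_code: str) -> bool:
    expected = transaction_auth_code(key, payee_iban, amount_cents, nonce)
    return hmac.compare_digest(expected, presented_code)


# Hypothetical example: the code issued for one payment stops verifying if an
# attacker changes the payee (or the amount) in flight.
key, nonce = b"per-device-secret", b"unique-challenge"
code = transaction_auth_code(key, "DE89370400440532013000", 15000, nonce)
print(verify(key, "DE89370400440532013000", 15000, nonce, code))  # True
print(verify(key, "GB29NWBK60161331926819", 15000, nonce, code))  # False
```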

The PSD2 does not detail the requirements of Strong Customer Authentication, nor the permitted exemptions to this rule. Instead, these decisions are to be made by the European Banking Authority (EBA) through Regulatory Technical Standards (RTS). As part of the development of these technical standards the EBA opened an initial discussion, to which we submitted a response based on our research on the security usability of banking authentication. Based on the discussion, the EBA produced a consultation paper incorporating a set of draft technical standards. In our response to this consultation paper, included below, we detailed how research both on security usability and on banking authentication more broadly should guide the assessment of Strong Customer Authentication. Specifically, we point out that there is an incorrect assumption of an inherent trade-off between security and usability, that for a system to be secure it must be usable, and that evaluation of Strong Customer Authentication systems should be independent, transparent, and follow principles developed from the latest research.

False trade-off between security and usability

In the reasoning presented in the consultation paper there is an assumption that a trade-off must be made between security and usability, e.g. paragraph 6 “Finally, the objective of ensuring a high degree of security and safety would suggest that the [European Banking Authority’s] Technical Standards should be onerous in terms of authentication, whereas the objective of user-friendliness would suggest that the [Regulatory Technical Standards] should rather promote the competing aim of customer convenience, such as one-click payments.”

This security/usability trade-off is not inherent to Strong Customer Authentication (SCA), and in fact the opposite is more commonly true: in order for SCA to be secure it must also be usable “because if the security is usable, users will do the security tasks, rather than ignore or circumvent them”. Also, SCA that is usable will make it more likely that customers will detect fraud because they will not have to expend their limited attention on just performing the actions required to make the SCA work. A small subset (10–15%) of participants in some studies reasoned that the fact that a security mechanism required a lot of effort from them meant it was secure. But that is a misconception that must not be used as an excuse for effortful authentication procedures.


Steven Murdoch – Privacy and Financial Security

Probably not too many academic researchers can say this: some of Steven Murdoch’s research leads have arrived in unmarked envelopes. Murdoch, who has moved to UCL from the University of Cambridge, works primarily in the areas of privacy and financial security, including a rare specialty you might call “crypto for the masses”. It’s the financial security aspect that produces the plain, brown envelopes and also what may be his most satisfying work, “Trying to help individuals when they’re having trouble with huge organisations”.

Murdoch’s work has a twist: “Usability is a security requirement,” he says. As a result, besides writing research papers and appearing as an expert witness, his past includes a successful start-up. Cronto, which developed a usable authentication device, was acquired by VASCO, a market leader in authentication, and its technology is now used by banks such as Commerzbank and Rabobank.

Developing the Cronto product was, he says, an iterative process that relied on real-world testing: “In research into privacy, if you build an unusable system two things will go wrong,” he says. “One, people won’t use it, so there’s a smaller crowd to hide in.” This issue affects anonymising technologies such as Mixmaster and Mixminion. “In theory they have better security than Tor but no one is using them.” And two, he says, “People make mistakes.” A non-expert user of PGP, for example, can’t always accurately identify which parts of the message are signed and which aren’t.

The start-up experience taught Murdoch how difficult it is to get an idea from research prototype to product, not least because what works in a small case study may not when deployed at scale. “Selling privacy remains difficult,” he says, noting that Cronto had an easier time than some of its forerunners since the business model called for sales to large institutions. The biggest challenge, he says, was not consumer acceptance but making a convincing case that the predicted threats would materialise and that a small company could deliver an acceptable solution.


Microsoft Ireland: winning the battle for privacy but losing the war

On Thursday, Microsoft won an important federal appeals court case against the US government. The case centres on a warrant issued in December 2013, requiring Microsoft to disclose emails and other records for a particular msn.com email address which was related to a narcotics investigation. It transpired that these emails were stored in a Microsoft datacenter in Ireland, but the US government argued that, since Microsoft is a US company and can easily copy the data into the US, a US warrant would suffice. Microsoft argued that the proper way for the US government to obtain the data is through the Mutual Legal Assistance Treaty (MLAT) between the US and Ireland, where an Irish court would decide, according to Irish law, whether the data should be handed over to US authorities. Part of the US government’s objection to this approach was that the MLAT process is sometimes very slow, although the Irish government has committed to consider any such request “expeditiously”.

The appeal court decision is an important victory for Microsoft (following two lower courts ruling against them) because they sell their European datacenters as giving their European customers confidence that their data will be subject to the more stringent European privacy laws. Microsoft’s case was understandably supported by other technology companies in the same position, as well as civil liberties organisations such as the Electronic Frontier Foundation in the US and the Open Rights Group in the UK. However, I have mixed opinions about the outcome: while probably the right decision in this case, the wider consequences could be detrimental to privacy.

Both sides of the case wanted to set a precedent (if not legally, at least in practice). The US government wanted US law to apply to data held by US companies, wherever in the world the data resides. Microsoft wanted the location of the data to imply which legal regime applied, and so their customers could be confident that their own country’s laws will be respected, provided Microsoft have a datacenter in their own country (or at least one with compatible laws). My concern is that this ruling will give false assurance to customers of US companies, because in other circumstances a different decision could quite easily be taken.

We know about this case because Microsoft chose to challenge it in court, and were able to do so. This is the first time Microsoft has challenged a US warrant for data stored in their Irish datacenter, despite it being in operation for three years prior to the case. Had the email address been associated with a more serious crime, or the demand for emails accompanied by a gagging order, it may not have been challenged. Microsoft and other technology companies may still choose to accept, or may even be forced to accept, the applicability of future US warrants to data they control, regardless of the court decision last week. One extreme way to compel such acceptance would be for the US to jail employees until its demands are complied with.

For this reason, I have argued that control over data is more important than where data resides. If a company does not have the technical capability to comply with an order, it is easier for them to defend their case, and so protects both the company’s customers and staff. Microsoft have taken precisely this approach for their new German datacenters, which will be operated by staff in Germany working for a German “data trustee” (Deutsche Telekom). In contrast to their Irish datacenter, Microsoft staff will be unable to access customer data, except with the permission of and oversight from the data trustee.

While the data trustee model resists information being obtained through improper legal means, a malicious employee could still break rules for personal gain, or the systems designed to process legal requests could be hacked into. With modern security techniques it is possible to do better. End-to-end encryption for instant messaging is one such example, because (if designed properly) the communications provider does not have access to messages they carry. A more sophisticated approach is “distributed consensus”, where a decision is only taken if a majority of participants agree. The consensus process is automated and enforced through cryptography, ensuring that rules are respected even if some participants are malicious. Critical decisions in the Tor network and in Bitcoin are taken this way. More generally, there is a growing recognition that purely legal or procedural mechanisms are insufficient to protect privacy. This is one of the common threads present in much of the research presented at the Privacy Enhancing Technologies Symposium, being held this week in Darmstadt: recognising that there will always be imperfections in software, people and procedures, and showing that individuals’ privacy can nevertheless still be protected.
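As a sketch of the counting rule behind such majority decisions, the fragment below accepts a decision only if it carries valid signatures from more than half of the known participants (using Ed25519 from the Python cryptography package). Real systems, such as Tor’s directory consensus, are far more elaborate; this only illustrates the principle.

```python
# Accept a decision only if it carries valid signatures from a strict
# majority of known participants. Requires the `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def accepted(decision: bytes, signatures: dict, public_keys: dict) -> bool:
    """Count valid signatures from known participants; require > half."""
    valid = 0
    for name, pub in public_keys.items():
        sig = signatures.get(name)
        if sig is None:
            continue
        try:
            pub.verify(sig, decision)
            valid += 1
        except InvalidSignature:
            pass
    return valid > len(public_keys) // 2


# Three participants; two of them sign the decision, the third does not.
participants = {name: Ed25519PrivateKey.generate() for name in ("a", "b", "c")}
public_keys = {name: key.public_key() for name, key in participants.items()}
decision = b"release record #42"
signatures = {name: participants[name].sign(decision) for name in ("a", "b")}
print(accepted(decision, signatures, public_keys))  # True: 2 of 3 signed
```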

User-centred security awareness empowers employees to be the strongest defense

The release of our business whitepaper “Awareness is only the first step” was recently announced by Hewlett Packard Enterprise (HPE). The whitepaper is co-authored by HPE, UCL, and the UK government’s National Technical Authority for Information Assurance (CESG). The whitepaper emphasises how a user-centred approach to security awareness can empower employees to be the strongest link in defending their organisation. As Andrzej Kawalec, HPE’s Security Services CTO, notes in the press release:

“Users remain the first line of defense when faced with a dynamic and relentless threat environment.”

Security communication, education, and training (CET) in organisations is intended to align employee behaviour with the security goals of the organisation. Security managers conduct regular security awareness activities – familiar vehicles for awareness programmes, such as computer-based training (CBT), can cover topics such as password use, social media practices, and phishing. However, there is limited evidence to support the effectiveness or efficiency of CBT, and a lack of reliable indicators means that it is not clear if recommended security behaviour is followed in practice. If the design and delivery of CET programmes does not consider the individual, organisations cannot be certain of achieving the intended outcomes. As Angela Sasse comments:

“Many companies think that setting up web-based training packages is a cost-effective way of influencing staff behavior and achieving compliance, but research has provided clear evidence that this is not effective – rather, many staff resent it and suffer from ‘compliance fatigue.’”

HPE awareness maturity curve

The whitepaper describes a path to guide the involvement of employees in their own security, as shown in the HPE awareness maturity curve above. To change security behaviors, a company needs to invest in the security knowledge and skills of its employees, and respond to employee needs differently at each stage.


“Do you see what I see?” ask Tor users, as a large number of websites reject them but accept non-Tor users

If you use an anonymity network such as Tor on a regular basis, you are probably familiar with various annoyances in your web browsing experience, ranging from pages saying “Access denied” to having to solve CAPTCHAs before continuing. Interestingly, these hurdles disappear if the same website is accessed without Tor. The growing trend of websites extending this kind of “differential treatment” to anonymous users undermines Tor’s overall utility, and adds a new dimension to the traditional threats to Tor (attacks on user privacy, or governments blocking access to Tor). There is plenty of anecdotal evidence about Tor users experiencing difficulties in browsing the web, for example the user-reported catalog of services blocking Tor. However, we don’t have sufficient detail about the problem to answer deeper questions like: how prevalent is differential treatment of Tor on the web; are there any centralized players with Tor-unfriendly policies that have a magnified effect on the browsing experience of Tor users; can we identify patterns in where these Tor-unfriendly websites are hosted (or located), and so forth.

Today we present our paper on this topic: “Do You See What I See? Differential Treatment of Anonymous Users” at the Network and Distributed System Security Symposium (NDSS). Together with researchers from the University of Cambridge, University College London, University of California, Berkeley and the International Computer Science Institute (Berkeley), we conducted comprehensive network measurements to shed light on websites that block Tor. At the network layer, we scanned the entire IPv4 address space on port 80 from Tor exit nodes. At the application layer, we fetched the homepage of the most popular 1,000 websites (according to Alexa) from all Tor exit nodes. We compared these measurements with a baseline from non-Tor control measurements, and uncovered significant evidence of Tor blocking. We estimate that at least 1.3 million IP addresses that would otherwise allow a TCP handshake on port 80 block the handshake if it originates from a Tor exit node. We also show that at least 3.67% of the most popular 1,000 websites block Tor users at the application layer.
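A heavily simplified version of the application-layer comparison looks like this: fetch the same URL once directly and once through a local Tor SOCKS proxy, and flag responses that differ. It assumes a Tor client listening on 127.0.0.1:9050 and the requests package installed with SOCKS support (pip install requests[socks]); the paper’s methodology controls for many confounds — exit-node variation, transient failures, content differences — that this sketch ignores.

```python
# Simplified differential-treatment probe: compare the HTTP status code of a
# page fetched directly with the status code of the same page fetched through
# a local Tor SOCKS proxy. Assumes Tor on 127.0.0.1:9050 and requests[socks].
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}


def fetch_status(url: str, proxies=None) -> int:
    try:
        return requests.get(url, proxies=proxies, timeout=30).status_code
    except requests.RequestException:
        return -1  # treat network-level failure as its own outcome


def differential_treatment(url: str) -> bool:
    direct = fetch_status(url)
    via_tor = fetch_status(url, proxies=TOR_PROXIES)
    print(f"{url}: direct={direct} tor={via_tor}")
    return direct != via_tor


if __name__ == "__main__":
    differential_treatment("https://example.com/")  # placeholder URL
```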
