Should you phish your own employees?

No. Please don’t. It does little for security, but it harms productivity (staff spend ages pondering emails instead of answering legitimate ones), upsets staff and destroys trust within an organisation.

Why is phishing a problem?

Phishing is one of the more common ways by which criminals gain access to companies’ passwords and other security credentials. The criminal sends a fake email to trick employees into opening a malware-containing attachment, clicking on a link to a malicious website that solicits passwords, or carrying out a dangerous action like transferring funds to the wrong person. If the attack is successful, criminals could impersonate staff, gain access to confidential information, steal money, or disrupt systems. It’s therefore understandable that companies want to block phishing attacks.

Perimeter protection, such as blocking suspicious emails, can never be 100% accurate. Therefore companies often tell employees not to click on links or open attachments in suspicious emails.

The problem with this advice is that it conflicts with how technology works and with how employees get their jobs done. Links are meant to be clicked on and attachments are meant to be opened. For many employees, their job consists almost entirely of opening attachments from strangers and clicking on links in emails. Even a moderately well-targeted phishing email will almost certainly succeed in getting some employees to click on it.

Companies try to deal with this problem through more aggressive training, particularly sending out mock phishing emails that exhibit some of the characteristics of phishing emails but actually come from the IT staff at the company. The company then records which employees click on the link in the email, open the attachment, or provide passwords to a fake website, as appropriate.

The problem is that mock-phishing causes more harm than good.

What harm does mock-phishing cause?

I hope no company would publicly name and shame employees who open mock-phishing emails, but telling staff that they failed a test and need remedial training will, despite the best of intentions, make them feel ashamed. If, as is often recommended, employees who repeatedly open mock-phishing emails are subjected to disciplinary procedures, mock phishing will not only cause stress and a consequent loss of productivity, but will also make it less likely that employees report when they have clicked on a real phishing email.

Alienating your employees in this way is really the last thing a company should do if it wants to be secure – something Adams & Sasse pointed out as early as 1999. It is extremely important that companies learn when a phishing email has been opened, because there is a lot that can be done to prevent or limit harm. Contrary to popular belief, attacks don’t generally happen “at the speed of light” (it took three weeks for the Target hackers to steal data, from the point of the initial breach). Promptly cleaning potentially infected computers, revoking compromised credentials, and analysing network logs are all extremely effective, but they work only if employees feel that they are on the same side as IT staff.

More generally, mock-phishing conflicts with and harms the trust relationship between the company and its employees (because the company is continually probing them for weakness) and between employees (because mock-phishing normally impersonates fellow employees). Kirlappos and Sasse showed that trust is essential for maintaining employee satisfaction and for creating organisational resilience, including the ability to comply with security policies. If unchecked, prolonged resentment within an organisation achieves exactly the opposite – it increases the risk of insider attacks, which in the vast majority of cases start with disgruntlement.

There are however ways to achieve the same goals as mock phishing without the resulting harm.

Measuring resilience against phishing

Companies are right to want to understand how vulnerable they are to attack, and mock-phishing seems to offer this. One problem however is that the likelihood of opening a phishing email depends mainly on how well it is written, and so mock-phishing campaigns tell you more about the campaign than the organisation.

Instead, because every organisation inevitably receives many phishing emails, companies don’t need to send out their own: “genuine” phishing emails can provide the data needed, provided care is taken not to deter reporting. Realistically, however, phishing emails are going to be opened regardless of what steps are taken (short of cutting off Internet email completely), so organisations’ security strategy should accommodate this.

Reducing vulnerability to phishing

The aftermath of a mock-phishing exercise seems like the perfect time to get employees’ attention, but training delivered this way is actually an ineffective way to reduce an organisation’s vulnerability to phishing. Caputo et al. tried this out and found that training had no significant effect, regardless of how it was phrased (using the latest nudging techniques from behavioural economists, an idea many security practitioners find very attractive). In this study, the organisation’s help-desk staff were overwhelmed by calls from panicked employees – and when told it was a “training exercise”, many expressed frustration and resentment towards the security staff who had tricked them. Even if phishing-prevention training could be made to work, because the activity of opening a malicious email is so close to what people do as part of their job, it would disrupt business by causing employees to delete legitimate emails or spend too long deciding whether to open them.

A strong, unambiguous, and reliable cue that distinguishes phishing emails from legitimate ones would help, but until we have secure end-to-end encrypted and authenticated email, this isn’t possible. We are left with the task of designing security systems that accept that some phishing emails will be opened, rather than pretending they won’t be and blaming breaches on employees who fail to meet an unachievable bar. If employees are consistently told that their behaviour is not good enough but are not given realistic and actionable advice on how to do better, the result is learned helplessness, with all its negative psychological consequences.

Comply with industry “best-practice”

Something must be done to protect the company; mock-phishing is something; therefore it must be done. This perverse logic is the root cause of much poor security, where organisations think they must comply with so-called “best practice” – seldom more than out-of-date folk tradition – or face penalties when there is a breach. It’s for this reason that bad security guidance, such as password complexity rules, persists long after it has been shown to be ineffective.

Compliance culture, where rules are blindly followed without there being evidence of effectiveness, is one of the worst reasons to adopt a security practice. We need more research on how to develop technology that is secure and that supports an organisation’s overall goals. We know that mock-phishing is not effective, but what’s the right combination of security advice and technology that will give adequate protection, and how do we adapt these to the unique situation of each company?

What to do instead?

The security industry should follow the lead of the aerospace industry and recognise that “blame and train” isn’t an effective or acceptable strategy. The attraction of mock-phishing exercises to security staff is that they can say they are “doing something”, and they like the idea of being able to measure the resulting behaviour change – even though research points the other way. If vendors claim they have examples of mock-phishing training reducing clicks on links, it is usually because employees have been trained to recognise only the vendor’s mock-phishing emails, or have been frightened into not clicking on any links – and nobody measures the losses that occur because emails from actual or potential customers or suppliers go unanswered. “If security doesn’t work for people, it doesn’t work.”

When the CIO of a merchant bank found that mock phishing caused much anger and resentment among highly paid traders, but no reduction in clicking on links, he started to listen to what it looked like from their side. “Your security specialists can’t tell if it is a phishing email or not – why are you expecting me to be able to do that?” After seeing the problem from their perspective, he instead added a button to the corporate mail client labelled “I’m not sure”, and asked staff to use it to forward emails they were not sure about to the security department. The security department then let the employee know whether the email was malicious, and listed all identified malicious emails on a website that employees could check before forwarding further emails. Clicking on phishing links dropped to virtually zero – and staff started talking to each other about phishing emails they had seen, and what the attacker was trying to do.
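
The reporting workflow the CIO describes is simple enough to sketch. Below is a minimal illustration in Python of how forwarded “I’m not sure” reports might be recorded, classified by security staff, and published back to employees. It is not the bank’s actual system: every name in it (PhishReport, submit_report, the verdict labels) is hypothetical.

```python
# Hypothetical sketch of an "I'm not sure" reporting workflow: employees
# forward uncertain emails, security staff record a verdict, the reporter is
# told the outcome, and confirmed-malicious emails are published for staff.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PhishReport:
    reporter: str
    sender: str
    subject: str
    received: datetime = field(default_factory=datetime.utcnow)
    verdict: str = "pending"          # "pending", "malicious" or "legitimate"

reports = []

def submit_report(reporter, sender, subject):
    """Called when an employee presses the "I'm not sure" button."""
    report = PhishReport(reporter, sender, subject)
    reports.append(report)
    return report

def record_verdict(report, verdict):
    """Security staff classify the report; the employee is told the outcome."""
    report.verdict = verdict
    print(f"Dear {report.reporter}: '{report.subject}' was judged {verdict}.")

def published_malicious_list():
    """The list staff can check before forwarding another suspicious email."""
    return [(r.sender, r.subject) for r in reports if r.verdict == "malicious"]

# Example: an employee reports an email and security staff mark it malicious.
r = submit_report("trader@bank.example", "ceo@examp1e.com", "Urgent wire transfer")
record_verdict(r, "malicious")
print(published_malicious_list())
```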

Security should deal with the problems that actually face the company; preventing phishing wouldn’t have stopped recent ransomware attacks. Assuming phishing is a concern, then where it is possible to do so with adequate accuracy, phishing emails should be blocked. Some will get through, but with well-engineered and promptly patched systems, harm can be limited. Phishing-resistant authentication credentials, such as FIDO U2F, mean that stolen passwords are worthless. Common processes should be designed so that the easy option is the secure one, giving people time to think carefully about whether a request for an exception is legitimate. Finally, if malware does get onto company computers, compartmentalisation will limit damage, effective monitoring facilitates detection, and good backups allow rapid recovery.
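
To see why credentials like FIDO U2F make stolen passwords worthless, it helps to look at origin binding: the authenticator signs the server’s challenge together with the web origin the browser is actually talking to, so a response produced for a look-alike site does not verify for the real one. The sketch below is a heavily simplified illustration of that idea, not the real U2F/WebAuthn protocol – it omits attestation, counters and user-presence checks, and the names are made up. It assumes the third-party cryptography package.

```python
# Simplified sketch of origin-bound authentication (the idea behind FIDO
# U2F/WebAuthn): the signature covers both the challenge and the origin,
# so a signature obtained via a phishing site fails verification.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import os

# Registration: the authenticator creates a key pair; the bank stores the public key.
authenticator_key = Ed25519PrivateKey.generate()
bank_stored_public_key = authenticator_key.public_key()

def sign_assertion(challenge, origin):
    # The signed message binds the challenge to the origin the browser reports.
    return authenticator_key.sign(challenge + origin.encode())

def bank_verifies(signature, challenge, expected_origin):
    try:
        bank_stored_public_key.verify(signature, challenge + expected_origin.encode())
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)

# Legitimate login: the browser reports the real origin.
good = sign_assertion(challenge, "https://bank.example")
print(bank_verifies(good, challenge, "https://bank.example"))    # True

# Phishing: the user is on a look-alike site, so the origin differs and
# whatever the attacker relays will not verify for the real origin.
phished = sign_assertion(challenge, "https://bank-example.evil")
print(bank_verifies(phished, challenge, "https://bank.example"))  # False
```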

 

An earlier version of this article was previously published by the New Statesman.

Preventing phishing won’t stop ransomware spreading

Ransomware is in the news again, with Reckitt Benckiser reporting that disruption caused by the NotPetya ransomware could have cost them up to £100 million. In response to this news, just as after every previous ransomware incident, the security industry started giving out advice – almost universally emphasising the importance of not opening phishing emails.

The problem is that this advice won’t work. Putting aside the fact that such advice is often so vague as to be impossible to put into action, the cause of recent ransomware outbreaks is not people opening phishing emails:

  • WannaCry, which notably caused severe disruption to the NHS, spread by automated scanning of computers vulnerable to an NSA-developed exploit. Although the starting point was initially assumed to be a phishing email, this was later debunked – only network scanning was used.
  • The Mole Ransomware attack that hit many organisations, including UCL, was initially thought to be spread by employees clicking on links in phishing emails. Subsequent analysis found this was incorrect and most likely the malware spread through malicious advertisements on legitimate websites.
  • NotPetya was initially thought to have been spread through Russian or Ukrainian phishing emails (explaining why that part of the world was so badly affected). It turned out not to have involved phishing at all: the outbreak started through a tampered software update to the MEDoc tax accounting software mandated by the Ukrainian government. Once inside an organisation, NotPetya then spread using the same exploit as WannaCry or by compromising administrative credentials.

Here are three major incidents, making international news, for which the standard advice to “be vigilant” when opening emails or clicking on links would have been useless. Is it any surprise that security advice gets ignored?

Not only is common anti-phishing advice unhelpful, but it shifts blame onto individuals (who are not in a position to prevent or mitigate most attacks) and away from the IT industry and IT staff (who are). It also misleads management into thinking that they can “blame-and-train” their employees rather than investing in well-engineered preventative security mechanisms and IT systems that can recover from compromise.

And there are things that can be done which have been shown to be effective, not just against the current outbreaks but many in the past and likely future. WannaCry would have been prevented by applying software updates, but the NotPetya outbreak was caused by a software update. The industry needs to act promptly to ensure that software updates are safe and reliable before customers become even more wary about installing them.

The spread of WannaCry and NotPetya within companies could have been prevented or slowed through better operational practices such as segmenting networks and limiting the use of administrative privilege. We’ve known this approach to be effective, but better tools and practices are needed to avoid enhanced security mechanisms being a drag on an organisation’s productivity.

Mole could have been prevented by ad-blocking browser extensions. The advertising industry is in open war against ad-blocking because it harms their income stream, but while they keep on spreading malware through their networks I have limited sympathy.

Well maintained and protected backups are essential to allow recovery, whether from ransomware, purely destructive attacks, or hardware failure. The security techniques above are effective, but these measures will not prevent every attack so mechanisms are needed to efficiently deal with the aftermath.

Most importantly we need to move away from security being a set of traditions passed from generation to generation with little or no reason to believe they are effective (so called “best practice”) to well engineered systems following rigorous, evidence-based guidance on state of the art cybersecurity principles, standards and practices.

Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected in WannaCry ransomware attack (source Wikipedia, User:Roke)

Let’s start with the advice that is being handed out. Much of it is unhelpful at best and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments, cannot take ten minutes to ponder each one before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. So users are simultaneously exhorted to be “vigilant” and not open suspicious emails, yet required to open an email attachment in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre, offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other, instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing, as a society, to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users and enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently-connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.

 

This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

Online security won’t improve until companies stop passing the buck to the customer

It’s normally in the final seconds of a TV or radio interview that security experts get asked for advice for the general public – something simple, unambiguous, and universally applicable. It’s a fair question, and what the public want. But simple answers are usually wrong, and can do more harm than good.

For example, take the UK government’s Cyber Aware scheme to educate the public in cybersecurity. It recommends individuals choose long and complex passwords made out of three words. The problem with this advice is that the resulting passwords are hard to remember, especially as people have many passwords and use some infrequently. Consequently, they will be tempted to use the same password on multiple websites.

Password re-use is far more of a security problem than insufficiently complex passwords, so advice that doesn’t help people manage multiple passwords does more harm than good. Instead, I would recommend remembering your most important passwords (such as for banking and email), and storing the rest in a password manager. This approach isn’t perfect or suitable for everyone, but for most people it will improve their security.

Advice unfit for the real world

Cyber Aware also tells people not to write down their passwords, or let anyone else know them – banks require the same thing. But we know that people commonly share their banking credentials with family, for legitimate reasons. People also realise that writing down passwords is a pretty good approach if you’re only worried about internet hackers, rather than people who can get close to you to see the written notes. Security advice that doesn’t stand up to scrutiny or doesn’t fit with people’s lives will be ignored – and will discredit the organisation offering it.

Because everyone’s situation is different, good security advice should include helping people to understand what risks they should be worried about, and to take steps that mitigate these risks. This advice doesn’t have to be complicated. Teen Vogue published a tutorial on how to select and configure a secure messaging tool, which very sensibly explains that if you are more worried about invasions of privacy from people who can get their hands on your phone, you should make different choices than if you are just concerned about, for example, companies spying on you.

The Teen Vogue article was widely praised by security experts, in stark contrast to an article in The Guardian that made the eye-catching claim that encrypted messaging service WhatsApp is insecure, without making clear that this only applies in an obscure and extremely unlikely set of circumstances.

Zeynep Tufekci, a researcher studying the effects of technology on society, reported that the article was exploited to legitimise misleading advice given by the Turkish government that WhatsApp is unsafe, resulting in human rights activists using SMS instead – which is far easier for the government to censor and monitor.

The Turkish government’s “security advice” to move from WhatsApp to less secure SMS was clearly aimed more at assisting its surveillance efforts than at helping the activists to whom the advice was directed. Another case where the advice is more for the benefit of the organisation giving it is that of banks, whose terms-and-conditions small print gives incomprehensible security advice that isn’t really security advice at all, but merely a legal technique to give the banks wiggle room to refuse to refund victims of fraud.

Continue reading Online security won’t improve until companies stop passing the buck to the customer

Strong Customer Authentication in the Payment Services Directive 2

Within the European Union, banks have been regulated since 2007 by the Payment Services Directive. This directive sets out which types of institutions can offer payment services, and what rules they must follow. Importantly for customers, these rules include the circumstances in which a fraud victim is entitled to a refund. In 2015 the European Parliament adopted a substantial revision to the directive, the Payment Services Directive 2 (PSD2), and it will soon be implemented by EU member states. One of the major changes in PSD2 is the requirement for banks to implement Strong Customer Authentication (SCA) for transactions, more commonly known as two-factor authentication – authentication codes based on two or more elements selected from something only the user knows, something only the user possesses, and something the user is. Moreover, the authentication codes must be linked to the recipient and amount of the transaction, of which the customer must be made aware.
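
The requirement that codes be linked to the recipient and amount (often called dynamic linking) can be illustrated with a toy calculation: if the authentication code is computed over the transaction details, a code issued for one payment cannot be replayed to authorise a different one. The sketch below illustrates the principle only – it is not the algorithm mandated by PSD2 or the RTS, and the key and field names are made up.

```python
# Illustrative sketch of "dynamic linking": the authentication code is
# computed over the amount and payee, so a code authorising one transaction
# cannot be reused to authorise a different one.

import hmac, hashlib

def authentication_code(device_key, amount, payee_iban, challenge):
    message = challenge + amount.encode() + payee_iban.encode()
    return hmac.new(device_key, message, hashlib.sha256).hexdigest()[:8]

key = b"secret shared with the customer's device"   # hypothetical device key
challenge = b"bank-issued nonce"

code = authentication_code(key, "150.00", "DE89370400440532013000", challenge)

# The bank recomputes the code over the transaction it is about to execute.
# If an attacker alters the amount or payee, the codes no longer match.
assert code == authentication_code(key, "150.00", "DE89370400440532013000", challenge)
assert code != authentication_code(key, "9500.00", "DE89370400440532013000", challenge)
print("code:", code)
```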

The PSD2 does not detail the requirements of Strong Customer Authentication, nor the permitted exemptions to this rule. Instead, these decisions are to be made by the European Banking Authority (EBA) through Regulatory Technical Standards (RTS). As part of the development of these technical standards the EBA opened an initial discussion, to which we submitted a response based on our research on the security usability of banking authentication. Based on the discussion, the EBA produced a consultation paper incorporating a set of draft technical standards. In our response to this consultation paper, included below, we detailed how research, both on security usability and on banking authentication more broadly, should guide the assessment of Strong Customer Authentication. Specifically, we point out that there is an incorrect assumption of an inherent trade-off between security and usability, that for a system to be secure it must be usable, and that evaluation of Strong Customer Authentication systems should be independent, transparent, and follow principles developed from the latest research.

False trade-off between security and usability

In the reasoning presented in the consultation paper there is an assumption that a trade-off must be made between security and usability, e.g. paragraph 6 “Finally, the objective of ensuring a high degree of security and safety would suggest that the [European Banking Authority’s] Technical Standards should be onerous in terms of authentication, whereas the objective of user-friendliness would suggest that the [Regulatory Technical Standards] should rather promote the competing aim of customer convenience, such as one-click payments.”

This security/usability trade-off is not inherent to Strong Customer Authentication (SCA), and in fact the opposite is more commonly true: in order for SCA to be secure it must also be usable “because if the security is usable, users will do the security tasks, rather than ignore or circumvent them”. Also, SCA that is usable will make it more likely that customers will detect fraud because they will not have to expend their limited attention on just performing the actions required to make the SCA work. A small subset (10–15%) of participants in some studies reasoned that the fact that a security mechanism required a lot of effort from them meant it was secure. But that is a misconception that must not be used as an excuse for effortful authentication procedures.

Continue reading Strong Customer Authentication in the Payment Services Directive 2

Steven Murdoch – Privacy and Financial Security

Probably not too many academic researchers can say this: some of Steven Murdoch’s research leads have arrived in unmarked envelopes. Murdoch, who has moved to UCL from the University of Cambridge, works primarily in the areas of privacy and financial security, including a rare specialty you might call “crypto for the masses”. It’s the financial security aspect that produces the plain, brown envelopes and also what may be his most satisfying work, “Trying to help individuals when they’re having trouble with huge organisations”.

Murdoch’s work has a twist: “Usability is a security requirement,” he says. As a result, besides writing research papers and appearing as an expert witness, his past includes a successful start-up. Cronto, which developed a usable authentication device, was acquired by VASCO, a market leader in authentication, and its technology is now used by banks such as Commerzbank and Rabobank.

Developing the Cronto product was, he says, an iterative process that relied on real-world testing: “In research into privacy, if you build an unusable system two things will go wrong,” he says. “One, people won’t use it, so there’s a smaller crowd to hide in.” This issue affects anonymising technologies such as Mixmaster and Mixminion. “In theory they have better security than Tor but no one is using them.” And two, he says, “People make mistakes.” A non-expert user of PGP, for example, can’t always accurately identify which parts of the message are signed and which aren’t.

The start-up experience taught Murdoch how difficult it is to get an idea from research prototype to product, not least because what works in a small case study may not when deployed at scale. “Selling privacy remains difficult,” he says, noting that Cronto had an easier time than some of its forerunners since the business model called for sales to large institutions. The biggest challenge, he says, was not consumer acceptance but making a convincing case that the predicted threats would materialise and that a small company could deliver an acceptable solution.

Continue reading Steven Murdoch – Privacy and Financial Security

Do you know what you’re paying for? How contactless cards are still vulnerable to relay attack

Contactless card payments are fast and convenient, but convenience comes at a price: they are vulnerable to fraud. Some of these vulnerabilities are unique to contactless payment cards, and others are shared with the Chip and PIN cards – those that must be plugged into a card reader – on which they are based. Both are vulnerable to what’s called a relay attack. The risk for contactless cards, however, is far higher because no PIN is required to complete the transaction. Consequently, the card payments industry has been working on ways to solve this problem.

The relay attack is also known as the “chess grandmaster attack”, by analogy to the ruse in which someone who doesn’t know how to play chess can beat an expert: the player simultaneously challenges two grandmasters to an online game of chess, and uses the moves chosen by the first grandmaster in the game against the second grandmaster, and vice versa. By relaying the opponents’ moves between the games, the player appears to be a formidable opponent to both grandmasters, and will win (or at least force a draw) in one match.

Similarly, in a relay attack the fraudster’s fake card doesn’t know how to respond properly to the payment terminal because, unlike a genuine card, it doesn’t contain the cryptographic key known only to the card and the bank that verifies the card is genuine. But like the fake chess grandmaster, the fraudster can relay the communication of the genuine card in place of the fake card.

For example, the victim’s card (Alice, in the diagram below) would be in a fake or hacked card payment terminal (Bob) and the criminal would use the fake card (Carol) to attempt a purchase in a genuine terminal (Dave). The bank would challenge the fake card to prove its identity; this challenge would be relayed to the genuine card in the hacked terminal, and the genuine card’s response relayed back on behalf of the fake card to the bank for verification. The end result is that the terminal used for the real purchase sees the fake card as genuine, and the victim later finds an unexpected and expensive purchase on their statement.
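
A toy model makes the weakness concrete: if the only check is a challenge–response computed from a key shared between the card and the bank, a fake card that simply relays messages to the genuine card will pass it. The sketch below is exactly that toy model, using an HMAC in place of the real EMV cryptograms; the class and variable names follow the Alice/Bob/Carol/Dave roles above and are otherwise made up.

```python
# Toy model of the relay attack (Alice = genuine card, Bob = fake terminal,
# Carol = fake card, Dave = genuine terminal). Real EMV is far more complex;
# this only illustrates why relaying defeats a plain challenge-response check.

import hmac, hashlib, os

class GenuineCard:                      # Alice: holds the secret shared with the bank
    def __init__(self, key):
        self.key = key
    def respond(self, challenge):
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

class FakeCard:                         # Carol: has no key, just a radio link to Bob
    def __init__(self, link_to_fake_terminal):
        self.link = link_to_fake_terminal
    def respond(self, challenge):
        return self.link(challenge)     # relay the challenge to the genuine card

card_key = os.urandom(32)               # known only to the card and the bank
alice = GenuineCard(card_key)

# Bob, the fake terminal holding Alice's card, simply passes challenges to her.
bob = lambda challenge: alice.respond(challenge)
carol = FakeCard(bob)

# Dave, the genuine terminal, challenges the card and the bank checks the answer.
challenge = os.urandom(16)
response = carol.respond(challenge)
expected = hmac.new(card_key, challenge, hashlib.sha256).digest()
print("Bank accepts fake card:", hmac.compare_digest(response, expected))  # True
```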

A rigged payment terminal capable of performing the relay attack can be made from off-the-shelf components
The relay attack, where the cards and terminals can be at any distance from each other

Demonstrating the grandmaster attack

I first demonstrated that this vulnerability was real with my colleague Saar Drimer at Cambridge, showing on television how the attack could work in Britain in 2007 and in the Netherlands in 2009.

In our scenario, the victim put their card into a fake terminal thinking they were buying a coffee, when in fact their card details were relayed by a radio link to another shop, where the criminal used a fake card to buy something far more expensive. The fake terminal showed the victim only the price of a cup of coffee, but when the bank statement arrived later the victim got an unpleasant surprise.

At the time, the banking industry agreed that the vulnerability was real, but argued that as it was difficult to carry out in practice it was not a serious risk. It’s true that, to avoid suspicion, the fraudulent purchase must take place within a few tens of seconds of the victim putting their card into the fake terminal. But this restriction only applies to the Chip and PIN contact cards available at the time. The same vulnerability applies to today’s contactless cards, only now the fraudster need only be physically near the victim at the time – contactless cards can communicate at a distance, even while the card is in the victim’s pocket or bag.

Continue reading Do you know what you’re paying for? How contactless cards are still vulnerable to relay attack

Microsoft Ireland: winning the battle for privacy but losing the war

On Thursday, Microsoft won an important federal appeals court case against the US government. The case centres on a warrant issued in December 2013, requiring Microsoft to disclose emails and other records for a particular msn.com email address which was related to a narcotics investigation. It transpired that these emails were stored in a Microsoft datacenter in Ireland, but the US government argued that, since Microsoft is a US company and can easily copy the data into the US, a US warrant would suffice. Microsoft argued that the proper way for the US government to obtain the data is through the Mutual Legal Assistance Treaty (MLAT) between the US and Ireland, where an Irish court would decide, according to Irish law, whether the data should be handed over to US authorities. Part of the US government’s objection to this approach was that the MLAT process is sometimes very slow, although the Irish government has committed to consider any such request “expeditiously”.

The appeal court decision is an important victory for Microsoft (following two lower courts ruling against them) because they sell their European datacenters as giving their European customers confidence that their data will be subject to the more stringent European privacy laws. Microsoft’s case was understandably supported by other technology companies in the same position, as well as civil liberties organisations such as the Electronic Frontier Foundation in the US and the Open Rights Group in the UK. However, I have mixed opinions about the outcome: while probably the right decision in this case, the wider consequences could be detrimental to privacy.

Both sides of the case wanted to set a precedent (if not legally, at least in practice). The US government wanted US law to apply to data held by US companies, wherever in the world the data resides. Microsoft wanted the location of the data to imply which legal regime applied, and so their customers could be confident that their own country’s laws will be respected, provided Microsoft have a datacenter in their own country (or at least one with compatible laws). My concern is that this ruling will give false assurance to customers of US companies, because in other circumstances a different decision could quite easily be taken.

We know about this case because Microsoft chose to challenge it in court, and were able to do so. This is the first time Microsoft has challenged a US warrant for data stored in their Irish datacenter, despite it being in operation for three years prior to the case. Had the email address been associated with a more serious crime, or the demand for emails accompanied by a gagging order, it may not have been challenged. Microsoft and other technology companies may still choose to accept, or may even be forced to accept, the applicability of future US warrants to data they control, regardless of the court decision last week. One extreme way of compelling this would be for the US to jail employees until their demands are complied with.

For this reason, I have argued that control over data is more important than where data resides. If a company does not have the technical capability to comply with an order, it is easier for them to defend their case, and doing so protects both the company’s customers and its staff. Microsoft have taken precisely this approach for their new German datacenters, which will be operated by staff in Germany working for a German “data trustee” (Deutsche Telekom). In contrast to their Irish datacenter, Microsoft staff will be unable to access customer data, except with the permission of and under the oversight of the data trustee.

While the data trustee model resists information being obtained through improper legal means, a malicious employee could still break the rules for personal gain, or the systems designed to process legal requests could be hacked. With modern security techniques it is possible to do better. End-to-end encryption for instant messaging is one such example, because (if designed properly) the communications provider does not have access to the messages it carries. A more sophisticated approach is “distributed consensus”, where a decision is only taken if a majority of participants agree. The consensus process is automated and enforced through cryptography, ensuring that the rules are respected even if some participants are malicious. Critical decisions in the Tor network and in Bitcoin are taken this way. More generally, there is a growing recognition that purely legal or procedural mechanisms are insufficient to protect privacy. This is one of the common threads present in much of the research presented at the Privacy Enhancing Technologies Symposium, being held this week in Darmstadt: recognising that there will always be imperfections in software, people and procedures, and showing that nevertheless individuals’ privacy can still be protected.
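
As an illustration of the distributed-consensus idea, the sketch below accepts a decision only when a majority of participants have signed it, so no single compromised participant (or coerced operator) can act alone. It is a minimal toy, not how Tor’s directory consensus or Bitcoin actually work, and it assumes the third-party cryptography package.

```python
# Minimal sketch of majority-based consensus: a decision only takes effect
# if more than half of the participants have signed it, enforced by checking
# digital signatures rather than trusting any single operator.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

participants = [Ed25519PrivateKey.generate() for _ in range(5)]
public_keys = [p.public_key() for p in participants]

def approved(decision, signatures):
    """Accept the decision only if a majority of participants signed it."""
    valid = 0
    for pub, sig in zip(public_keys, signatures):
        if sig is None:
            continue
        try:
            pub.verify(sig, decision)
            valid += 1
        except InvalidSignature:
            pass
    return valid > len(public_keys) // 2

decision = b"release data to requester #42"

# Only two of five participants agree: the decision is rejected.
sigs = [participants[0].sign(decision), participants[1].sign(decision), None, None, None]
print(approved(decision, sigs))   # False

# Three of five agree: the decision takes effect.
sigs[2] = participants[2].sign(decision)
print(approved(decision, sigs))   # True
```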

Cybersecurity: Supporting a Resilient and Trustworthy System for the UK

Yesterday, the Royal Society published their report on cybersecurity policy, practice and research – Progress and Research in Cybersecurity: Supporting a Resilient and Trustworthy System for the UK. The report includes 10 recommendations for government, industry, universities and research funders, covering the topics of trust, resilience, research and translation. This major report was written based on evidence gathered from an open call, as well as meetings with key stakeholders, guided by a steering committee which included UCL members M. Angela Sasse and Steven Murdoch. Here, we summarise what we think are the most important signposts for cybersecurity research and practice.

The report points out that, as online technology and services touch nearly everyone’s lives, the role of cybersecurity is to support a resilient digital economy and society in the UK. Previously, the government focus was very much on national security – but it is just as important that we are able to secure our personal data, financial assets and homes, and that our decisions as consumers and citizens are not manipulated or subverted. The report rightly states that the national authority for cybersecurity needs to be transparent, expert and have a clear and widely understood remit. The creation of the National Cyber Security Centre (NCSC) may be a first step towards this, but the report also points out that currently it is to be under the control of GCHQ – and this is bound to be a problem given the lack of trust GCHQ has from parts of industry and civil society, as a result of its role in subverting the development of security standards in order to make surveillance easier.

The report furthermore recommends that the government preserves the robustness of encryption, including end-to-end encryption, and promotes its widespread use. Encryption and other computer security measures provide the foundation that allows individuals to trust organisations, and attempts to weaken these measures in order to facilitate surveillance will create security risks and reduce robustness. Whether weaknesses are created by requiring fragile encryption algorithms or by mandating exceptional access, these attempts increase the risk of unauthorised parties gaining access to sensitive computer systems.

The report also rightly says that companies need to take more responsibility for cybersecurity: to be a trustworthy business partner or service provider, they need to be competent and have the correct motivation. “Dumping” the risks associated with online transactions on customers or business partners who don’t have the skills and resources to deal with them, and hiding this in complex terms and conditions, is not trustworthy behaviour. Making companies take liability for security failures will likely play a part in improving trustworthiness, but it needs to be done carefully. Important open source software such as OpenSSL is developed by a handful of people in their spare time. When something goes wrong (such as Heartbleed), multi-billion-dollar companies who built their business around open source software without contributing to it or even properly evaluating the risk should not be able to assign liability to the volunteer developers. Companies should also be transparent and be required to disclose vulnerabilities and breaches. The report calls for such disclosures to be made to a central body, but we would go further and recommend that they be disclosed to the customers exposed to risk as a result of the security failures.

In order to improve and demonstrate competence in cybersecurity, we need evidence-based guidance on state-of-the-art cybersecurity principles, standards and practices. These go further than just following widely used industry practice, or following craft knowledge based on expert opinion; they should be an ambitious set of criteria which have been demonstrated to make a pronounced improvement in security. A significant effort is required to transform what is currently a set of common practices (the term “best practice” is a misnomer) through empirical tests and measurements into a set of practices and tools that we know to be effective and efficient under real-world conditions (this is the mission of the Research Institute in Science of Cyber Security (RISCS), which has just started a new five-year phase). The report in particular calls for research on ways to quantify the security offered by anonymisation algorithms and anonymous communication techniques, as these perform a critical role in supporting privacy by design.

The report calls for more research, and new means to assess and support research. Cybersecurity is an international field, and research funders should seek for peer review to be performed by the best expertise available internationally, and remove barriers to international and multidisciplinary research. However, supporting multidisciplinary research should not be at the expense of addressing the many hard technical problems which remain. The report also identifies the benefits of challenge-led funding, where a research programme is led by a world-leading expert with substantial freedom in how research funds are distributed. For this model to work it is critical to create the right environment for recruiting international experts to both lead and participate in such challenges – something which, as fellow steering-group member Ross Anderson has pointed out, the vote to leave the EU has seriously harmed. Finally, the report calls for improvements to the research commercialisation process, including that universities prioritise getting research out into the real world over trying to extract as much money as possible, and that new investment sources are developed to fill the gaps left by traditional venture capital, such as for software developed for the public good.

Exceptional access provisions in the Investigatory Powers Bill

The Investigatory Powers Bill, being debated in Parliament this week, proposes the first wide-scale update in 15 years to the surveillance powers of the UK law-enforcement and intelligence agencies.

The Bill has several goals: to consolidate some existing surveillance powers currently either scattered throughout other legislation or not even publicly disclosed, to create a wide range of new surveillance powers, and to change the process of authorisation and oversight surrounding the use of surveillance powers. The Bill is complex and, at 245 pages long, makes scrutiny challenging.

The Bill has had its first and second readings in the House of Commons, and has been examined by relevant committees in the Commons. The Bill will now be debated in the ‘report stage’, where MPs will have the chance to propose amendments following committee scrutiny. After this it will progress to a third reading, and then to the House of Lords for further debate, followed by final agreement by both Houses.

So far, four committee reports have been published examining the draft Bill, from the Intelligence and Security Committee of Parliament, the joint House of Lords/House of Commons committee specifically set up to examine the draft Bill, the House of Commons Science and Technology committee (to which I served as technical advisor) and the Joint Committee on Human Rights.

These committees were faced with the difficult task of meeting an accelerated timetable for the Bill, with the government aiming to have it become law by the end of 2016. The reason for the haste is that the Bill would re-instate and extend the ability of the government to compel companies to collect data about their users, even without there being any suspicion of wrongdoing, known as “data retention”. This power was previously set out in the EU Data Retention Directive, but in 2014 the European Court of Justice found it to be unlawful.

Emergency legislation passed to temporarily permit the government to continue their activities will expire in December 2016 (but may be repealed earlier if an appeal to the European Court of Justice succeeds).

The four committees which examined the Bill together made 130 recommendations, but since the draft was published the government has only slightly changed the Bill, and only a few minor amendments were accepted by the Public Bill Committee.

Many questions remain about whether the powers granted by the Bill are justifiable and subject to adequate oversight, but where insights from computer security research are particularly relevant is on the powers to grant law enforcement the ability to bypass normal security mechanisms, sometimes termed “exceptional access”.

Continue reading Exceptional access provisions in the Investigatory Powers Bill