Security code AutoFill: is this new iOS feature a security risk for online banking?

A new feature for iPhones in iOS 12 – Security Code AutoFill – is supposed to improve the usability of Two Factor Authentication but could place users at risk of falling victim to online banking fraud.

Two Factor Authentication (2FA), which is often referred to as Two Step Verification, is an essential element for many security systems, especially those online and accessed remotely. In most cases, it provides extended security by checking if the user has access to a device. In SMS-based 2FA, for example, a user registers their phone number with an online service. When this service sees a login attempt for the corresponding user account, it sends a One Time Password (OTP), e.g. four to six digits, to the registered phone number. The legitimate user then receives this code and is able to quote it during the login process, but an impersonator won’t be able to.
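To make the mechanics concrete, here is a minimal sketch of what the service’s side of that exchange might look like. It is written in Python purely for illustration; the six-digit format, the five-minute lifetime, and the in-memory store are assumptions, not a description of any particular provider’s system.

```python
import hmac
import secrets
import time

OTP_LIFETIME_SECONDS = 300  # assumed lifetime: codes expire after five minutes

def issue_otp(pending: dict, user_id: str) -> str:
    """Generate a random six-digit code and record when it was issued.

    The returned code would then be handed to an SMS gateway for delivery."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending[user_id] = (code, time.time())
    return code

def verify_otp(pending: dict, user_id: str, submitted: str) -> bool:
    """Accept a code only once, and only while it is still fresh."""
    entry = pending.pop(user_id, None)      # single use: discard on first attempt
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > OTP_LIFETIME_SECONDS:
        return False
    return hmac.compare_digest(code, submitted)  # constant-time comparison

# The service issues a code at login time and later checks the user's reply.
pending_codes = {}
sent = issue_otp(pending_codes, "alice")
print(verify_otp(pending_codes, "alice", sent))  # True on a first, timely attempt
```

In a real deployment the pending codes would live in a database and delivery would go through an SMS gateway, but the essential properties (random, short-lived, single-use) are the same.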

At its developer conference WWDC18, Apple announced that it will automate this last step in iOS 12, improving the user experience of 2FA. The Security Code AutoFill feature, currently available to developers in a beta version, will allow the mobile device to scan incoming SMS messages for such codes and suggest them at the top of the default keyboard.

Description of new iOS 12 Security Code AutoFill feature (source: Apple)
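Apple has not published how the message scanning works, so the following is only a plausible approximation: a keyword-and-pattern match over the incoming SMS, written in Python with an entirely assumed heuristic.

```python
import re

# Assumed heuristic, not Apple's actual implementation: look for a short digit
# sequence in a message that also contains a 2FA-style keyword.
CODE_PATTERN = re.compile(r"\b(\d{4,8})\b")
HINT_WORDS = ("code", "passcode", "verification", "otp", "one-time")

def suggest_code(sms_text: str):
    """Return a candidate one-time code if the message looks like a 2FA SMS."""
    if not any(word in sms_text.lower() for word in HINT_WORDS):
        return None
    match = CODE_PATTERN.search(sms_text)
    return match.group(1) if match else None

print(suggest_code("Your Example Bank verification code is 482913"))  # -> 482913
print(suggest_code("See you at the station at 1830"))                 # -> None
```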

Currently, using these SMS codes requires the user to actively switch apps and memorise the code, which can take a couple of seconds. Some users deploy alternative strategies, such as memorising the code from the preview banner and hastily typing it in. Apple’s new iOS feature will require only a single tap from the user. This will make the login process faster and less error-prone, a significant improvement to the usability of 2FA. It could also translate into an increased uptake of 2FA among iPhone users.

Example of Security Code AutoFill feature in operation on iPhone (source: Apple)

If users synchronise SMS with their MacBook or iMac, the existing Text Message Forwarding feature will push codes from their iPhone and enable Security Code AutoFill in Safari.

Example of Security Code AutoFill feature synchronised with macOS Mojave (source: Apple)

Reducing friction in user interaction to improve technology uptake for new users, and increase the usability and satisfaction for existing users, is not a new concept. It has not only been discussed in academia at length but is also a common goal within industry, e.g. in banking. This is evident in how the financial and payment industry has encouraged contactless (Near Field Communication – NFC) payments, which make transactions below a certain threshold much quicker than traditional Chip and PIN payments.


Improving the auditability of access to data requests

Data is increasingly collected and shared, with potential benefits for both individuals and society as a whole, but people cannot always be confident that their data will be shared and used appropriately. Decisions made with the help of sensitive data can greatly affect lives, so there is a need for ways to hold data processors accountable. This requires not only ways to audit these data processors, but also ways to verify that the reported results of an audit are accurate, while protecting the privacy of individuals whose data is involved.

We (Alexander Hicks, Vasilios Mavroudis, Mustafa Al-Bassam, Sarah Meiklejohn and Steven Murdoch) present a system, VAMS, that allows individuals to check accesses to their sensitive personal data, enables auditors to detect violations of policy, and allows publicly verifiable and privacy-preserving statistics to be published. VAMS has been implemented twice, as a permissioned distributed ledger using Hyperledger Fabric and as a verifiable log-backed map using Trillian. The paper and the code are available.
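The paper describes the full design, but the underlying idea of a tamper-evident record of data-access requests can be illustrated with a toy hash-chained log. This is a simplification for intuition only; the actual prototypes are built on Hyperledger Fabric and Trillian, not on the structure sketched here, and the record fields below are made up.

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the hash of everything that came before it."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class ToyAuditLog:
    """Append-only log in which each entry commits to the entire history before it."""

    def __init__(self):
        self.entries = []        # list of (record, running hash)
        self.head = "0" * 64     # genesis value

    def append(self, record: dict) -> str:
        self.head = entry_hash(self.head, record)
        self.entries.append((record, self.head))
        return self.head

    def verify(self) -> bool:
        """An auditor recomputes the chain; any altered or removed entry breaks it."""
        h = "0" * 64
        for record, stored in self.entries:
            h = entry_hash(h, record)
            if h != stored:
                return False
        return True

log = ToyAuditLog()
log.append({"requester": "officer-17", "subject": "user-42", "purpose": "warrant-0001"})
print(log.verify())  # True unless an entry is tampered with after the fact
```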

Use cases and setting

Our work is motivated by two scenarios: controlling the access of law-enforcement personnel to communication records and controlling the access of healthcare professionals to medical data.

The UK Home Office states that 95% of serious and organised criminal cases make use of communications data. Annual reports published by the IOCCO (now under the IPCO name) provide some information about the request and use of communications data. There were over 750,000 requests for data in 2016, a portion of which were audited to provide the usage statistics and errors that can be found in the published report.

Not only is it important that requests are auditable, but the requested data can also be used as evidence in legal proceedings. In this case, it is necessary to ensure the integrity of the data or to rely on representatives of data providers and expert witnesses, the latter being more expensive and requiring trust in third parties.

In the healthcare case, individuals usually consent for their GP or any medical professional they interact with to have access to relevant medical records, but may have concerns about the way their information is then used or shared.  The NHS regularly shares data with researchers or companies like DeepMind, sometimes in ways that may reduce the trust levels of individuals, despite the potential benefits to healthcare.


Scanning beyond the horizon: long-term planning for cybersecurity and the post-quantum challenge

I recently came across an interesting white paper published by PwC, “A false sense of security? Cyber-security in the Middle East”. This paper is interesting for a number of reasons. Most obviously, I guess, it’s about an area of the world that’s a bit different from that of my immediate experience in the West and which faces many well-reported challenges. Indeed, it seems, as reported in the PwC paper, that companies and governments in the region suffer from more cyberattacks, resulting in bigger financial losses, than anywhere else in the world.

The paper confirms that many of the problems faced by companies and governments in the Middle East are, as of course one would expect, exactly those faced by their Western counterparts – too often, the cybersecurity industry responds to incidents in a fire-fighting style, rolling out patches in rushed knee-jerk reactions to imminent threats.

The way to counteract these problems is, of course, to train cybersecurity professionals who will be capable of making appropriate strategic and tactical investments in security and able to respond better to attacks. All well and good, but there is a global skills deficit in the cybersecurity industry, and this problem seems to be particularly acute in the Middle East, where it is a notable contributory factor to the problems experienced in the region. The problem needs some long-term thinking: in the average user, we need to encourage good security behaviours, which are learned over many years; in the security profession, we need to ensure that there is sufficient upcoming talent to fill our growing needs over the next century.

Exploring this topic a bit, I came across a company called SiConsult, a security services provider (with which I have no personal connection) with offices in the Middle East. They have launched an initiative which provides students (or, indeed, anyone, I think) with an interesting opportunity. They have been thinking about cryptography in the post-quantum world, and how to develop solutions and relevant expertise in the long term.

All public key cryptography as we currently know it may be rendered insecure by the deployment of quantum computers. Your Internet connection to the bank, the keys protecting your Dropbox, and your secure messaging applications will all be compromised. But a quantum computer that can run Shor’s algorithm, which means large numbers can be factorized in polynomial time, is still maybe ten years away (or five, or twenty, or … ). So why should we care now? Well, the consequences of losing the protection of good public-key crypto would be very serious and, consequently, NIST (the US’s National Institute of Standards and Technology) is running a process to standardize quantum-resistant algorithms. The first round of submissions has just closed, but we will have to wait until 2025 for draft standards, which could be too late for some use cases.
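To see why efficient factoring matters so much, consider textbook RSA with toy numbers: once the factors of the public modulus are known (which is exactly what a large quantum computer running Shor’s algorithm would deliver), the private key can be derived immediately. The sketch below is illustrative only, with numbers far too small to be secure.

```python
# Toy illustration of why fast factoring breaks RSA. In practice p and q are
# secret primes hundreds of digits long; here they are tiny so the arithmetic
# is easy to follow.
p, q = 61, 53
n = p * q                    # public modulus (normally the factors are unknown)
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # computable only if you can factor n
d = pow(e, -1, phi)          # private exponent via modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)      # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)    # the "attacker" decrypts with the derived key
print(recovered == message)          # True: confidentiality is gone
```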

As a result of the process timeline, companies and academics are likely to search for their own solutions long before NIST standardizes theirs. SiConsult, the company I mentioned, is inviting students (or anyone else) to develop a quantum-safe messaging application for a small prize – the Post Quantum Innovation Challenge. What is interesting is that the company’s motivation here is not purely financial – they are not looking to retain ownership of any designs or applications that may be submitted to the competition – but instead they are looking to spark interest in post-quantum cryptography, search for new cybersecurity talent, and encourage cybersecurity education, especially in the Middle East.

Initiatives like the Post Quantum Innovation Challenge are needed to energise those who may be considering a career in cybersecurity, to make sure that the talent pipeline is flowing well for years to come. Importantly, the barrier for entry to PQIC is relatively low: anyone with an interest in security should consider entering. Perhaps it will go a little way towards a solution to both the quantum and education long-term problems.

Thinking about fake news – As a security incident?

In Tristan and David’s Philosophy, Politics and Economics of Security and Privacy class, Jono gave a little information about incident response.  As a result, we have been thinking about the recent furor over fake news. There are some big questions circling this topic, and we’re going to try to focus on a part we have some competence in: what an understanding of fake news as a security incident can contribute to the wider debate. Our goal here is mostly to highlight some lessons from security research that should be applicable, so we can help constrain the solution space. Ultimately, any solution will need to engage with wider civil society.

The lessons we will argue for in the following are:

  • Solutions need to support the elector’s primary task. Education to avoid cognitive biases is not a short- or medium-term solution.
  • Focus on aligning the incentives of the media companies and the voters. Reduce the return on investment for the adversary.
  • Any blocking should be strategically useful, and not merely reactionary.

First, we want a more specific term, as well as a less charged one. Fake news includes politically or financially motivated stories presented as factual reports on the world that are fictional in material ways, and usually are intended to stir strong feelings. This definition is hardly complete. Furthermore, similar to the term “post-truth” as discussed by Jasanoff and Simmet, the term “fake news” makes several value judgements we’d like to avoid. “Fake news” carries a strong suggestion that we, the speakers, know what is true and what isn’t, and it also indicates some condescension by the speaker for anyone who believes an item of fake news. We want to avoid such insults. Instead, let’s say we want to focus on the following hypothetical security policy: democratic elections should be free from foreign interference.

Grounding out this policy definition hangs on the term “interference.” This is hard. Ultimately, the will of an elector in a free and fair election needs to be respected. This makes it particularly challenging to agree on constraints on what information an elector has access to. In practice, no elector is omniscient, so some constraints de facto exist. But weighing in on this issue is outside our competence. Let’s assume for now that public policy will provide an assessment of “interference” eventually. The UK recently announced that a “dedicated national security communications unit” would be charged with “combating disinformation by state actors and others.” In France, Emmanuel Macron plans legislation to fight interference from foreign sources during elections. Various social media platforms have likewise announced attempted fixes, which means they have some functional definition of what “interference” they’re seeking to remove. Unfortunately, “none of the tech giants claim to be ready” for the November 2018 elections in the US.

Interference in elections is a type of information warfare. An appropriate security policy needs to assess the threat environment and the capabilities of the adversaries. In particular, the Russian Federation has been assessed as a highly motivated and well-resourced actor in this space. We should note that Russia, in turn, assesses the intent and capability of the USA similarly. Tools and tactics within information warfare, particularly disinformation campaigns, help define “interference” within our security policy.

In this context, what can the security research community recommend? Well, the main targets of disinformation campaigns are ordinary citizens. They are targetable largely due to inherent cognitive biases in the way humans process and reason about information. In security terms, we could see these biases as vulnerabilities in the system. Classically, we have two options to secure the system: patch the vulnerability, or prevent the adversary from exploiting it by controlling or filtering the attack before it reaches the target.

Patch in this case would mean teaching people to avoid cognitive biases in their day-to-day reasoning. Psychology tells us this is hard. Intelligence analysts train for months or years for this. And the research in usable security has affirmed time and time again that the users are not the enemy. That is, the system must alleviate the burden on the user’s attention and not interfere with their primary task, or else the user will subvert or avoid the protections put in place. Any changes in user culture are slow. This leads us to lesson 1 on preventing disinformation campaigns for election interference: solutions need to support the elector’s primary task. Education to avoid cognitive biases is not a short-term or medium-term solution.

Controlling the attack vectors is more promising, although filtering them is not. A key aspect of any information security policy is aligning the economic incentives of the actors. Economics is a main reason why infosec is hard. It may not be easy to reorganize the incentives in the advertising and news distribution media space. However, as long as organizations profit from more clicks on an article no matter the content, there will be an incentive to drive viewers that is ultimately at cross-purposes with our security goal. Such misaligned incentives often swamp any technical security solutions. And any adversary with an economic incentive to attack usually will. Thus our second lesson: focus on aligning the incentives of the media companies and the voters; reduce the return on investment for the adversary. Exactly how to do these things will require future work.

Blocking access to information raises huge issues about human rights and free speech. However, the technical aspects of blacklisting are worth understanding before even attempting such human-rights debates. Blacklists of internet resources, such as domain names, IP addresses, or web pages, are useful. But they’re not a final solution. Whether blacklists move at the speed of national legislatures or are updated every five minutes, their main impact is to cause the adversary to move around. Blacklists alone are not enough. We would need to look for suspiciously mobile resources (i.e. fast-flux), and eventually whitelist resources. Blacklists such as the one implemented by Facebook in response to Congress are helpful. But we should carefully consider how they drive the disinformation campaigns into a place where we are better able to counteract them, and be sure we don’t make such campaigns harder to find instead. Lesson 3 is therefore that any blocking should be strategically useful, and not merely reactionary.
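For a flavour of what “suspiciously mobile resources” means in practice, a very rough fast-flux check resolves the same name repeatedly and counts how many distinct addresses turn up. The sketch below is illustrative only: it assumes the dig tool is installed, the domain and threshold are placeholders, and a real detector would also look at record TTLs and the diversity of hosting networks.

```python
import subprocess
import time

def distinct_addresses(domain: str, rounds: int = 5, pause: float = 2.0) -> set:
    """Resolve a domain several times and collect every IPv4 address returned."""
    seen = set()
    for _ in range(rounds):
        out = subprocess.run(["dig", "+short", domain],
                             capture_output=True, text=True, timeout=10)
        seen.update(line for line in out.stdout.split() if line[0].isdigit())
        time.sleep(pause)
    return seen

# A legitimate site usually maps to a small, stable set of addresses; a fast-flux
# domain cycles through many. The threshold of 10 is an arbitrary placeholder.
ips = distinct_addresses("example.com")
print(len(ips), "distinct addresses seen;",
      "suspicious" if len(ips) > 10 else "unremarkable")
```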

We’d be happy for further comments on fake news, disinformation campaigns that interfere with elections, lessons we’ve missed, disagreements about the value of security research to this topic, and other comments you might have! This is a wide open topic, and we’re still sounding it all out.

An investigation of online censorship in Cyprus

The island of Cyprus, situated in the east of the Mediterranean Sea, has always been an important commercial and information exchange hub. Today, this is reflected in the large number of submarine cables that facilitate telecommunications with neighboring countries (Greece, Turkey, Egypt, Israel, Syria, and Lebanon) and with the rest of the world (reaching as far as India, South Korea, and Australia). Nevertheless, the Republic of Cyprus (RoC) is officially regarded as a freedom of expression safe haven, where “Internet is completely free of any specific regulation”. Unfortunately, Cypriot netizens claim that such statements couldn’t be further from the truth.

In recent years, Internet Service Providers (ISPs) in RoC have implemented an Internet filtering infrastructure to comply with the laws and regulations imposed by the National Betting Authority (NBA). In an effort to understand the capabilities of this infrastructure, a multi-disciplinary group of volunteers from the hack66 Observatory in Nicosia has collected and analyzed connectivity measurements from end-user connections on a variety of websites and services. Their report was presented at the 7th International Conference on e-Democracy.

For their experiments, the hack66 Observatory team put together a test list comprising domains from the National Betting Authority blocklist, the CitizenLab lists for Greece and Turkey, and WordPress blogs banned in Turkey as reported at the Lumen Database. The analysis was based on over 45,000 measurements from four residential ISPs operating in the Republic of Cyprus, which were anonymously submitted using a custom OONI probe during the months of March to May 2017. In addition, the team collected data using open DNS resolvers in Cyprus. Early findings suggest that the most common blocking method is DNS hijacking. Furthermore, the measurements indicate that some of the ISPs have deployed middle-boxes – network components capable of performing censorship, traffic manipulation or surveillance.
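As a rough illustration of the kind of check involved in spotting DNS hijacking, the sketch below compares the answers obtained via the system’s default (ISP-assigned) resolver with those from an independent open resolver. It is not the OONI methodology used in the study: the control resolver and test domain are placeholders, the dig tool is assumed to be installed, and a mismatch is only a signal to investigate, since content delivery networks legitimately return different addresses in different places.

```python
import subprocess

CONTROL_RESOLVER = "9.9.9.9"   # placeholder independent resolver

def resolve(domain: str, resolver: str = "") -> set:
    """Return the set of IPv4 addresses for a domain, optionally via a chosen resolver."""
    cmd = ["dig", "+short", domain] + ([f"@{resolver}"] if resolver else [])
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    return {line for line in out.stdout.split() if line[0].isdigit()}

def possibly_hijacked(domain: str) -> bool:
    """Flag a domain whose ISP-path answers share nothing with an independent resolver."""
    default_answers = resolve(domain)                    # ISP-assigned resolver
    control_answers = resolve(domain, CONTROL_RESOLVER)  # independent resolver
    return bool(control_answers) and default_answers.isdisjoint(control_answers)

print(possibly_hijacked("example.com"))
```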

A closer inspection of the variations in censorship mechanism implementations among ISPs raised concerns with regard to transparency and privacy: some ISPs do not inform users why a blocked website is not accessible, while others redirect requests to a web server controlled by the NBA, which could in turn log user identifiers such as their IP address. Similarly, the hack66 Observatory team was able to identify a number of unreported Internet censorship cases, entries in the NBA blocklist that either are invalid or that require sophisticated blocking techniques, and collateral damage due to blocking of email delivery to the regulated domains.

Understanding the case of Internet freedom in Cyprus becomes more complicated when the geopolitical situation is taken into consideration. Apart from the Republic of Cyprus, the island of Cyprus is divided into three other segments: the self-declared Turkish Republic of Northern Cyprus; the United Nations-controlled Green Line buffer zone; and the Sovereign Base Areas of Akrotiri and Dhekelia that remain under British control for military purposes. Measurements from the Multimax ISP operating in the area occupied by Turkey indicate network interference practices similar to those of mainland Turkey. This could be interpreted as the existence of two distinct regimes in terms of information policy on the island of Cyprus. No volunteers submitted measurements from the UN buffer zone or the British Sovereign bases. However, it is known via the Snowden revelations that GCHQ is operating a wiretap base in Cyprus codenamed “SOUNDER”, jointly funded by the NSA.

The purpose of the hack66 Observatory is “to collect and analyze data, and routes of data through EMEA, […] in order to promote evidence based policy making”. The timing is just right, given the RoC government’s recent announcement of a new bill in the making to regulate media operations and stop fake news. With their report, the hack66 Observatory aims to provide policy makers with a valuable asset for understanding the limitations and implications of the existing censorship infrastructure, and to start a debate around Internet freedom on the entirety of the island of Cyprus.

Liability for push payment fraud pushed onto the victims

This morning, BBC Rip Off Britain focused on push payment fraud, featuring an interview with me (starts at 34:20). The distinction between push and pull payments should be a matter for payment system geeks, and certainly isn’t at the front of customers’ minds when they make a payment. However, there’s a big difference when there’s fraud – for online pull payments (credit and debit card)  the bank will give the victim the money back in many situations; for online push payments (Faster Payment System and Standing Orders) the full liability falls on the party least able to protect themselves – the customer.

The banking industry doesn’t keep good statistics about push payment fraud, but it appears to be increasing, with Which? receiving reports from over 650 victims in the first two weeks of November 2016, with losses totalling over £5.5 million. Today’s programme puts a human face to these statistics by presenting the case of Jane and Steven Caldwell, who were defrauded of over £100,000 from their Nationwide and NatWest accounts.

They were called up at the weekend by someone who said he was working for NatWest. To verify that this was the case, Jane used three methods. Firstly, she checked caller-ID to confirm that the number was indeed the bank’s own customer helpline – it was. Secondly, she confirmed that the caller had access to Jane’s transaction history – he did. Thirdly, she called the bank’s customer helpline, and the caller knew this was happening despite the original call being muted.

Convinced by these checks, Jane transferred funds from her own accounts to another in her own name, having been told by the caller that this was necessary to protect against fraud. Unfortunately, the caller was a scammer. Experts featured on the programme suspect that caller-ID was spoofed (quite easy, due to lack of end-to-end security for phone calls), and that malware on Jane’s laptop allowed the scammer to see transaction history on her screen, as well as to listen to and see her call to the genuine customer helpline through the computer’s microphone and webcam. The bank didn’t check that the name Jane gave (her own) matched that of the recipient account, so the scammer had full access to the transferred funds, which he quickly moved to other accounts. Only Nationwide was able to recover any money – £24,000 – leaving Jane and Steven over £75,000 out of pocket.

Neither bank offered Jane and Steven a refund, because they classed the transaction as “authorised”, so falling into one of the exceptions to the EU Payment Services Directive requirement to refund victims of fraud (the other exception being if the bank believed the customer acted either with gross negligence or fraudulently). The banks argued that their records showed that the customer’s authentication device was used and hence the transaction was “authorised”. In the original draft of the Payment Services Directive this argument would not be sufficient, but as a result of concerted lobbying by Barclays and other UK banks for their records to be considered conclusive, the word “necessarily” was inserted into Article 72, thereby removing this important consumer protection.

“Where a payment service user denies having authorised an executed payment transaction, the use of a payment instrument recorded by the payment service provider, including the payment initiation service provider as appropriate, shall in itself not necessarily be sufficient to prove either that the payment transaction was authorised by the payer or that the payer acted fraudulently or failed with intent or gross negligence to fulfil one or more of the obligations under Article 69.”

Clearly the fraudulent transactions do not meet any reasonable definition of “authorised” because Jane did not give her permission for funds to be transferred to the scammer. She carried out the transfer because the way that banks commonly authenticate themselves to customers they call (proving that they know your account details) was unreliable, because the recipient bank didn’t check the account name, because bank fraud-detection mechanisms didn’t catch the suspicious nature of the transactions, and because the bank’s authentication device is too confusing to use safely. When the security of the payment system is fully under control of the banks, why is the customer held liable when a person acting with reasonable care could easily do the same as Jane?

Another question is whether banks do enough to recover funds lost through scams such as this. The programme featured an interview with barrister Gideon Roseman who quickly obtained court orders allowing him to recover most of his funds lost through a similar scam. Interestingly a side-effect of the court orders was that he discovered that his bank, Barclays, waited more than 24 hours after learning about the fraud before they acted to stop the stolen money being transferred out. After being caught out, Barclays refunded Gideon the affected funds, but in cases where the victim isn’t a barrister specialising in exactly these sorts of disputes, do the banks do all they could to recover stolen money?

In order to give banks proper incentives to prevent push payment fraud where possible and to recover stolen funds in the remainder of cases, Which? called for the Payment Systems Regulator to make banks liable for push payment fraud, just as they are for pull payments. I agree, and expect that if this were the case banks would implement innovative fraud prevention mechanisms against push payment fraud that we currently only see for credit and debit transactions. I also argued that in implementing the revised Payment Services Directive, the European Banking Authority should require banks to provide evidence that a customer was aware of the nature of the transaction and gave informed consent before they can hold the customer liable. Unfortunately, both the Payment Systems Regulator and the European Banking Authority conceded to the banking industry’s request to maintain the current poor state of consumer protection.

The programme concluded with security advice, as usual. Some was actively misleading, such as the claim by NatWest that banks will never ask customers to transfer money between their accounts for security reasons. My bank called me to transfer money from my current account to savings account, for precisely this reason (I called them back to confirm it really was them). Some advice was vague and not actionable (e.g. “be vigilant” – in response to a case where the victim was extremely cautious and still got caught out). Probably the most helpful recommendation is that if a bank supposedly calls you, wait 5 minutes and call them back using the number on a printed statement or card, preferably from a different phone. Alternatively stick to using cheques – they are slow and banks discourage their use (because they are expensive for them to process), but are much safer for the customer. However, such advice should not be considered an alternative to pushing liability back where it belongs – the banks – which will not only reduce fraud but also protect vulnerable customers.

Should you phish your own employees?

No. Please don’t. It does little for security but harms productivity (because staff spend ages pondering emails, and not answering legitimate ones), upsets staff and destroys trust within an organisation.

Why is phishing a problem?

Phishing is one of the more common ways by which criminals gain access to companies’ passwords and other security credentials. The criminal sends a fake email to trick employees into opening a malware-containing attachment, clicking on a link to a malicious website that solicits passwords, or carrying out a dangerous action like transferring funds to the wrong person. If the attack is successful, criminals could impersonate staff, gain access to confidential information, steal money, or disrupt systems. It’s therefore understandable that companies want to block phishing attacks.

Perimeter protection, such as blocking suspicious emails, can never be 100% accurate. Therefore companies often tell employees not to click on links or open attachments in suspicious emails.

The problem with this advice is that it conflicts with how technology works and with how employees get their job done. Links are meant to be clicked on, attachments are meant to be opened. For many employees their job consists almost entirely of opening attachments from strangers, and clicking on links in emails. Even a moderately well targeted phishing email will almost certainly succeed in getting some employees to click on it.

Companies try to deal with this problem through more aggressive training, particularly sending out mock phishing emails that exhibit some of the characteristics of phishing emails but actually come from the IT staff at the company. The company then records which employees click on the link in the email, open the attachment, or provide passwords to a fake website, as appropriate.

The problem is that mock-phishing causes more harm than good.

What harm does mock-phishing cause?

I hope no company would publicly name and shame employees that open mock-phishing emails, but effectively telling your staff that they failed a test and need remedial training will make them feel ashamed despite best intentions. If, as is often recommended, employees who repeatedly open mock-phishing emails are even subjected to disciplinary procedures, not only will mock phishing lead to stress and consequent loss of productivity, but it will make it less likely that employees will report when they have clicked on a real phishing email.

Alienating your employees in this way is really the last thing a company should do if it wants to be secure – something Adams & Sasse pointed out as early as 1999. It is extremely important that companies learn when a phishing email has been opened, because there is a lot that can be done to prevent or limit harm. Contrary to popular belief, attacks don’t generally happen “at the speed of light” (it took three weeks for the Target hackers to steal data, from the point of the initial breach). Promptly cleaning potentially infected computers, revoking compromised credentials, and analysing network logs are extremely effective, but they work only if employees feel that they are on the same side as IT staff.

More generally, mock-phishing conflicts with and harms the trust relationship between the company and employees (because the company is continually probing them for weakness) and between employees (because mock-phishing normally impersonates fellow employees). Kirlappos and Sasse showed that trust is essential for maintaining employee satisfaction and for creating organisational resilience, including the ability to comply with security policies. If unchecked, prolonged resentment within an organisation achieves exactly the opposite – it increases the risk of insider attacks, which in the vast majority of cases start with disgruntlement.

There are however ways to achieve the same goals as mock phishing without the resulting harm.

Measuring resilience against phishing

Companies are right to want to understand how vulnerable they are to attack, and mock-phishing seems to offer this. One problem however is that the likelihood of opening a phishing email depends mainly on how well it is written, and so mock-phishing campaigns tell you more about the campaign than the organisation.

Instead, because every organisation inevitably receives many phishing emails, companies don’t need to send out their own. Use “genuine” phishing emails to collect the data needed, but be careful not to deter reporting. Realistically, however, phishing emails are going to be opened regardless of what steps are taken (short of cutting off Internet email completely). So organisations’ security strategy should accommodate this.

Reducing vulnerability to phishing

Following mock-phishing with training might seem like the perfect time to get employees’ attention, but it is actually an ineffective way to reduce an organisation’s vulnerability to phishing. Caputo et al. tried this out and found that training had no significant effect, regardless of how it was phrased (using the latest nudging techniques from behavioural economists, an idea many security practitioners find very attractive). In this study, the organisation’s help desk staff was overwhelmed by calls from panicked employees – and when told it was a “training exercise”, many expressed frustration and resentment towards the security staff that had tricked them. Even if phishing prevention training could be made to work, because the activity of opening a malicious email is so close to what people do as part of their job, it would disrupt business by causing employees to delete legitimate emails or spend too long deciding whether to open them.

A strong, unambiguous, and reliable cue that distinguishes phishing emails from legitimate ones would help, but until we have secure end-to-end encrypted and authenticated email, this isn’t possible. We are left with the task of designing security systems that accept that some phishing emails will be opened, rather than pretending they won’t be and blaming breaches on employees who fail to meet an unachievable bar. If employees are consistently being told that their behaviour is not good enough but not being given realistic and actionable advice on how to do better, it creates learned helplessness, with all the negative psychological consequences.

Comply with industry “best-practice”

Something must be done to protect the company; mock-phishing is something, therefore it must be done. This perverse logic is the root cause of much poor security, where organisations think they must comply with so-called “best practice” – seldom more than out-of-date folk tradition – or face penalties when there is a breach. It’s for this reason that bad security guidance persists long after it has been shown to be ineffective, such as password complexity rules.

Compliance culture, where rules are blindly followed without there being evidence of effectiveness, is one of the worst reasons to adopt a security practice. We need more research on how to develop technology that is secure and that supports an organisation’s overall goals. We know that mock-phishing is not effective, but what’s the right combination of security advice and technology that will give adequate protection, and how do we adapt these to the unique situation of each company?

What to do instead?

The security industry should follow the lead of the aerospace industry and recognise that “blame and train” isn’t an effective or acceptable strategy. The attraction of mock phishing exercises to security staff is that they can say they are “doing something”, and that they like the idea of being able to measure behaviour change as a result of it – even though research points the other way. If vendors claim they have examples of mock phishing training reducing clicks on links, it is usually because employees have been trained to recognise only the vendor’s mock phishing emails or are frightened into not clicking on any links – and nobody measures the losses that occur because emails from actual or potential customers or suppliers are not answered. “If security doesn’t work for people, it doesn’t work.”

When the CIO of a merchant bank found that mock phishing caused much anger and resentment from highly paid traders, but no reduction in clicking on links, he started to listen to what it looked like from their side. “Your security specialists can’t tell if it is a phishing email or not – why are you expecting me to be able to do that?” After seeing the problem from their perspective, he added a button to the corporate mail client labelled “I’m not sure” instead, and asked staff to use the button to forward emails they were not sure about to the security department. The security department then let the employee know whether the email was malicious, and also listed all identified malicious emails on a website that employees could check before forwarding emails. Clicking on phishing links dropped to virtually zero, and staff started talking to each other about phishing emails they had seen, and what the attacker was trying to do.

Security should deal with the problems that actually face the company; preventing phishing wouldn’t have stopped recent ransomware attacks. Assuming phishing is a concern, then where it is possible to do so with adequate accuracy, phishing emails should be blocked. Some will get through, but with well engineered and promptly patched systems, harm can be limited. Phishing-resistant authentication credentials, such as FIDO U2F, mean that stolen passwords are worthless. Common processes should be designed so that the easy option is the secure one, giving people time to think carefully about whether a request for an exception is legitimate. Finally, if malware does get onto company computers, compartmentalisation will limit damage, effective monitoring facilitates detection, and good backups allow rapid recovery.

 

An earlier version of this article was previously published by the New Statesman.

The end of the billion-user Password:Impossible

XKCD: “Password Strength”

This week, the Wall Street Journal published an article by Robert McMillan containing an apology from Bill Burr, a man whose name is unknown to most but whose work has caused daily frustration and wasted time for probably hundreds of millions of people for nearly 15 years. Burr is the author of the 2003 Special Publication 800-63 Appendix A from the US National Institute of Standards and Technology: eight pages that advised security administrators to require complex passwords including special characters, capital letters, and numbers, and to dictate that they should be frequently changed.

“Much of what I did I now regret,” Burr told the Journal. In June, when NIST issued a completely rewritten document, it largely followed the same lines as the NCSC’s password guidance, published in 2015 and based on prior research and collaboration with the UK Research Institute in Science of Cyber Security (RISCS), led from UCL by Professor Angela Sasse. Yet even in 2003 there was evidence that Burr’s approach was the wrong one: in 1999, Sasse did the first work pointing out the user-unfriendliness of standard password policies in the paper Users Are Not the Enemy, written with Anne Adams.

How much did that error cost in lost productivity and user frustration? Why did it take the security industry and research community 15 years to listen to users and admit that the password policies they were pushing were not only wrong but actively harmful, inflicting pain on millions of users and costing organisations huge sums in lost productivity and administration? How many other badly designed security measures are still out there, the cyber equivalent of traffic congestion and causing the same scale of damage?

For decades, every password breach has led to the same response, which Einstein would readily have recognised as insanity: ridiculing users for using weak passwords, creating policies that were even more difficult to follow, and calling users “stupid” for devising coping strategies to manage the burden. As Sasse, Brostoff, and Weirich wrote in 2001 in their paper Transforming the ‘Weakest Link’, “…simply blaming users will not lead to more effective security systems”. In his 2009 paper So Long, and No Thanks for the Externalities, Cormac Herley (Microsoft Research) pointed out that it’s often quite rational for users to reject security advice that ignores the indirect costs of the effort required to implement it: “It makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain,” he wrote.

When GCHQ introduced the new password guidance, NCSC head Ciaran Martin noted the cognitive impossibility of following older policies, which he compared to trying to memorise a new 600-digit number every month. Part of the basis for Martin’s comments is found in more of Herley’s research. In Password Portfolios and the Finite-Effort User, Herley, Dinei Florencio, and Paul C. van Oorschot found that the cognitive load of managing 100 passwords while following the standard advice to use a unique random string for every password is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards – a cognitive impossibility. “No one does this”, Herley said in presenting his research at a RISCS meeting in 2014.
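A back-of-envelope version of that comparison, assuming 100 unique passwords of eight random alphanumeric characters each (my assumptions, not the paper’s exact model), shows that the quantities of information involved are indeed of the same order:

```python
import math

bits_per_password = 8 * math.log2(62)                     # ~47.6 bits for 8 random chars
all_passwords     = 100 * bits_per_password               # ~4,800 bits to memorise
digits_of_pi      = 1361 * math.log2(10)                  # ~4,500 bits
packs_of_cards    = 17 * math.log2(math.factorial(52))    # ~3,800 bits

print(round(all_passwords), round(digits_of_pi), round(packs_of_cards))
```

However the accounting is done, the total is thousands of bits of arbitrary material to hold in memory.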

The first of the three questions we started with may be the easiest to answer. Sasse’s research has found that in numerous organisations each staff member may spend as much as 30 minutes a day on entering, creating, and recovering passwords, all of it lost productivity. The US company Imprivata claims its system can save clinicians up to 45 minutes per day just in authentication; in that use case, the wasted time represents not just lost profit but potentially lost lives.

Add the cost of disruption. In a 2014 NIST diary study, Sasse, with Michelle Steves, Dana Chisnell, Kat Krol, Mary Theofanos, and Hannah Wald, found that up to 40% of the time leading up to the “friction point” – that is, the interruption for authentication – is spent redoing the primary task before users can find their place and resume work. The study’s participants recorded on average 23 authentication events over the 24-hour period covered by the study, and in interviews they indicated their frustration with the number, frequency, and cognitive load of these tasks, which the study’s authors dubbed “authentication fatigue”. Dana Chisnell has summarised this study in a video clip.

The NIST study identified a more subtle, hidden opportunity cost of this disruption: staff reorganise their primary tasks to minimise exposure to authentication, typically by batching the tasks that require it. This is a similar strategy to deciding to confine dealing with phone calls to certain times of day, and it has similar consequences. While it optimises that particular staff member’s time, it delays any dependent business process that is designed in the expectation of a continuous flow from primary tasks. Batching delays result not only in extra costs, but may lose customers, since slow responses may cause them to go elsewhere. In addition, staff reported not pursuing ideas for improvement or innovation because they couldn’t face the necessary discussions with security staff.

Unworkable security induces staff to circumvent it and make errors – which in turn lead to breaches, which have their own financial and reputational costs. Less obvious is the cost of lost staff goodwill for organisations that rely on free overtime – such as US government departments and agencies. The NIST study showed that this goodwill is dropping: staff log in less frequently from home, and some had even returned their agency-approved laptops and were refusing to log in from home or while travelling.

It could all have been so different as the web grew up over the last 20 years or so, because the problems and costs of password policies are not new or newly discovered. Sasse’s original 1999 research study was not requested by security administrators but by BT’s accountants, who balked when the help desk costs of password problems were tripling every year with no end in sight. Yet security people have continued to insist that users must adapt to their requirements instead of the other way around, even when the basis for their ideas is shown to be long out of date. For example, in a 2006 blog posting Purdue University professor Gene Spafford explained that the “best practice” (which he calls “infosec folk wisdom”) of regular password changes came from non-networked military mainframes in the 1970s – a far cry from today’s conditions.

Herley lists numerous other security technologies that are as much of a plague as old-style password practices: certificate error warnings, all of which are false positives; security warnings generally; and ambiguous and non-actionable advice, such as advising users not to click on “suspicious” links or attachments or “never” reusing passwords across accounts.

All of these are either not actionable, or just too difficult to put into practice, and the struggle to eliminate them has yet to bear fruit. Must this same story continue for another 20 years?

 

This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

Caveat emptor: Privacy could turn UK’s genomic dream into a nightmare

Raise your hand if, over the past couple of years, you have not heard of whole genome sequencing (usually abbreviated as WGS), or at least read a sensational headline or two about how fast its costs are dropping. In a nutshell, WGS is used to determine an organism’s complete DNA sequence. But it is actually not the only way to analyze our DNA — in fact, genetic testing has been used in clinical settings for decades, e.g., to diagnose patients with known genetic conditions. Seven-time Wimbledon champion Pete Sampras is a beta-thalassemia carrier – a condition that affects the formation of beta-globin chains, ultimately leading to red blood cells not being formed correctly. Testing for thalassemia, usually triggered by family history or a blood test showing low mean corpuscular volume, is done with a number of simple in-vitro techniques.

The availability of affordable whole genome sequencing not only prompts new hopes toward the discovery and diagnosis of rare/unknown genetic conditions, but also enables researchers to better understand the relationship between the genome and predisposition to diseases, response to treatment, etc. Overall, progress makes it increasingly feasible to envision a not-so-distant future where individuals will undergo sequencing once, making their digitized genome easily available for doctors, clinicians, and third-parties. This would also allow us to use computational algorithms to analyze the genome as a whole, as opposed to expensive, slower, targeted in-vitro tests.

Along these lines is last week’s announcement by Prof. Dame Sally Davies, the UK’s Chief Medical Officer, calling on the NHS to deliver her “genomic dream” within five years, with whole genome sequencing becoming “as standard as blood tests and biopsies.” As detailed in her annual report, a large number of patients in the UK already undergo genetic testing at least once in their life, and for a wide range of reasons, including the aforementioned thalassemia diagnosis, screening for cancer predisposition triggered by high family incidence, or determining the best course of action in cancer treatment. So wouldn’t it make sense to sequence the genome once and keep the data available for life? My answer is yes, but with a number of bold and double-underlined caveats.

The first one is with respect to the security concerns prompted by the need to store data of extreme sensitivity like genomic data. The genome obviously contains information about ethnic heritage and predisposition to diseases/conditions, possibly including mental disorders. Data breaches of sensitive information, including health and medical data, sadly happen on a daily basis. But certain security threats are actually specific to genomic data and much more worrisome. For instance, due to its hereditary nature, access to a genome essentially implies access to that of close relatives as well, including offspring, so one’s decision to publish/donate their genome is also being made for their siblings, kids/grandkids, etc. Moreover, sensitivity does not degrade over time, but persists long after a patient’s death. In fact, it might even increase, as new aspects of the genome are studied and discovered. As a consequence, Prof. Dame Davies’ dream could easily turn into a nightmare without adequate investments toward sound security measures that involve both technical tools (such as upgrading obsolete hardware) and education, awareness, and practices that do not simply shift burden onto clinicians and practitioners, but incorporate security into their design rather than as an afterthought.

Another concern relates to allowing researchers to use the genomic data collected by the NHS, along with medical history, for research purposes – e.g., to discover genetic mutations that are responsible for certain traits or diseases. This requires building a meaningful trust relationship between the NHS/Government and patients, which cannot happen without healing the wounds from recent incidents like the care.data debacle or Google DeepMind’s use of personal NHS records. Instead, the annual report seems to include security/anonymity promises we cannot realistically maintain, while, worse yet, promoting a rhetoric of greater good trumping privacy concerns, as well as seemingly pushing a choice between donating data and access to the best care. It is misleading to present “de-identification” of genomic data as an effective protection tool when proper anonymization is inherently impossible due to its peculiar combination of unique and hereditary features, as demonstrated by a wide array of scientific results. Rather, we should make it clear that data can never be fully anonymized, or protected with 100% guarantees.

Overall, I believe that patients should not be automatically enrolled in sequencing programs. Even if they are given an option to later withdraw, once the data is out there it is impossible to delete all copies of it. Rather, patients should voluntarily decide to join through an effective informed consent mechanism. This proves to be challenging against a background in which information that can be extracted/inferred from genomes may rapidly change: what if in the future a new mutation responsible for early-onset Alzheimer’s is discovered? What if the NHS is privatized? Encouraging results with respect to education and informed consent, however, do exist. For instance, the Personal Genome Project is a good example of effective strategies to help volunteers understand the risks and could be used to inform future NHS-run sequencing programs.

 

An edited version of this article was originally published on the BMJ.

Preventing phishing won’t stop ransomware spreading

Ransomware is in the news again, with Reckitt Benckiser reporting that disruption caused by the NotPetya ransomware could have cost them up to £100 million. In response to this news, just as with every previous ransomware incident, the security industry started giving out advice – almost universally emphasising the importance of not opening phishing emails.

The problem is that this advice won’t work. Putting aside the fact that such advice is often so vague as to be impossible to put into action, the cause of recent ransomware outbreaks is not people opening phishing emails:

  • WannaCry, which notably caused severe disruption to the NHS, spread by automated scanning of computers vulnerable to an NSA-developed exploit. Although the starting point was initially assumed to be a phishing email, this was later debunked – only network scanning was used.
  • The Mole Ransomware attack that hit many organisations, including UCL, was initially thought to be spread by employees clicking on links in phishing emails. Subsequent analysis found this was incorrect and most likely the malware spread through malicious advertisements on legitimate websites.
  • NotPetya was initially thought to have been spread through Russian or Ukrainian phishing emails (explaining why that part of the world was so badly affected). It turned out not to have involved phishing at all, but the outbreak started through a tampered software update to the MEDoc tax accounting software mandated by the Ukrainian government. Once inside an organisation, NotPetya then spread using the same exploit as WannaCry or by compromising administrative credentials.

Here are three major incidents, making international news, and the standard advice to “be vigilant” when opening emails or clicking links would have been useless. Is it any surprise that security advice gets ignored?

Not only is common anti-phishing advice unhelpful but it shifts blame to individuals (who are not in a position to prevent or mitigate most attacks) away from the IT industry and staff (who are). It also misleads management into thinking that they can “blame-and-train” their employees rather than investing in well engineered preventative security mechanisms and IT systems that can recover from compromise.

And there are things that can be done which have been shown to be effective, not just against the current outbreaks but many in the past and likely future. WannaCry would have been prevented by applying software updates, but the NotPetya outbreak was caused by a software update. The industry needs to act promptly to ensure that software updates are safe and reliable before customers become even more wary about installing them.

The spread of WannaCry and NotPetya within companies could have been prevented or slowed through better operational practices such as segmenting networks and limiting the use of administrative privilege. We’ve known this approach to be effective, but better tools and practices are needed to avoid enhanced security mechanisms being a drag on an organisation’s productivity.

Mole could have been prevented by ad-blocking browser extensions. The advertising industry is in open war against ad-blocking because it harms their income stream, but while they keep on spreading malware through their networks I have limited sympathy.

Well maintained and protected backups are essential to allow recovery, whether from ransomware, purely destructive attacks, or hardware failure. The security techniques above are effective, but these measures will not prevent every attack so mechanisms are needed to efficiently deal with the aftermath.

Most importantly we need to move away from security being a set of traditions passed from generation to generation with little or no reason to believe they are effective (so called “best practice”) to well engineered systems following rigorous, evidence-based guidance on state of the art cybersecurity principles, standards and practices.