The end of the billion-user Password:Impossible

XKCD: “Password Strength”

This week, the Wall Street Journal published an article by Robert McMillan containing an apology from Bill Burr, a man whose name is unknown to most but whose work has caused daily frustration and wasted time for probably hundreds of millions of people for nearly 15 years. Burr is the author of the 2003 Special Publication 800-63 Appendix A from the US National Institute of Standards and Technology (NIST): eight pages that advised security administrators to require complex passwords including special characters, capital letters, and numbers, and dictated that passwords should be changed frequently.

“Much of what I did I now regret,” Burr told the Journal. In June, when NIST issued a completely rewritten document, it largely followed the same lines as the UK National Cyber Security Centre’s (NCSC) password guidance, published in 2015 and based on prior research and collaboration with the UK Research Institute in Science of Cyber Security (RISCS), led from UCL by Professor Angela Sasse. Yet even in 2003 there was evidence that Burr’s approach was the wrong one: in 1999, Sasse had done the first work pointing out the user-unfriendliness of standard password policies in the paper Users Are Not the Enemy, written with Anne Adams.

How much did that error cost in lost productivity and user frustration? Why did it take the security industry and research community 15 years to listen to users and admit that the password policies they were pushing were not only wrong but actively harmful, inflicting pain on millions of users and costing organisations huge sums in lost productivity and administration? How many other badly designed security measures are still out there – the cyber equivalent of traffic congestion, causing damage on the same scale?

For decades, every password breach has led to the same response, which Einstein would readily have recognised as insanity: ridiculing users for using weak passwords, creating policies that were even more difficult to follow, and calling users “stupid” for devising coping strategies to manage the burden. As Sasse, Brostoff, and Weirich wrote in 2001 in their paper Transforming the ‘Weakest Link’, “…simply blaming users will not lead to more effective security systems”. In his 2009 paper So Long, and No Thanks for the Externalities, Cormac Herley (Microsoft Research) pointed out that it’s often quite rational for users to reject security advice that ignores the indirect costs of the effort required to implement it: “It makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain,” he wrote.

When GCHQ introduced the new password guidance, NCSC head Ciaran Martin noted the cognitive impossibility of following older policies, which he compared to trying to memorise a new 600-digit number every month. Part of the basis for Martin’s comments is found in more of Herley’s research. In Password Portfolios and the Finite-Effort User, Herley, Dinei Florencio, and Paul C. van Oorschot found that the cognitive load of managing 100 passwords while following the standard advice to use a unique random string for every password is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards – a cognitive impossibility. “No one does this”, Herley said in presenting his research at a RISCS meeting in 2014.
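
To get a feel for the scale Herley describes, the information content of such a password portfolio can be compared with digits of pi and shuffled decks of cards. Here is a back-of-the-envelope sketch; the 45-bits-per-password figure is our illustrative assumption, not a number from the paper, whose own memorisation model produces the figures quoted above:

```python
import math

# Rough comparison of memorisation loads, measured in bits.
# Assumption (illustrative): each of 100 unique random passwords
# carries about 45 bits of entropy, roughly a 10-character random
# lowercase string.
portfolio_bits = 100 * 45

pi_digit_bits = math.log2(10)               # one decimal digit ~ 3.32 bits
deck_bits = math.log2(math.factorial(52))   # one shuffled deck ~ 225.6 bits

print(f"100 passwords ~ {portfolio_bits} bits")
print(f"equivalent to ~ {portfolio_bits / pi_digit_bits:,.0f} digits of pi")
print(f"equivalent to ~ {portfolio_bits / deck_bits:.1f} shuffled decks of cards")
```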

The first of the three questions we started with may be the easiest to answer. Sasse’s research has found that in numerous organisations each staff member may spend as much as 30 minutes a day on entering, creating, and recovering passwords, all of it lost productivity. The US company Imprivata claims its system can save clinicians up to 45 minutes per day just in authentication; in that use case, the wasted time represents not just lost profit but potentially lost lives.
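
To see what such figures mean in money terms, here is a minimal cost sketch; the headcount, wage, and number of working days are illustrative assumptions, not figures from the research:

```python
# Back-of-the-envelope annual cost of password handling, using the
# 30-minutes-per-day figure above. All other numbers are assumptions.
staff = 1_000                 # assumed headcount
minutes_per_day = 30          # from Sasse's research, as quoted above
working_days = 220            # assumed working days per year
hourly_cost = 25.0            # assumed fully-loaded staff cost, GBP/hour

hours_lost = staff * (minutes_per_day / 60) * working_days
print(f"{hours_lost:,.0f} hours/year, about GBP {hours_lost * hourly_cost:,.0f}")
```

Under these assumptions, a 1,000-person organisation loses 110,000 hours, or roughly £2.75 million a year, before counting help desk and reset costs.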

Add the cost of disruption. In a 2014 NIST diary study, Sasse, with Michelle Steves, Dana Chisnell, Kat Krol, Mary Theofanos, and Hannah Wald, found that up to 40% of the time leading up to the “friction point” – that is, the interruption for authentication – is spent redoing the primary task before users can find their place and resume work. The study’s participants recorded on average 23 authentication events over the 24-hour period covered by the study, and in interviews they indicated their frustration with the number, frequency, and cognitive load of these tasks, which the study’s authors dubbed “authentication fatigue”. Dana Chisnell has summarised this study in a video clip.

The NIST study identified a more subtle, hidden opportunity cost of this disruption: staff reorganise their primary tasks to minimise exposure to authentication, typically by batching the tasks that require it. This is a similar strategy to deciding to confine dealing with phone calls to certain times of day, and it has similar consequences. While it optimises that particular staff member’s time, it delays any dependent business process that is designed in the expectation of a continuous flow from primary tasks. Batching delays result not only in extra costs, but may lose customers, since slow responses may cause them to go elsewhere. In addition, staff reported not pursuing ideas for improvement or innovation because they couldn’t face the necessary discussions with security staff.

Unworkable security induces staff to circumvent it and make errors – which in turn lead to breaches, which have their own financial and reputational costs. Less obvious is the cost of lost staff goodwill for organisations that rely on free overtime – such as US government departments and agencies. The NIST study showed that this goodwill is dropping: staff log in less frequently from home, and some had even returned their agency-approved laptops and were refusing to log in from home or while travelling.

It could all have been so different as the web grew up over the last 20 years or so, because the problems and costs of password policies are not new or newly discovered. Sasse’s original 1999 research study was not requested by security administrators but by BT’s accountants, who balked when the help desk costs of password problems were tripling every year with no end in sight. Yet security people have continued to insist that users must adapt to their requirements instead of the other way around, even when the basis for their ideas is shown to be long out of date. For example, in a 2006 blog posting Purdue University professor Gene Spafford explained that the “best practice” (which he calls “infosec folk wisdom”) of regular password changes came from non-networked military mainframes in the 1970s – a far cry from today’s conditions.

Herley lists numerous other security technologies that are as much of a plague as old-style password practices: certificate error warnings, all of which are false positives; security warnings generally; and ambiguous, non-actionable advice, such as telling users not to click on “suspicious” links or attachments, or “never” to reuse passwords across accounts.

All of these are either not actionable, or just too difficult to put into practice, and the struggle to eliminate them has yet to bear fruit. Must this same story continue for another 20 years?


This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

Top ten obstacles along distributed ledgers’ path to adoption

In January 2009, Bitcoin was released into the world by its pseudonymous founder, Satoshi Nakamoto. In the ensuing years, this cryptocurrency and its underlying technology, called the blockchain, have gone on a rollercoaster ride that few could have predicted at the time of its deployment. It’s been praised by governments around the world, and people have predicted that “the blockchain” will one day be like “the Internet.” It’s been banned by governments around the world, and people have declared it “adrift” and “dead.”

After years in which discussions focused entirely on Bitcoin, people began to realize the more abstract potential of the blockchain, and “next-generation” platforms such as Ethereum, Steem, and Zcash were launched. More established companies also realized the value in the more abstract properties of the blockchain — resilience, integrity, etc. — and repurposed it for their particular industries to create an even wider class of technologies called distributed ledgers, and to form industrial consortia such as R3 and Hyperledger. These more general distributed ledgers can look, to varying degrees, quite unlike blockchains, and have a somewhat clearer (or at least different) path to adoption given their association with established partners in industry.

Amidst many unknowns, what is increasingly clear is that, even if they might not end up quite like “the Internet,” distributed ledgers — in one form or another — are here to stay. Nevertheless, a long path remains from where we are now to widespread adoption and there are many important decisions to be made that will affect the security and usability of any final product. In what follows, we present the top ten obstacles along this path, and highlight in some cases both the problem and what we as a community can do (and have been doing) to address them. By necessity, many interesting aspects of distributed ledgers, both in terms of problems and solutions, have been omitted, and the focus is largely technical in nature.

10. Usability: why use distributed ledgers?

The problem, in short. What do end users actually want from distributed ledgers, if anything? In other words, distributed ledgers are being discussed as the solution to problems in many industries, but what is it that the full public verifiability (or accountability, immutability, etc.) of distributed ledgers really maps to in terms of what end users want?

9. Governance: who makes the rules?

The problem, in short. The beauty of distributed ledgers is that no one entity gets to control the decisions made by the network; in Bitcoin, e.g., coins are generated or transferred from one party to another only if a majority of the peers in the network agree on the validity of this action. While this process becomes threatened if any one peer becomes too powerful, there is a larger question looming over the operation of these decentralized networks: who gets to decide which actions are valid in the first place? The truth is that all these networks operate according to a defined set of rules, and that “who makes the rules matters at least as much as who enforces them.”
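
For concreteness, here is a toy sketch of the enforcement half of that bargain, in the style of Bitcoin’s proof-of-work: any peer can cheaply verify that a proposed block satisfies the rules, whatever those rules are. The encoding and difficulty are simplified assumptions; real Bitcoin hashes an 80-byte block header at a far higher difficulty.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce whose double-SHA-256 hash falls below the target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(
            hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        ).digest()
        if int.from_bytes(h, "big") < target:
            return nonce  # any peer can re-check validity with one hash
        nonce += 1

print("valid nonce:", mine(b"alice pays bob 1 coin"))
```

Note what the sketch leaves out: nothing in the code says who chose the difficulty, the encoding, or the validity rules themselves, which is exactly the governance question.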

In this process of making the rules, even the most decentralized networks turn out to be heavily centralized, as recent issues in cryptocurrency governance demonstrate. These increasingly common collapses threaten to harm the value of these cryptocurrencies, and reveal the issues associated with ad-hoc forms of governance. Thus, the problem is not just that we don’t know how to govern these technologies, but that — somewhat ironically — we need more transparency around how these structures operate and who is responsible for which aspects of governance.

8. Meaningful comparisons: which is better?

The problem. Bitcoin was the first cryptocurrency to be based on the architecture we now refer to as the blockchain, but it certainly isn’t the last; there are now thousands of alternative cryptocurrencies out there, each with its own unique selling point. Ethereum offers a more expressive scripting language and maintains state, Litecoin allows for faster block creation than Bitcoin, and each new ICO (Initial Coin Offering) promises a shiny feature of its own. Looking beyond blockchains, there are numerous proposals for cryptocurrencies based on consensus protocols other than proof-of-work and proposals in non-currency-related settings, such as Certificate Transparency, R3 Corda, and Hyperledger Fabric, that still fit under the broad umbrella of distributed ledgers.


Can we make people value IT security?

Angela Sasse was invited to give the sixth annual Wheeler Lecture at the University of Cambridge Computer Laboratory. The video of her talk is below, and the slides are also available. A summary of the talk appears on the blog of the Research Institute in Science of Cyber Security (RISCS).

In many organisations today, IT security is a battleground: to manage the risks the organisation faces, security specialists devise policies and deploy security mechanisms that they expect staff and customers to comply with. But most of the time, staff and customers don’t comply, and attempts to change that by “raising awareness” and “educating” them generally fail. The talk uses the examples of security warnings, access control, and sandboxing to explain the different perspectives and values that security specialists and ‘the rest of us’ apply to security, and concludes that a value-centred design approach is the only way to develop security solutions people want to use.

Find Security Champions in Blends of Organisational Culture

I was at the EuroUSEC ’17 workshop in Paris at the end of April. Our own Angela Sasse was also there to deliver the keynote talk, and Ruba Abu-Salma presented our paper “The Security Blanket of the Chat World: An Analytic Evaluation and a User Study of Telegram” (which was based on research by undergraduate students studying UCL’s COMP3096 “Research Group Project” module). I presented secondary analysis, conducted with Ingolf Becker and Angela Sasse, of a survey deployed at a large partner organisation. This analysis builds on research we presented at the Symposium on Usable Privacy and Security (SOUPS) in 2016. Based on survey responses and voluntary free-text comments, we saw potential for employees to inform policy from the ‘ground up’, in contradiction to the current trend for identifying security champions as local representatives of pre-determined policy.

Top-down security policies

Organisational policies are intended to promote a unified approach to security, one that all the organisation’s employees are expected to follow. If security procedures and mechanisms are unusable, policies risk being seen as impossible to follow, or may be sidelined if they lack clear relevance to business goals. This can result in deliberate or unwitting non-compliance, and workarounds to prescribed procedures.

Organisations may appoint security champions as local representatives who promote policy in their part of the organisation. However, these security champions can be effective only if policy is workable. Encouraging ‘top down’ policy compliance assumes that policy is correct, complete, and appropriate. It also assumes that policy applies to everyone equally and that employees have no role to play in shaping effective policy. Our analysis explores the potential for employees to inform effective policies, in particular whether it was possible to (i) identify local pockets of security expertise, and (ii) target engagement with employees that involves them in the creation of workable security solutions.

Identifying security champions ‘from the ground up’

Level | Attitude | Approach
------|----------|---------
1 | Uninfluenced | Security behaviour is driven by personal knowledge.
2 | Technically Controlled | Technical controls enforce compliance with policy.
3 | Ad-hoc Knowledge and Application | Shallow understanding of policy; knowledge absorbed from the surrounding work environment.
4 | Policy Compliant | Comprehensive knowledge and understanding of policy; willing policy compliance; a role model for the organisation’s security culture.
5 | Active Approach to Security | Actively promotes and advances security culture; carries the intent of policy into work activities; leverages well-understood values that support both security and business.

Employee security Attitude-Levels. We studied an organisation with IT systems, so there were no participants at Level 1.

A scenario-based survey was deployed in the partner company. Scenarios were based upon in-depth interviews with employees that explored security behaviours in the workplace. Each scenario involved a dilemma, where fixed options described different responses and included an element of non-compliance or an implicit cost. Participant choices indicate their Attitude Level (above) and Behaviour Type (below), which we recorded across groups of employees to characterise the security culture of the organisation as a whole and of four specific divisions. Both the interviews and the survey represent a cross-section of divisions, locations, and age groups. We collected 608 survey responses; crucially, the survey allowed participants to comment on the scenarios and the available options, and we also analysed the 267 additional free-text comments that were provided.

Behaviour-Type | Description
---------------|------------
Individualists | Rely on self for solutions
Egalitarians | Rely on social or group solutions
Hierarchists | Rely on existing systems or technologies
Fatalists | Take a ‘naive’ approach: their actions are not significant in creating outcomes

Behaviour-Types.


Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected in WannaCry ransomware attack (source Wikipedia, User:Roke)

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments, cannot take ten minutes to ponder each one before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that the City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. Users are thus simultaneously exhorted to be “vigilant” about unexpected attachments and required to open one in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre, offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other, instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing, as a society, to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users and enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently-connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.
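
The property that matters against ransomware is that backups are versioned rather than mirrored: an encrypted current copy must never overwrite older good copies. A minimal sketch of the idea follows; the paths are illustrative, and a real setup would also keep the backup medium offline or otherwise out of the ransomware’s reach:

```python
import shutil
import time
from pathlib import Path

def versioned_backup(src: Path, backup_root: Path) -> Path:
    """Copy src into a new timestamped snapshot instead of overwriting."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"{src.name}-{stamp}"
    shutil.copytree(src, dest)  # each run creates a fresh snapshot
    return dest

# Example (illustrative paths):
# versioned_backup(Path("~/Documents").expanduser(), Path("/mnt/backup"))
```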

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.


This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

Online security won’t improve until companies stop passing the buck to the customer

It’s normally in the final seconds of a TV or radio interview that security experts get asked for advice for the general public – something simple, unambiguous, and universally applicable. It’s a fair question, and what the public want. But simple answers are usually wrong, and can do more harm than good.

For example, take the UK government’s Cyber Aware scheme to educate the public in cybersecurity. It recommends individuals choose long and complex passwords made out of three words. The problem with this advice is that the resulting passwords are hard to remember, especially as people have many passwords and use some infrequently. Consequently, they will be tempted to use the same password on multiple websites.
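
The scheme itself is trivial to implement; the difficulty lies in what people can remember, not in generation. A minimal sketch follows, in which the wordlist is a placeholder: a real generator would draw from a large list such as the 7,776-word diceware list, which is our assumption rather than anything Cyber Aware specifies.

```python
import math
import secrets

# Placeholder wordlist; entropy depends entirely on the real list's size.
WORDS = ["apple", "river", "stone", "cloud", "tiger", "lamp", "ocean", "maple"]

def three_word_password(words: list[str]) -> str:
    """Concatenate three uniformly chosen words, Cyber Aware style."""
    return "".join(secrets.choice(words) for _ in range(3))

print(three_word_password(WORDS))
# Three words from a 7,776-word list give about 38.8 bits:
print(f"{3 * math.log2(7776):.1f} bits of entropy")
```

That is reasonable strength for a single account; the catch is remembering dozens of such passphrases.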

Password re-use is far more of a security problem than insufficiently complex passwords, so advice that doesn’t help people manage multiple passwords does more harm than good. Instead, I would recommend remembering your most important passwords (like banking and email) and storing the rest in a password manager. This approach isn’t perfect or suitable for everyone, but for most people it will improve their security.

Advice unfit for the real world

Cyber Aware also tells people not to write down their passwords, or let anyone else know them – banks require the same thing. But we know that people commonly share their banking credentials with family, for legitimate reasons. People also realise that writing down passwords is a pretty good approach if you’re only worried about internet hackers, rather than people who can get close to you to see the written notes. Security advice that doesn’t stand up to scrutiny or doesn’t fit with people’s lives will be ignored – and will discredit the organisation offering it.

Because everyone’s situation is different, good security advice should include helping people to understand what risks they should be worried about, and to take steps that mitigate these risks. This advice doesn’t have to be complicated. Teen Vogue published a tutorial on how to select and configure a secure messaging tool, which very sensibly explains that if you are more worried about invasions of privacy from people who can get their hands on your phone, you should make different choices than if you are just concerned about, for example, companies spying on you.

The Teen Vogue article was widely praised by security experts, in stark contrast to an article in The Guardian that made the eye-catching claim that encrypted messaging service WhatsApp is insecure, without making clear that this only applies in an obscure and extremely unlikely set of circumstances.

Zeynep Tufekci, a researcher studying the effects of technology on society, reported that the article was exploited to legitimise misleading advice given by the Turkish government that WhatsApp is unsafe, resulting in human rights activists using SMS instead – which is far easier for the government to censor and monitor.

The Turkish government’s “security advice” to move from WhatsApp to less secure SMS was clearly aimed more at assisting its surveillance efforts than at helping the activists to whom the advice was directed. Another case where the advice is more for the benefit of the organisation giving it is that of banks, where the small print of the terms and conditions gives incomprehensible security advice that isn’t true security advice, but merely a legal technique to give the banks wiggle room to refuse to refund victims of fraud.

Continue reading Online security won’t improve until companies stop passing the buck to the customer

Strong Customer Authentication in the Payment Services Directive 2

Within the European Union, banks have been regulated since 2007 by the Payment Services Directive. This directive sets out which types of institutions can offer payment services, and what rules they must follow. Importantly for customers, these rules include the circumstances in which a fraud victim is entitled to a refund. In 2015 the European Parliament adopted a substantial revision to the directive, the Payment Services Directive 2 (PSD2), which will soon be implemented by EU member states. One of the major changes in PSD2 is the requirement for banks to implement Strong Customer Authentication (SCA) for transactions, more commonly known as two-factor authentication: authentication codes based on two or more elements selected from something only the user knows, something only the user possesses, and something the user is. Moreover, the authentication codes must be linked to the recipient and amount of the transaction, which the customer must be made aware of.
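
To make the dynamic-linking requirement concrete, here is a minimal sketch of an authentication code bound to the payee and amount. The HMAC construction, field encoding, and truncation are our illustrative assumptions; the directive and its technical standards do not prescribe a particular algorithm.

```python
import hashlib
import hmac
import struct

def transaction_auth_code(key: bytes, payee_iban: str,
                          amount_cents: int, counter: int) -> str:
    """Code bound to recipient and amount, so tampering changes the code."""
    msg = payee_iban.encode() + struct.pack(">QQ", amount_cents, counter)
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    # HOTP-style dynamic truncation to 8 digits for customer comparison.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 10**8:08d}"

key = bytes(32)  # demo key only
print(transaction_auth_code(key, "GB29NWBK60161331926819", 12_500, counter=1))
```

Because the payee and amount are inputs to the code, a man-in-the-browser that alters the transaction invalidates the code the customer approved.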

PSD2 does not detail the requirements of Strong Customer Authentication, nor the permitted exemptions to this rule. Instead, these decisions are to be made by the European Banking Authority (EBA) through Regulatory Technical Standards (RTS). As part of the development of these technical standards the EBA opened an initial discussion, to which we submitted a response based on our research on the security usability of banking authentication. Based on the discussion, the EBA produced a consultation paper incorporating a set of draft technical standards. In our response to this consultation paper, included below, we detailed how research both on security usability and on banking authentication more broadly should guide the assessment of Strong Customer Authentication. Specifically, we point out that there is an incorrect assumption of an inherent trade-off between security and usability, that for a system to be secure it must be usable, and that evaluation of Strong Customer Authentication systems should be independent, transparent, and follow principles developed from the latest research.

False trade-off between security and usability

In the reasoning presented in the consultation paper there is an assumption that a trade-off must be made between security and usability, e.g. paragraph 6 “Finally, the objective of ensuring a high degree of security and safety would suggest that the [European Banking Authority’s] Technical Standards should be onerous in terms of authentication, whereas the objective of user-friendliness would suggest that the [Regulatory Technical Standards] should rather promote the competing aim of customer convenience, such as one-click payments.”

This security/usability trade-off is not inherent to Strong Customer Authentication (SCA), and in fact the opposite is more commonly true: in order for SCA to be secure it must also be usable “because if the security is usable, users will do the security tasks, rather than ignore or circumvent them”. Also, SCA that is usable will make it more likely that customers will detect fraud because they will not have to expend their limited attention on just performing the actions required to make the SCA work. A small subset (10–15%) of participants in some studies reasoned that the fact that a security mechanism required a lot of effort from them meant it was secure. But that is a misconception that must not be used as an excuse for effortful authentication procedures.


Steven Murdoch – Privacy and Financial Security

Probably not too many academic researchers can say this: some of Steven Murdoch’s research leads have arrived in unmarked envelopes. Murdoch, who has moved to UCL from the University of Cambridge, works primarily in the areas of privacy and financial security, including a rare specialty you might call “crypto for the masses”. It’s the financial security aspect that produces the plain, brown envelopes and also what may be his most satisfying work, “Trying to help individuals when they’re having trouble with huge organisations”.

Murdoch’s work has a twist: “Usability is a security requirement,” he says. As a result, besides writing research papers and appearing as an expert witness, his past includes a successful start-up. Cronto, which developed a usable authentication device, was acquired by VASCO, a market leader in authentication, and is now used by banks such as Commerzbank and Rabobank.

Developing the Cronto product was, he says, an iterative process that relied on real-world testing: “In research into privacy, if you build an unusable system two things will go wrong,” he says. “One, people won’t use it, so there’s a smaller crowd to hide in.” This issue affects anonymising technologies such as Mixmaster and Mixminion. “In theory they have better security than Tor, but no one is using them.” And two, he says, “People make mistakes.” A non-expert user of PGP, for example, can’t always accurately identify which parts of a message are signed and which aren’t.

The start-up experience taught Murdoch how difficult it is to get an idea from research prototype to product, not least because what works in a small case study may not when deployed at scale. “Selling privacy remains difficult,” he says, noting that Cronto had an easier time than some of its forerunners since the business model called for sales to large institutions. The biggest challenge, he says, was not consumer acceptance but making a convincing case that the predicted threats would materialise and that a small company could deliver an acceptable solution.


Moving towards security and privacy experiments for the real world

Jono and I recently presented our joint paper with Simon and Angela at the Learning from Authoritative Security Experiment Results (LASER) Workshop in San Jose, CA, USA. The workshop was co-located with the IEEE Security and Privacy Symposium. LASER has a different focus each year; in 2016, presented papers explored new approaches to computer security experiments that are repeatable and can be shared across communities.

Through our LASER paper, “Towards robust experimental design for user studies in security and privacy”, we wanted to advance the quest for better experiment design and execution. We proposed the following five principles for conducting robust experiments into usable security and privacy:

  1. Give participants a primary task
  2. Ensure participants experience realistic risk
  3. Avoid priming the participants
  4. Perform experiments double-blind whenever possible
  5. Define these elements precisely: threat model, security, privacy, and usability

Understanding users and their interaction with security is a blind spot for many security practitioners and designers. Learning from prior studies within and outside our research group, we have defined principles for conducting robust experiments into usable security and privacy. These principles are informed by efforts in other fields such as biology, qualitative research methods, and medicine, where four overarching experiment-design factors guided our principles:

Internal validity – The experiment is of “suitable scope to achieve the reported results” and is not “susceptible to systematic error”.

External validity – The result of the experiment “is not solely an artifact of the laboratory setting”.

Containment – There are no “confounds” in the results, and no experimental “effects are a threat to safety” of the participants, the environment, or society generally.

Transparency – “There are no explanatory gaps in the experimental mechanism” and the explanatory “diagram for the experimental mechanism is complete”, in that it covers all relevant entities and activities.


User-centred security awareness empowers employees to be the strongest defense

The release of our business whitepaper “Awareness is only the first step” was recently announced by Hewlett Packard Enterprise (HPE). The whitepaper is co-authored by HPE, UCL, and the UK government’s National Technical Authority for Information Assurance (CESG). The whitepaper emphasises how a user-centred approach to security awareness can empower employees to be the strongest link in defending their organisation. As Andrzej Kawalec, HPE’s Security Services CTO, notes in the press release:

“Users remain the first line of defense when faced with a dynamic and relentless threat environment.”

Security communication, education, and training (CET) in organisations is intended to align employee behaviour with the security goals of the organisation. Security managers conduct regular security awareness activities – familiar vehicles for awareness programmes, such as computer-based training (CBT), can cover topics such as password use, social media practices, and phishing. However, there is limited evidence for the effectiveness or efficiency of CBT, and a lack of reliable indicators means that it is not clear whether recommended security behaviour is followed in practice. If the design and delivery of CET programmes do not consider the individual, organisations cannot be certain of achieving the intended outcomes. As Angela Sasse comments:

“Many companies think that setting up web-based training packages is a cost-effective way of influencing staff behavior and achieving compliance, but research has provided clear evidence that this is not effective – rather, many staff resent it and suffer from ‘compliance fatigue.’”

HPE awareness maturity curve

The whitepaper describes a path to guide the involvement of employees in their own security, as shown in the HPE awareness maturity curve above. To change security behaviours, a company needs to invest in the security knowledge and skills of its employees, and respond to employee needs differently at each stage.
