By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Here I describe analysis by myself and colleagues Albesë Demjaha and David Pym at UCL, which originally appeared at the STAST workshop in late 2019 (where it was awarded best paper). The work was the basis for a talk I gave at Cambridge Computer Laboratory earlier this week (I thank Alice Hutchings and the Security Group for hosting the talk, as it was also an opportunity to consider this work alongside themes raised in our recent eCrime 2019 paper).

Secure behaviour in organisations

Both research and practice have shown that security behaviours, as encapsulated in policy and advocated within organisations, may not be adopted by employees. Employees may not see how advice applies to them, may find it difficult to follow, or may regard the expectations as unrealistic. As a consequence, they may create their own alternative behaviours in an effort to approximate secure working (rather than abandoning security altogether). Organisational support can then be critical to whether secure practices persist. Economics principles can be applied to explain why complex systems such as these behave the way they do, and so here we focus on informing an overarching goal to:

Provide better support for ‘good enough’ security-related decisions, by individuals within an organisation, that best approximate secure behaviours under constraints, such as limited time or knowledge.

Traditional economics assumes decision-makers are rational, and that they are equipped with the capabilities and resources to make the decision which will be most beneficial for them. However, people have reasons, motivations, and goals when deciding to do something — whether they do it well or badly, they do engage in thinking and reasoning when making a decision. We must capture how the decision-making process looks for the employee, as a bounded agent with limited resources and knowledge to make the best choice. This process is more realistically represented in behavioural economics. And yet, behaviour intervention programmes mix elements of both of these areas of economics. It is by considering these principles in tandem that we explore a more constructive approach to decision-support in organisations.

Contradictions in current practice

A bounded agent often settles for a satisfactory decision, by satisficing rather than optimising. For example, the agent can turn to ‘rules of thumb’ and make ad-hoc decisions, based on a quick evaluation of perceived probability, costs, gains, and losses. We can already imagine how these restrictions may play out in a busy workplace. This leads us toward identifying those points of engagement at which employees ought to be supported, in order to avoid poor choices.
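To make the contrast concrete, here is a minimal sketch in Python of how the two kinds of agent can diverge over the same choice; the option names and payoff numbers are entirely illustrative, not a model from the paper.

```python
# Hypothetical security dilemma: (option, perceived benefit, effort cost).
options = [
    ("follow the full password policy", 0.9, 0.6),
    ("reuse one strong password",       0.7, 0.2),
    ("write the password on a note",    0.4, 0.1),
]

def optimise(options):
    """Classical rational agent: evaluate every option, pick the best net payoff."""
    return max(options, key=lambda o: o[1] - o[2])

def satisfice(options, aspiration=0.25):
    """Bounded agent: accept the first option whose net payoff seems 'good enough'."""
    for option in options:
        if option[1] - option[2] >= aspiration:
            return option
    return options[-1]  # nothing met the aspiration; settle for the last option

print("optimiser picks: ", optimise(options)[0])   # reuse one strong password
print("satisficer picks:", satisfice(options)[0])  # follow the full password policy
```

The satisficer never evaluates the remaining options once one clears its aspiration level, which is precisely why the point at which support is offered matters.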

Continue reading By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Consider unintended harms of cybersecurity controls, as they might harm the people you are trying to protect

Well-meaning cybersecurity risk owners will deploy countermeasures in an effort to manage the risks they see affecting their services or systems. What is not often considered is that those countermeasures may produce unintended, negative consequences themselves. These unintended consequences can potentially be harmful, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including services of others).

Here, I describe a framework co-developed with several international researchers at a Dagstuhl seminar in mid-2019, resulting in an eCrime 2019 paper later in the year. We were drawn together by an interest in understanding unintended harms of cybersecurity countermeasures, and encouraging efforts to preemptively identify and avoid these harms. Our collaboration on this theme drew on our varied and multidisciplinary backgrounds and interests, including not only risk management and cybercrime, but also security usability, systems engineering, and security economics.

We saw it as necessary to focus on situations where there is often an urgency to counter threats, but where efforts to manage those threats have the potential to introduce harms of their own. As documented in the recently published seminar report, we explored specific situations in which potential harms may make resolving the overarching problems more difficult, and as such cannot be ignored – especially where the potential for harm means a countermeasure ought to be avoided altogether. Example case studies of particular importance include tech-abuse by an intimate partner, online disinformation campaigns, combating CEO fraud and phishing emails in organisations, and online dating fraud.

Consider disinformation campaigns, for example. Efforts to counter disinformation on social media platforms can include fact-checking and automated detection algorithms behind the scenes. These can reduce the burden on users to address the problem. However, automation can also reduce users’ scepticism towards the information they see; fact-checking can be appropriated as a tool by any one group to challenge viewpoints of dissimilar groups.

We then see how unintended harms can shift the burden of managing cybersecurity onto others in the ecosystem, without them necessarily expecting it or being prepared for it. Some vulnerable populations may be disadvantaged by the effects of a control more than others. An example may be legitimate users of social media who are removed from a platform, or have their content removed, due to traits or behaviours shared with malicious actors, e.g., referring to some of the same topics, irrespective of sentiment – an example of ‘Misclassification’, in the list below. If a user, user group, or their online activity is removed from the system, the risk owner for that system may not notice that problems have been created for those users: the control’s actions have excluded them from view. Anticipating and avoiding unintended harms is therefore crucial before any such outcomes can occur.
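As a toy illustration of that ‘Misclassification’ harm, consider a naive rule that removes any account mentioning a flagged topic, irrespective of sentiment. The topic list and posts below are entirely hypothetical, and real platforms use far richer signals, but the failure mode is the same.

```python
# Hypothetical moderation rule: remove accounts whose posts mention
# flagged topics, regardless of sentiment or intent.
FLAGGED_TOPICS = {"ransomware", "carding", "phishing"}

posts = {
    "fraudster":  "selling phishing kits, DM me",
    "researcher": "our new paper measures phishing kit deployment",
    "victim":     "I was hit by ransomware, how do I recover my files?",
}

for user, text in posts.items():
    words = set(text.lower().replace(",", " ").split())
    if words & FLAGGED_TOPICS:
        # The researcher and the victim are removed alongside the fraudster,
        # and the risk owner never sees the users the control excluded.
        print("removed:", user)
```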

Continue reading Consider unintended harms of cybersecurity controls, as they might harm the people you are trying to protect

UCL runs a digital security training event aimed at domestic abuse support services

In late November, UCL’s “Gender and IoT” (G-IoT) research team ran a “CryptoParty” (a digital security training event) followed by a panel discussion, which brought together frontline workers and support organisations, as well as policy and tech representatives, to discuss the risks that emerging technologies pose in the context of domestic violence and abuse. The event coincided with the International Day for the Elimination of Violence against Women, which takes place annually on the 25th of November.

Technologies such as smartphones or platforms such as social media websites and apps are increasingly used as tools for harassment and stalking. Adding to the existing challenges and complexities are evolving “smart”, Internet-connected devices that are progressively populating public and private spaces. These systems, due to their functionalities, create further opportunities to monitor, control, and coerce individuals. The G-IoT project is studying the implications of IoT-facilitated “tech abuse” for victims and survivors of domestic violence and abuse.

CryptoParty

The evening represented an opportunity for frontline workers and support organisations to upskill in digital security. Attendees had the chance to learn about various topics including phone, communication, Internet browser and data security. They were trained by a group of so-called “crypto angels”, meaning volunteers who provide technical guidance and support. Many of the trainers are affiliated with the global “CryptoParty” movement and the CryptoParty London specifically, as well as Privacy International, and the National Cyber Security Centre.

G-IoT’s lead researcher, Dr Leonie Tanczer, highlighted the importance of this event in light of the socio-technical research that the team has pursued so far: “Since January 2018, we worked closely with the statutory and voluntary support sector. We identified various shortcomings in the delivery of tech abuse provisions, including practice-oriented, policy, and technical limitations. We set up the CryptoParty to bring together different communities to holistically tackle tech abuse and increase the technical security awareness of the support sector.”

Continue reading UCL runs a digital security training event aimed at domestic abuse support services

Find Security Champions in Blends of Organisational Culture

I was at the EuroUSEC ’17 workshop in Paris at the end of April. Our own Angela Sasse was also there to deliver the keynote talk, and Ruba Abu-Salma presented our paper “The Security Blanket of the Chat World: An Analytic Evaluation and a User Study of Telegram” (which was based on research by undergraduate students studying UCL’s COMP3096 “Research Group Project” module). I presented secondary analysis, conducted with Ingolf Becker and Angela Sasse, of a survey deployed at a large partner organisation. This analysis builds on research we presented at the Symposium on Usable Privacy and Security (SOUPS) in 2016. Based on survey responses and voluntary free-text comments, we saw potential for employees to inform policy from the ‘ground up’, in contradiction to the current trend for identifying security champions as local representatives of pre-determined policy.

Top-down security policies

Organisational policies are intended to promote a unified approach to security, one that all the organisation’s employees are expected to follow. If security procedures and mechanisms are unusable, policies risk being seen as impossible to follow, or may be sidelined if they lack clear relevance to business goals. This can result in deliberate or unwitting non-compliance, and workarounds to prescribed procedures.

Organisations may appoint security champions as local representatives who promote policy in their part of the organisation. However, these security champions can be effective only if policy is workable. Encouraging ‘top-down’ policy compliance assumes that policy is correct, complete, and appropriate; it also assumes that policy applies to everyone equally and that employees have no role to play in shaping effective policy. Our analysis explores the potential for employees to inform effective policies, in particular whether it was possible to (i) identify local pockets of security expertise, and (ii) target engagement with employees that involves them in the creation of workable security solutions.

Identifying security champions ‘from the ground up’

Level | Attitude | Approach
1 | Uninfluenced | Security behaviour is driven by personal knowledge.
2 | Technically Controlled | Technical controls enforce compliance with policy.
3 | Ad-hoc Knowledge and Application | Shallow understanding of policy; knowledge absorbed from the surrounding work environment.
4 | Policy Compliant | Comprehensive knowledge and understanding of policy; willing policy compliance; a role model for the organisation’s security culture.
5 | Active Approach to Security | Actively promotes and advances the security culture; carries the intent of policy into work activities; leverages well-understood values that support both security and business.
Employee security – Attitude-Levels. We studied an organisation with IT systems, so there were no participants at Level 1.

A scenario-based survey was deployed in the partner company. Scenarios were based upon in-depth interviews with employees that explored security behaviours in the workplace. Each scenario involved a dilemma, where fixed options described different responses and included an element of non-compliance or an implicit cost. Participant choices indicate their Attitude Level (above) and Behaviour Type (below), which we recorded across groups of employees to characterise the security culture of the organisation as a whole and of four specific divisions (a toy sketch of this aggregation follows the Behaviour-Types table). Both the interviews and the survey represent a cross-section of divisions, locations, and age groups. We collected 608 survey responses; crucially, the survey allowed participants to comment on the scenarios and the available options, and we also analysed the 267 additional free-text comments that were provided.

Behaviour Type | Description
Individualists | Rely on self for solutions
Egalitarians | Rely on social or group solutions
Hierarchists | Rely on existing systems or technologies
Fatalists | Take a ‘naive’ approach, believing that their actions are not significant in creating outcomes
Behaviour-Types
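As a toy sketch of that aggregation, suppose each scenario option is coded to one Behaviour Type; a participant’s choices can then be tallied into a profile, and profiles pooled per division. The option-to-type mapping and responses below are entirely hypothetical, not the study’s actual coding scheme.

```python
from collections import Counter

# Hypothetical coding of one scenario's answer options to Behaviour Types.
option_to_type = {
    "A": "Individualist",  # resolves the dilemma alone
    "B": "Egalitarian",    # turns to colleagues for help
    "C": "Hierarchist",    # defers to IT or the prescribed process
    "D": "Fatalist",       # assumes nothing they do will matter
}

# One respondent's choices across five scenarios (illustrative data).
responses = ["B", "C", "B", "A", "B"]

profile = Counter(option_to_type[choice] for choice in responses)
print(profile)                              # Counter({'Egalitarian': 3, ...})
print("dominant type:", profile.most_common(1)[0][0])
```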

Continue reading Find Security Champions in Blends of Organisational Culture

Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected in WannaCry ransomware attack (source Wikipedia, User:Roke)

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments, cannot take ten minutes to ponder each email before deciding how to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users, and builds resilience for the organisation in general.
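For instance, one small component of such automated blocking could be a server-side rule that quarantines messages carrying executable attachments, sparing users that judgement entirely. This is a minimal sketch using Python’s standard email library; the extension list is illustrative, and a real mail gateway would combine many more signals.

```python
import email
from email import policy

RISKY_EXTENSIONS = (".exe", ".js", ".scr", ".vbs")  # illustrative list only

def should_quarantine(raw_message: str) -> bool:
    """Return True if any attachment filename has a risky extension."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if filename.endswith(RISKY_EXTENSIONS):
            return True
    return False
```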

In an example of how to confuse users, The Register reports that City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. Users are thus simultaneously exhorted to be “vigilant” and not open unexpected emails, and required to open an email attachment in order to get that advice. The confusion resulting from such contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre (NCSC), offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, this finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other instead of working together to defeat the attackers, it is hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate, we are choosing as a society to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users and enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently-connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.
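One pattern worth factoring into those backup plans is versioning: write each backup to a fresh, timestamped snapshot rather than overwriting a single mirror, so that ransomware striking today cannot encrypt yesterday’s snapshot in place. Below is a minimal sketch with illustrative paths; a snapshot drive that stays permanently mounted remains exposed, so it should be disconnected between runs.

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(source: str, backup_root: str) -> Path:
    """Copy `source` into a new timestamped directory under `backup_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = Path(backup_root) / stamp
    shutil.copytree(source, destination)  # fails rather than overwriting an old snapshot
    return destination

# Illustrative usage (paths are hypothetical):
# snapshot("/home/alice/documents", "/mnt/backup")
```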

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.


This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

User-centred security awareness empowers employees to be the strongest defense

The release of our business whitepaper “Awareness is only the first step” was recently announced by Hewlett Packard Enterprise (HPE). The whitepaper is co-authored by HPE, UCL, and the UK government’s National Technical Authority for Information Assurance (CESG). The whitepaper emphasises how a user-centred approach to security awareness can empower employees to be the strongest link in defending their organisation. As Andrzej Kawalec, HPE’s Security Services CTO, notes in the press release:

“Users remain the first line of defense when faced with a dynamic and relentless threat environment.”

Security communication, education, and training (CET) in organisations is intended to align employee behaviour with the security goals of the organisation. Security managers conduct regular security awareness activities – familiar vehicles for awareness programmes, such as computer-based training (CBT), can cover topics such as password use, social media practices, and phishing. However, there is limited evidence for the effectiveness or efficiency of CBT, and a lack of reliable indicators means that it is not clear whether recommended security behaviour is followed in practice. If the design and delivery of CET programmes do not consider the individual, organisations cannot be certain of achieving the intended outcomes. As Angela Sasse comments:

“Many companies think that setting up web-based training packages is a cost-effective way of influencing staff behavior and achieving compliance, but research has provided clear evidence that this is not effective – rather, many staff resent it and suffer from ‘compliance fatigue.’”

HPE awareness maturity curve

The whitepaper describes a path to guide the involvement of employees in their own security, as shown in the HPE awareness maturity curve above. To change security behaviours, a company needs to invest in the security knowledge and skills of its employees, and respond to employee needs differently at each stage.

Continue reading User-centred security awareness empowers employees to be the strongest defense