The Acropalypse vulnerability in Windows Snip and Sketch, lessons for developer-centered security

Acropalypse is a vulnerability first identified in the Google Pixel phone screenshot tool, where after cropping an image, the original would be recoverable. Since the part of the image cropped out might contain sensitive information, this was a serious security issue. The problem occurred because the Android API changed behaviour from truncating files by default to leaving existing content in place. Consequently, the beginning of the resulting image file contains the cropped content, but the end of the original file is still present. Image viewers ignore this data and open the file as usual, but with some clever analysis of the compression algorithm used, the original image can (partially) be recovered.
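
The underlying pitfall is easy to demonstrate in any language: if an existing file is opened for writing without being truncated, a shorter write leaves the tail of the old content in place, and image viewers silently ignore those trailing bytes. Below is a minimal, platform-independent Python sketch of the pitfall described above – illustrative only, not the actual Android or Windows API calls:

```python
import os

# Create an "original" file standing in for the full screenshot,
# containing data we would not want to leak.
with open("screenshot.png", "wb") as f:
    f.write(b"ORIGINAL-IMAGE-DATA-WITH-SECRETS" * 4)   # 128 bytes

cropped = b"CROPPED-IMAGE-DATA"                         # 18 bytes

# Buggy save: opening an existing file for writing without truncation
# leaves the old tail of the file in place after the new content.
with open("screenshot.png", "r+b") as f:
    f.write(cropped)
print(os.path.getsize("screenshot.png"))                # still 128 bytes

# Correct save: truncate, either implicitly by reopening in "wb" mode
# or explicitly after writing the new content.
with open("screenshot.png", "r+b") as f:
    f.write(cropped)
    f.truncate()
print(os.path.getsize("screenshot.png"))                # now 18 bytes
```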

Shortly after the vulnerability was announced, someone noticed that the Windows default screenshot tool, Snip and Sketch, appeared to have the same problem, despite being an entirely unrelated application on a different operating system. I also found a similar problem back in 2004 relating to JPEG thumbnail images. When the same vulnerability keeps re-occurring, it suggests a systemic problem in how we build software, so I set out to understand more about the reasons for the vulnerability existing in Windows Snip and Sketch.

A flawed API

The first problem I found is that the modern Windows API for saving files has a flaw very similar to the Android one: existing files are not truncated by default. Arguably the Windows vulnerability is worse because, unlike on Android, there is no option to truncate files. The Windows documentation is, at best, unclear on the need to truncate files and on what code is needed to achieve the desired result.

This wasn’t always the case. The old Win32 API for saving a file was (roughly) to show a file picker, get the filename the user selected, and then open the file. To open a file, the programmer must specify whether to overwrite the file or not, and example code usually does overwrite the file. However, the new “more secure” Universal Windows Platform (UWP) sandboxes the file picker in a separate process, allowing neat features like capability-based access control. It creates the file if needed and returns a handle which, if the selected file exists, will not overwrite the existing content.

From the documentation, however, a programmer would understandably assume that the file would be empty:

“The file name, extension, and location of this storageFile match those specified by the user, but the file has no content.”

Continue reading The Acropalypse vulnerability in Windows Snip and Sketch, lessons for developer-centered security

Exploring an Attack on Image Scaling Algorithms

In their 2019 publication ‘Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms’, Xiao et al. demonstrated a fascinating and frightening exploit against several commonly used scaling algorithms. Through what Quiring et al. referred to as adversarial preprocessing, they created an attack image that closely resembles one image (the decoy) but portrays a completely different image (the payload) when scaled down. In their example (below), an image of sheep, once scaled down, suddenly shows a wolf.

On the left, a group of sheep can be seen in a slightly stretched-out photo (the decoy, the original attack image). When scaled down to the correct dimensions (right), the image shows a grey wolf (the payload). This is an example of an attack image.

These attack images can be used in a number of scenarios, particularly data poisoning of deep learning datasets and covert dissemination of information. Deep learning models require large datasets for training. A series of carefully crafted attack images planted into public datasets can poison these models, for example by reducing the accuracy of object classification. Essentially all models are trained with images scaled down to a fixed size (e.g. 229 × 229) to reduce the computational load, so these attack images are highly likely to work if their dimensions are correctly configured. As these attack images hide their malicious payload in plain sight, they also evade detection. Xiao et al. described how an attack image could be crafted for a specific device (e.g. an iPhone XS) so that the iPhone XS browser renders the malicious image instead of the decoy image. This technique could be used to discreetly propagate payloads such as illegal advertisements.
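
The attack works because common downscaling routines compute each output pixel from only a small set of source pixels; controlling just those pixels therefore controls the entire downscaled result, while the rest of the image can be left looking like the decoy. The full attack solves an optimisation problem so that even the modified pixels blend into the decoy; the toy numpy sketch below (my own illustration of the core mechanism, not the authors’ code, with a hand-rolled nearest-neighbour grid standing in for a real scaling library) shows only this sampling trick:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a large "decoy" (the sheep photo) and a small "payload" (the wolf).
decoy = rng.integers(100, 160, size=(512, 512), dtype=np.uint8)
payload = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

def sampled_indices(src_len, dst_len):
    """Source indices read by a simple nearest-neighbour downscale."""
    return (np.arange(dst_len) * (src_len / dst_len)).astype(int)

rows = sampled_indices(decoy.shape[0], payload.shape[0])
cols = sampled_indices(decoy.shape[1], payload.shape[1])

# Craft the attack image: copy the decoy, then overwrite only the pixels
# that the downscaler will actually read with the payload's pixel values.
attack = decoy.copy()
attack[np.ix_(rows, cols)] = payload

# At most 1/64 of the pixels differ, so at full size the image still looks
# like the decoy, but the downscale reproduces the payload exactly.
print("fraction of pixels changed:", (attack != decoy).mean())
print("payload recovered:", np.array_equal(attack[np.ix_(rows, cols)], payload))
```

The full attack generalises this idea to bilinear and bicubic kernels by targeting the few high-weight source pixels on which each output pixel depends.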

The natural stealthiness of this attack is a dangerous factor, but on top of that, it is also relatively easy to replicate. Xiao et al. published their source code in a GitHub repository, which anyone can run to create their own attack images. The maths behind the method is also well described in the paper, which allowed my group to replicate the attack, without referencing the authors’ source code, as coursework for UCL’s Computer Security II module; the coursework required us to select an attack detailed in a conference paper and replicate it. Our implementation of the attack is available in our GitHub repository. While working on the coursework, we discovered a relatively simple way to stop these attack images from working and even allow the original content to be viewed. This is shown in the series of images below.

Continue reading Exploring an Attack on Image Scaling Algorithms

Still treating users as the enemy: entrapment and the escalating nastiness of simulated phishing campaigns

Three years ago, we made the case against phishing your own employees through simulated phishing campaigns. They do little to improve security: click rates tend to be reduced (temporarily) but not to zero – and each remaining click can enable an attack. They also have a hidden cost in terms of productivity – employees have to spend time processing more emails that are not relevant to their work, and then spend more time pondering whether to act on emails. In a recent paper, Melanie Volkamer and colleagues provided a detailed listing of the pros and cons from the perspectives of security, human factors and law. One of the legal risks was finding yourself in court with one of the 600-pound digital enterprise gorillas for trademark infringement – Facebook objected to their trademark and domain being impersonated. They also likely don’t want their brand to be used in attacks because, contrary to what some vendors tell you, being tricked by your employer is not a pleasant experience. Negative emotions experienced with an event often transfer to anyone or anything associated with it – and negative emotions are not what you want associated with your brand if your business depends on keeping billions of users engaging with your services as often as possible.

Recent tactics employed by the providers of phishing campaigns can only be described as entrapment – to “demonstrate” the need for their services, they create messages that almost everyone will click on. Employees of the Chicago Tribune and GoDaddy, for instance, received emails promising bonuses. Employees had their hopes of extra pay raised and then cruelly dashed and, on top of that, were hectored for being careless about phishing. Some employees vented their rage publicly on Twitter, and the companies involved apologised. The negative publicity may eventually be forgotten, but the resentment of employees who feel not only tricked but humiliated and betrayed will not fade any time soon. The increasing nastiness of entrapment has seen employees targeted with promises of COVID vaccinations from their employers – who then find themselves ridiculed for their gullibility instead of lauded for their willingness to help.

Continue reading Still treating users as the enemy: entrapment and the escalating nastiness of simulated phishing campaigns

UCL runs a digital security training event aimed at domestic abuse support services

In late November, UCL’s “Gender and IoT” (G-IoT) research team ran a “CryptoParty” (digital security training event) followed by a panel discussion, which brought together frontline workers, support organisations, and policy and tech representatives to discuss the risks of emerging technologies for domestic violence and abuse. The event coincided with the International Day for the Elimination of Violence against Women, which takes place annually on the 25th of November.

Technologies such as smartphones or platforms such as social media websites and apps are increasingly used as tools for harassment and stalking. Adding to the existing challenges and complexities are evolving “smart”, Internet-connected devices that are progressively populating public and private spaces. These systems, due to their functionalities, create further opportunities to monitor, control, and coerce individuals. The G-IoT project is studying the implications of IoT-facilitated “tech abuse” for victims and survivors of domestic violence and abuse.

CryptoParty

The evening represented an opportunity for frontline workers and support organisations to upskill in digital security. Attendees had the chance to learn about various topics including phone, communication, Internet browser and data security. They were trained by a group of so-called “crypto angels”, meaning volunteers who provide technical guidance and support. Many of the trainers are affiliated with the global “CryptoParty” movement and the CryptoParty London specifically, as well as Privacy International, and the National Cyber Security Centre.

G-IoT’s lead researcher, Dr Leonie Tanczer, highlighted the importance of this event in light of the socio-technical research that the team pursued so far: “Since January 2018, we worked closely with the statutory and voluntary support sector. We identified various shortcomings in the delivery of tech abuse provisions, including practice-oriented, policy, and technical limitations. We set up the CryptoParty to bring together different communities to holistically tackle tech abuse and increase the technical security awareness of the support sector.”

Continue reading UCL runs a digital security training event aimed at domestic abuse support services

An untapped resource to reproduce studies

Science is generally accepted to operate by conducting specially-designed structured observations (such as experiments and case studies) and then interpreting the results to build generalised knowledge (sometimes called theories or models). An important, nay necessary, feature of the social operation of science is transparency in the design, conduct, and interpretation of these structured observations. We’re going to work from the view that security research is science just like any other, though of course as its own discipline it has its own tools, topics, and challenges. This means that studies in security should be replicable, reproducible, or at least able to be corroborated. Spring and Hatleback argue that transparency is just as important for computer science as it is for experimental biology. Rossow et al. also persuasively argue that transparency is a key feature for malware research in particular. But how can we judge whether a paper is transparent enough? The natural answer would seem to be if it is possible to make a replication attempt from the materials and information in the paper. Forget how often the replications succeed for now, although we know that there are publication biases and other factors that mess with that.

So how many security papers published in major conferences contain enough information to attempt a reproduction? In short, we don’t know. Anecdotally, Jono and a couple of students looked through the IEEE S&P 2012 proceedings in 2013, and the results were pretty grim. But heroic effort from a few interested parties is not a sustainable answer to this question. We’re here to propose a slightly more robust solution: master’s students in security should attempt to reproduce published papers as their capstone thesis work. This has several benefits, and several challenges. In the following we hope to convince you that the challenges can be mitigated and the benefits are worth it.

This should be a choice, but one that master’s students should want to make. If anyone has a great new idea to pursue, they should be encouraged to do so. However, here in the UK, the dissertation process is compressed into the summer and there’s not always time to prototype and pilot study designs. Selecting a paper to reproduce, with a documented methodology in place, lets the student get to work faster. There is still a start-up cost; students will likely have to read several abstracts to shortlist a few workable papers, and then read these few papers in detail to select a good candidate. But learning to read, shortlist, and study academic papers is an important skill that all master’s students should be attempting to, well, master. This style of project would provide them with an opportunity to practice these skills.

Briefly, let’s be clear what we mean by reproduction of published work. Reproduction isn’t just one thing: there’s reproduce, replicate, corroborate, and controlled variation (see Feitelson for details). Not everything is amenable to reproduction. For example, case studies (such as attack papers) or natural experiments are often interesting precisely because they are unique. Corroborating some aspect of the case may be possible with a new study, and such a study is also valuable, but this is not the sort of reproduction we have in mind to advocate here.

Continue reading An untapped resource to reproduce studies

Managing conflicts between ethical principles and job duties

Despite its international context, discussion of the social implications of technology is surprisingly parochial. For example, the idea that individuals should have control over how their data is used is considered radical and innovative in the US, despite being commonly accepted in Europe since the early 1980s. The same applies to including professional and ethical training as part of computer science curricula – while a recent move in many US institutions, it has been mandatory for BCS-accredited courses in the UK for as long as I can remember. One lesson from the UK’s experience, which I think would help institutions following its lead, is that making students aware of ethics is not enough to protect society and individuals. There also need to be strong codes of conduct, built on ethical principles, which practitioners are expected to follow.

For most computer science practitioners in the UK, the codes of conduct of relevance are those of the field’s professional bodies – the BCS and the IET. They say roughly what you might expect – do a good job, follow instructions, avoid conflicts of interest, and consider the public interest. I’ve always found these a bit unsatisfactory, treating ethical decisions as the uncontroversial product of applying consistent rules of professional conduct. These rules, however, don’t help with reality, where practitioners face decisions in which every option comes at substantial personal or financial cost, where the rules are inconsistent with each other and with ethical principles, and where there is substantial uncertainty about the consequences of any action.

That’s why I am pleased to see that the ACM ethical code released today goes some way towards acknowledging the complex interaction between technology and society, and provides tools to help practitioners navigate the challenges. In particular, it gives some guidance on a topic I have long felt is sorely lacking in the BCS and IET codes – what to do when instructions from your employer conflict with the public interest. At best, the BCS and IET codes are silent on how to handle such situations – if anything, the BCS code puts more emphasis on acting “in accordance” with employer instructions than on the public interest, for which members need only “have due regard”. In contrast, the ACM code is clear “that the public good is the paramount consideration”.

The ACM code also is clear that ethical practices are the responsibility of all. Management should enact rules that require ethical practices – they “should pursue clearly defined organizational policies that are consistent with the Code and effectively communicate them to relevant stakeholders. In addition, leaders should encourage and reward compliance with those policies, and take appropriate action when policies are violated.” But also, the code puts the duty on employees, through individual or collective action, to follow ethical practices even if management has not discharged their duty – “rules that are judged unethical should be challenged”.

The courses of action discussed in the ACM code are not limited to challenging rules; they extend to actively disrupting unethical practices – “consider challenging the rule through existing channels before violating the rule. A computing professional who decides to violate a rule because it is unethical, or for any other reason, must consider potential consequences and accept responsibility for that action”.

One specific example of such disruptive action is whistleblowing, which the code recognizes as a legitimate course of action in the right circumstances – “if leaders do not act to curtail or mitigate such risks, it may be necessary to ‘blow the whistle’ to reduce potential harm”. However, my one disappointment in the code is that such disclosures are restricted to being made only through the “appropriate authorities” even though such authorities are often ineffective at instituting organizational change or protecting whistleblowers.

Implementing ethical policies is not without cost, and when doing so runs against business opportunities, profit often wins. It is nevertheless helpful that the code suggests that “in cases where misuse or harm are predictable or unavoidable, the best option may be to not implement the system”. The UK banks currently saying they can’t prevent push-payment fraud, resulting in life-changing losses to their customers, would do well to consider this principle. The current situation, where customers are held liable despite taking a normal level of care, is not an ethical practice.

Overall, I think this code is helpful and I am impressed at the breadth and depth of thought that clearly went into it. The code is also timely, as practitioners are now discovering their power to disrupt unethical practices through collective action and could take advantage of being given the permission to do so. The next task will be how to support and encourage the adoption of ethical principles and counteract the powerful forces that run into conflict with their practice.

Scanning beyond the horizon: long-term planning for cybersecurity and the post-quantum challenge

I recently came across an interesting white paper published by PwC, “A false sense of security? Cyber-security in the Middle East”. It is interesting for a number of reasons. Most obviously, I guess, it is about an area of the world a bit removed from my immediate experience in the West, and one which faces many well-reported challenges. Indeed, as the PwC paper reports, companies and governments in the region seem to suffer more cyberattacks, resulting in bigger financial losses, than anywhere else in the world.

The paper confirms that many of the problems faced by companies and governments in the Middle East are, as of course one would expect, exactly those faced by their Western counterparts – too often, the cybersecurity industry responds to incidents in a fire-fighting style, rolling out patches in rushed knee-jerk reactions to imminent threats.

The way to counteract these problems is, of course, to train cybersecurity professionals who will be capable of making appropriate strategic and tactical investments in security and able to respond better to attacks. All well and good, but there is a global skills deficit in the cybersecurity industry, and this problem seems particularly acute in the Middle East – a notable contributory factor to the problems experienced in the region. The problem needs some long-term thinking: in the average user, we need to encourage good security behaviours, which are learned over many years; in the security profession, we need to ensure that there is sufficient upcoming talent to fill our growing needs over the next century.

Exploring this topic a bit, I came across a company called SiConsult, a security services provider (with which I have no personal connection) with offices in the Middle East. They are running an initiative that provides students (or, indeed, I think anyone) with an interesting opportunity. They have been thinking about cryptography in the post-quantum world, and how to develop solutions and relevant expertise in the long term.

All public key cryptography as we currently know it may be rendered insecure by the deployment of quantum computers. Your Internet connection to the bank, the keys protecting your Dropbox, and your secure messaging applications will all be compromised. But a quantum computer that can run Shor’s algorithm, which means large numbers can be factorized in polynomial time, is still maybe ten years away (or five, or twenty, or … ). So why should we care now? Well, the consequences of losing the protection of good public-key crypto would be very serious and, consequently, NIST (the US’s National Institute of Standards and Technology) is running a process to standardize quantum-resistant algorithms. The first round of submissions has just closed, but we will have to wait until 2025 for draft standards, which could be too late for some use cases.

As a result of the process timeline, companies and academics are likely to search for their own solutions long before NIST standardizes theirs. SiConsult, the company I mentioned, is inviting students (or anyone else) to develop a quantum-safe messaging application for a small prize – the Post Quantum Innovation Challenge. What is interesting is that the company’s motivation here is not purely financial – they are not looking to retain ownership of any designs or applications submitted to the competition – but rather to spark interest in post-quantum cryptography, search for new cybersecurity talent, and encourage cybersecurity education, especially in the Middle East.

Initiatives like the Post Quantum Innovation Challenge are needed to energise those who may be considering a career in cybersecurity, to make sure that the talent pipeline keeps flowing for years to come. Importantly, the barrier to entry for PQIC is relatively low: anyone with an interest in security should consider entering. Perhaps it will go a little way towards solving both the quantum and the education long-term problems.

Practicing a science of security

Recently, at NSPW 2017, Tyler Moore, David Pym, and I presented our work on practicing a science of security. The main argument is that security work – both in academia and in industry – already looks a lot like other sciences. The paper is also an introduction to modern philosophy of science for security, and a survey of the existing science-of-security discussion within computer science. The goal is to help us ask more useful questions about what we can do better in security research, rather than get distracted by asking whether security can be scientific.

Most people writing about a science of security conclude that security work is not a science, or at best rather hopefully conclude that it is not a science yet but could be. We identify five common reasons people present as to why security is not a science: (1) experiments are untenable; (2) reproducibility is impossible; (3) there are no laws of nature in security; (4) there is no single ontology of terms to discuss security; and (5) security is merely engineering.

Through our introduction to modern philosophy of science, we demonstrate that all five of these complaints are misguided. They rely on an old conception of what counts as science that was largely abandoned in the 1970s, when the features of biology came to be recognized as important and independent from the features of physics. One way to understand what the five complaints actually allege is that security is not physics. But that’s much less impactful than claiming it is not science.

More importantly, we have a positive message on how to overcome these challenges and practice a science of security. Instead of complaining about untenable experiments, we can discuss structured observations of the empirical world. Experiments are just one type of structured observation. We need to know what counts as a useful structure to help us interpret the results as evidence. We provide recommendations for use of randomized control trials as well as references for useful design of experiments that collect qualitative empirical data. Ethical constraints are also important; the Menlo Report provides a good discussion on addressing them when designing structured observations and interventions in security.

Complaints about reproducibility are really targeted at the challenge of interpreting results. Astrophysics and paleontology do not reproduce experiments either, but they are clearly still sciences. There are different senses of “reproduce”, from repeating exactly to corroborating with similar observations in a different context. There are also notions of statistical reproducibility, such as using the right tests and having enough observations to justify a statistical claim. The complaint is unfair in essentially demanding all eight types of reproducibility at once, when realistically any individual study will only be able to probe a couple of types at best. Seen with this additional nuance, security faces similar challenges in reproducibility and in interpreting evidence as other sciences.

A law of nature is a very strange thing to ask for when we have constructed the devices we are studying. The word “law” has had a lot of sticking power within science. The word was perhaps used in the 1600s and 1700s to imply a divine designer, thereby making the Church more comfortable with the work of the early scientists. The intellectual function we really care about is that a so-called “law” lets us generalize from particular observations. Mechanistic explanations of phenomena provide a more useful and approachable goal for our generalizations. A mechanism “for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon” (pg 2).

MITRE wrote the original statement that a single ontology was needed for a science of security. They also happen to have a big research group funded to create such an ontology. We synthesize a more realistic view from Galison, Mitchell, and Craver. Basically, diverse fields contribute to a science of security by collaboratively adding constraints on the available explanations for a phenomenon. We should expect our explanations of complex topics to reflect that complexity, and so complexity may be a mark of maturity, rather than (as is commonly taken) a mark that security has as yet failed to become a science by simplifying everything into one language.

Finally, we address the relationship between science and engineering. In short, people have tried to reduce science to engineering and engineering to science; neither attempt is convincing. The line between the two is blurry, but it is useful. Engineers generate knowledge, and scientists generate knowledge. Scientists tend to want to explain why, whereas engineers tend to want to predict a change in the future based on something they make. Knowing why may help us make changes. Making changes may help us understand why. We draw on the work of Dear and Leonelli to bring out this nuanced, mutually supportive relationship between science and engineering.

Security already can accommodate all of these perspectives. There is nothing here that makes it seem any less scientific than life sciences. What we hope to gain from this reorientation is to refocus the question about cybersecurity research from ‘is this process scientific’ to ‘why is this scientific process producing unsatisfactory results’.

Should you phish your own employees?

No. Please don’t. It does little for security but harms productivity (because staff spend ages pondering emails, and not answering legitimate ones), upsets staff and destroys trust within an organisation.

Why is phishing a problem?

Phishing is one of the more common ways by which criminals gain access to companies’ passwords and other security credentials. The criminal sends a fake email to trick employees into opening a malware-containing attachment, clicking on a link to a malicious website that solicits passwords, or carrying out a dangerous action like transferring funds to the wrong person. If the attack is successful, criminals could impersonate staff, gain access to confidential information, steal money, or disrupt systems. It’s therefore understandable that companies want to block phishing attacks.

Perimeter protection, such as blocking suspicious emails, can never be 100% accurate. Therefore companies often tell employees not to click on links or open attachments in suspicious emails.

The problem with this advice is that it conflicts with how technology works and with how employees get their jobs done. Links are meant to be clicked on; attachments are meant to be opened. For many employees, their job consists almost entirely of opening attachments from strangers and clicking on links in emails. Even a moderately well targeted phishing email will almost certainly succeed in getting some employees to click on it.

Companies try to deal with this problem through more aggressive training, particularly sending out mock phishing emails that exhibit some of the characteristics of phishing emails but actually come from the IT staff at the company. The company then records which employees click on the link in the email, open the attachment, or provide passwords to a fake website, as appropriate.

The problem is that mock-phishing causes more harm than good.

What harm does mock-phishing cause?

I hope no company would publicly name and shame employees who open mock-phishing emails, but effectively telling your staff that they failed a test and need remedial training will, despite best intentions, make them feel ashamed. If, as is often recommended, employees who repeatedly open mock-phishing emails are subjected to disciplinary procedures, not only will mock phishing lead to stress and a consequent loss of productivity, it will also make employees less likely to report when they have clicked on a real phishing email.

Alienating your employees in this way is really the last thing a company should do if it wants to be secure – something Adams & Sasse pointed out as early as 1999. It is extremely important that companies learn when a phishing email has been opened, because there is a lot that can be done to prevent or limit harm. Contrary to popular belief, attacks don’t generally happen “at the speed of light” (it took the Target hackers three weeks from the point of the initial breach to steal data). Promptly cleaning potentially infected computers, revoking compromised credentials, and analysing network logs are extremely effective, but only work if employees feel that they are on the same side as the IT staff.

More generally, mock-phishing conflicts with and harms the trust relationship between the company and its employees (because the company is continually probing them for weakness) and between employees (because mock phishing normally impersonates fellow employees). Kirlappos and Sasse showed that trust is essential for maintaining employee satisfaction and for creating organisational resilience, including the ability to comply with security policies. If unchecked, prolonged resentment within an organisation achieves exactly the opposite – it increases the risk of insider attacks, which in the vast majority of cases start with disgruntlement.

There are however ways to achieve the same goals as mock phishing without the resulting harm.

Measuring resilience against phishing

Companies are right to want to understand how vulnerable they are to attack, and mock-phishing seems to offer this. One problem however is that the likelihood of opening a phishing email depends mainly on how well it is written, and so mock-phishing campaigns tell you more about the campaign than the organisation.

Instead, because every organisation inevitably receives many phishing emails, companies don’t need to send out their own. Use “genuine” phishing emails to collect the data needed, but be careful not to deter reporting. Realistically, however, phishing emails are going to be opened regardless of what steps are taken (short of cutting off Internet email completely), so an organisation’s security strategy should accommodate this.

Reducing vulnerability to phishing

Following mock-phishing with training might seem like the perfect way to get employees’ attention, but it is actually an ineffective way to reduce an organisation’s vulnerability to phishing. Caputo et al. tried this out and found that training had no significant effect, regardless of how it was phrased (including using the latest nudging techniques from behavioural economists, an idea many security practitioners find very attractive). In this study, the organisation’s help desk staff were overwhelmed by calls from panicked employees – and when told it was a “training exercise”, many expressed frustration and resentment towards the security staff who had tricked them. Even if phishing-prevention training could be made to work, because opening a malicious email is so close to what people do as part of their job, it would disrupt business by causing employees to delete legitimate emails or spend too long deciding whether to open them.

A strong, unambiguous, and reliable cue that distinguishes phishing emails from legitimate ones would help, but until we have secure end-to-end encrypted and authenticated email, this isn’t possible. We are left with the task of designing security systems accepting that some phishing emails will be opened, rather than pretending they won’t be and blaming breaches on employees that fail to meet an unachievable bar. If employees are consistently being told that their behaviour is not good enough but not being given realistic and actionable advice on how to do better, it creates learned helplessness, with all the negative psychological consequences.

Comply with industry “best-practice”

Something must be done to protect the company; mock-phishing is something; therefore it must be done. This perverse logic is the root cause of much poor security, where organisations think they must comply with so-called “best practice” – seldom more than out-of-date folk tradition – or face penalties when there is a breach. It’s for this reason that bad security guidance, such as password complexity rules, persists long after it has been shown to be ineffective.

Compliance culture, where rules are blindly followed without there being evidence of effectiveness, is one of the worst reasons to adopt a security practice. We need more research on how to develop technology that is secure and that supports an organisation’s overall goals. We know that mock-phishing is not effective, but what’s the right combination of security advice and technology that will give adequate protection, and how do we adapt these to the unique situation of each company?

What to do instead?

The security industry should take the lead of the aerospace industry and recognise that “blame and train” isn’t an effective or acceptable strategy. The attraction of mock-phishing exercises to security staff is that they can say they are “doing something”, and they like the idea of being able to measure behaviour change as a result – even though research points the other way. If vendors claim they have examples of mock-phishing training reducing clicks on links, it is usually because employees have been trained to recognise only the vendor’s mock phishing emails, or are frightened into not clicking on any links – and nobody measures the losses that occur because emails from actual or potential customers or suppliers go unanswered. “If security doesn’t work for people, it doesn’t work.”

When the CIO of a merchant bank found that mock phishing caused much anger and resentment among highly paid traders, but no reduction in clicking on links, he started to listen to what it looked like from their side. “Your security specialists can’t tell if it is a phishing email or not – why are you expecting me to be able to do that?” After seeing the problem from their perspective, he instead added a button to the corporate mail client labelled “I’m not sure”, and asked staff to use it to forward emails they were unsure about to the security department. The security department then let the employee know whether the email was safe, and also listed all identified malicious emails on a website that employees could check before forwarding anything. Clicking on phishing links dropped to virtually zero – and staff started talking to each other about the phishing emails they had seen, and what the attacker was trying to do.

Security should deal with the problems that actually face the company; preventing phishing wouldn’t have stopped recent ransomware attacks. Assuming phishing is a concern, then, phishing emails should be blocked wherever that can be done with adequate accuracy. Some will get through, but with well-engineered and promptly patched systems, harm can be limited. Phishing-resistant authentication credentials, such as FIDO U2F, mean that stolen passwords are worthless. Common processes should be designed so that the easy option is the secure one, giving people time to think carefully about whether a request for an exception is legitimate. Finally, if malware does get onto company computers, compartmentalisation will limit damage, effective monitoring facilitates detection, and good backups allow rapid recovery.

An earlier version of this article was published by the New Statesman.

Online security won’t improve until companies stop passing the buck to the customer

It’s normally in the final seconds of a TV or radio interview that security experts get asked for advice for the general public – something simple, unambiguous, and universally applicable. It’s a fair question, and what the public want. But simple answers are usually wrong, and can do more harm than good.

For example, take the UK government’s Cyber Aware scheme to educate the public in cybersecurity. It recommends individuals choose long and complex passwords made out of three words. The problem with this advice is that the resulting passwords are hard to remember, especially as people have many passwords and use some infrequently. Consequently, they will be tempted to use the same password on multiple websites.

Password re-use is far more of a security problem than insufficiently complex passwords, so advice that doesn’t help people manage multiple passwords does more harm than good. Instead, I would recommend remembering your most important passwords (like banking and email) and storing the rest in a password manager. This approach isn’t perfect or suitable for everyone, but for most people it will improve their security.

Advice unfit for the real world

Cyber Aware also tells people not to write down their passwords, or let anyone else know them – banks require the same thing. But we know that people commonly share their banking credentials with family, for legitimate reasons. People also realise that writing down passwords is a pretty good approach if you’re only worried about internet hackers, rather than people who can get close to you to see the written notes. Security advice that doesn’t stand up to scrutiny or doesn’t fit with people’s lives will be ignored – and will discredit the organisation offering it.

Because everyone’s situation is different, good security advice should include helping people to understand what risks they should be worried about, and to take steps that mitigate these risks. This advice doesn’t have to be complicated. Teen Vogue published a tutorial on how to select and configure a secure messaging tool, which very sensibly explains that if you are more worried about invasions of privacy from people who can get their hands on your phone, you should make different choices than if you are just concerned about, for example, companies spying on you.

The Teen Vogue article was widely praised by security experts, in stark contrast to an article in The Guardian that made the eye-catching claim that encrypted messaging service WhatsApp is insecure, without making clear that this only applies in an obscure and extremely unlikely set of circumstances.

Zeynep Tufekci, a researcher studying the effects of technology on society, reported that the article was exploited to legitimise misleading advice given by the Turkish government that WhatsApp is unsafe, resulting in human rights activists using SMS instead – which is far easier for the government to censor and monitor.

The Turkish government’s “security advice” to move from WhatsApp to less secure SMS was clearly aimed more at assisting its surveillance efforts than at helping the activists to whom the advice was directed. Another case where the advice is more for the benefit of the organisation giving it is that of banks, where the small print of the terms and conditions gives incomprehensible security advice that isn’t really advice at all, but merely a legal technique that gives the banks wiggle room to refuse to refund victims of fraud.

Continue reading Online security won’t improve until companies stop passing the buck to the customer