Thoughts on the Future Implications of Microsoft’s Legal Approach towards the TrickBot Takedown

Just this week, Microsoft announced its takedown operation against the TrickBot botnet, carried out in collaboration with cybersecurity partners such as FS-ISAC, ESET, and Symantec. The takedown followed Microsoft’s successful application for a court order earlier this month, enabling it to carry out technical disruption against the botnet. Such legal processes are typical and necessary precursors to counter-operations of this kind.

However, what was of particular interest in this case was the legal precedent Microsoft (successfully) sought to set, which was based on breaches of copyright law. Specifically, they founded their claim on the alleged reuse (and misuse) of Microsoft’s copyrighted software – the Windows 8 SDK – by the TrickBot malware authors.

Now, it is clear that this takedown operation is unlikely to cripple the entirety of the TrickBot operation. As numerous researchers have found (e.g., Stone-Gross et al., 2011; Edwards et al., 2015), a takedown operation often works well in the short term, but its long-term effects are highly variable. More often than not, unless the operators are arrested and their infrastructure is seized, botnet operators tend to respond to such counter-operations by redeploying their infrastructure to new servers and ISPs, moving their operations to other geographic regions or new targets, and/or adapting their malware to become more resistant to detection and analysis. In fact, these are just some of the behaviours we observed in a case-by-case longitudinal study of botnets targeted by law enforcement (one of which involved Dyre, a predecessor of the TrickBot malware). A pre-print of this study is soon to be released.

So, no, I’m not proposing to discuss the long-term efficacy of takedown operations such as this. That is for another blog post.

Rather, what I want to discuss (or, perhaps more accurately, put forward as some initial thoughts) are the potential implications of Microsoft’s legal approach to obtaining the court order (a prerequisite for such operations) for future botnet takedowns, particularly in the area of malicious code reuse.


The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

As evidence produced by a computer is often used in court cases, there are necessarily presumptions about the correct operation of the computer that produces it. At present, based on a 1997 paper by the Law Commission, it is assumed that a computer operated correctly unless there is explicit evidence to the contrary.

The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.

This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.

LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s reliance on Colin Tapper’s 1991 statement that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to have bugs (including undiscovered ones), but the occurrence of a bug may not even be noticeable.

LLTT put forward three recommendations. First, a presumption that any particular computer system failure is not caused by software is not justified, even for software that has previously been shown to be very reliable. Second, evidence of previous computer failure undermines a presumption of current proper functioning. Third, the fact that a class of failures has not happened before is not a reason for assuming it cannot occur.


Transparency, evidence and dispute resolution

Despite the ubiquity of computers in everyday life, resolving a dispute regarding the misuse or malfunction of a system remains hard to do well. A recent example of this is the now-concluded Post Office trial, concerning the dispute between Post Office Limited and the subpostmasters who operate some Post Office branches on its behalf.

Subpostmasters offer more than postal services: savings accounts, payment facilities, identity verification, professional accreditation, and lottery services. These services can involve large amounts of money, and subpostmasters were held liable for losses at their branch. The issue is that the accounting is done by Horizon, a centralised accounting system operated by Post Office Limited, and subpostmasters claim that their losses result not from errors or fraud on their part but from malfunctions of, or malicious access to, Horizon.

This case is interesting not only because of its scale (a settlement agreement worth close to £58 million was reached) but also because it highlights the difficulty in reasoning about issues related to computer systems in court. The case motivated us to write a short paper presented at the Security Protocols Workshop earlier this year – “Transparency Enhancing Technologies to Make Security Protocols Work for Humans”. This work focused on how the liability of a party could be determined when something goes wrong, i.e., whether a customer is a victim of a flaw in the service provider’s system or whether the customer has tried to defraud the service provider.

Applying Bayesian thinking to dispute resolution

An intuitive way of thinking about this problem is to apply Bayesian reasoning. Jaynes makes a good argument that any logically consistent form of reasoning will lead to taking this approach. Following this approach, we can consider the odds form of Bayes’ theorem, expressed in the following way.

Odds form of Bayes' theorem
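The equation pictured above is the standard odds form. Writing \(H\) for the hypothesis that a party is liable, \(\neg H\) for its negation, and \(E\) for the evidence, it can be stated as:

```latex
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
  \;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
  \times
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
```

The likelihood ratio is the factor the evidence contributes: evidence that is nearly as probable for a non-liable party as for a liable one barely moves the odds.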

There is a good reason for considering the odds form of Bayes’ theorem over its standard form – it doesn’t just tell you whether someone is likely to be liable, but whether they are more likely to be liable than not: a key consideration in civil litigation. If a party is liable, the probability that there is evidence is high, so what matters is the probability that the same evidence would exist if the party were not liable. Useful evidence is, therefore, evidence that is unlikely to exist for a party that is not liable.


By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Here I describe analysis by myself and colleagues Albesë Demjaha and David Pym at UCL, which originally appeared at the STAST workshop in late 2019 (where it was awarded best paper). The work was the basis for a talk I gave at Cambridge Computer Laboratory earlier this week (I thank Alice Hutchings and the Security Group for hosting the talk, as it was also an opportunity to consider this work alongside themes raised in our recent eCrime 2019 paper).

Secure behaviour in organisations

Both research and practice have shown that the security behaviours encapsulated in organisational policy and advice may not be adopted by employees. Employees may not see how the advice applies to them, may find it difficult to follow, or may regard the expectations as unrealistic. As a consequence, employees may create their own alternative behaviours in an effort to approximate secure working (rather than abandoning security altogether). Organisational support can then be critical to whether secure practices persist. Economics principles can be applied to explain how complex systems such as these behave the way they do, and so here we focus on informing an overarching goal to:

Provide better support for ‘good enough’ security-related decisions, by individuals within an organization, that best approximate secure behaviours under constraints, such as limited time or knowledge.

Traditional economics assumes decision-makers are rational, and that they are equipped with the capabilities and resources to make the decision which will be most beneficial for them. However, people have reasons, motivations, and goals when deciding to do something — whether they do it well or badly, they do engage in thinking and reasoning when making a decision. We must capture how the decision-making process looks for the employee, as a bounded agent with limited resources and knowledge to make the best choice. This process is more realistically represented in behavioural economics. And yet, behaviour intervention programmes mix elements of both of these areas of economics. It is by considering these principles in tandem that we explore a more constructive approach to decision-support in organisations.

Contradictions in current practice

A bounded agent often settles for a satisfactory decision, by satisficing rather than optimising. For example, the agent can turn to ‘rules of thumb’ and make ad-hoc decisions, based on a quick evaluation of perceived probability, costs, gains, and losses. We can already imagine how these restrictions may play out in a busy workplace. This leads us toward identifying those points of engagement at which employees ought to be supported, in order to avoid poor choices.


Resolving disputes through computer evidence: lessons from the Post Office Trial

On Monday, the final judgement in the Post Office trial was handed down, finding in favour of the claimants on all counts. The outcome will be of particular interest to the group of 587 claimants who brought the case against Post Office Limited, but the judgement also illustrates problems in handling evidence generated by computers that have much broader applicability. I think this trial demonstrates that the way such disputes are resolved is not fit for purpose and that changes are needed both in how computers generate evidence and in how such evidence is reasoned about in litigation.

This case centres around disputes between Post Office Limited and sub-postmasters who operate Post Office branches on its behalf. Post Office Limited supplies these sub-postmasters with products to sell, and the computer accounting system – Horizon – for managing the branch. The claimants contend that shortfalls between the money that was in their branch and what Horizon says should be there result from bugs in Horizon or someone maliciously accessing it. The Post Office instead claims that the shortfalls are real, and that it is the responsibility of the sub-postmaster to reimburse the Post Office.

Such disputes have resulted in sub-postmasters being bankrupted, and others have even been jailed because the Post Office contends that evidence produced by Horizon demonstrates fraud by the sub-postmaster. The judgement vindicates the sub-postmasters, concluding that Horizon “was not remotely robust”.

This trial is actually the second in this case, with the prior one also finding in favour of the sub-postmasters – that the contractual terms set by Post Office regarding how they investigate and handle shortfalls are unfair. There would have been at least two more trials, had the parties not settled last week with Post Office Limited offering an apology and £58m in compensation. Of this, the vast majority will go towards legal costs and the fund which bankrolled the litigation – leaving claimants lucky to get much more than £10k each on average. Disappointing, sure, but better than nothing, which is what they could have ended up with had the trials and inevitable appeals continued.

As would be expected for a trial depending on highly technical arguments, expert evidence featured heavily. The Post Office expert took a quantitative approach, presenting a statistical argument that the claimants’ losses were implausibly high. This argument proceeded by making a rough approximation of the total losses of all sub-postmasters resulting from bugs in Horizon. Then, by assuming that these losses were spread over all sub-postmasters equally, the losses of the 587 claimants would be no more than £25,000 – far less than the £18.7 million claimed. On this basis, the Post Office said that it is implausible for Horizon bugs to be the cause of the losses, and that they are instead the fault of the affected sub-postmasters.

This argument is fundamentally flawed; I said so at the time, as did others. The claimant group was selected specifically as people who thought they were victims of Horizon bugs so it’s quite reasonable to think this group might indeed be disproportionally affected by Horizon bugs. The judge agreed, saying, “The group has a bias, in statistical terms. They plainly cannot be treated, in statistical terms, as though they are a random group of 587 [sub-postmasters]”. This error can be corrected, but the argument becomes circular and a statistical approach adds little new information. As the judgement concludes, “probability theory only takes one so far in this case, and that is not very far”.
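The selection effect can be made concrete with a toy calculation. All the numbers below (population size, number affected, loss per person) are made up purely for illustration and are not the trial’s actual figures:

```python
# All figures are hypothetical, chosen only to illustrate the selection effect;
# they are not the actual numbers from the Post Office trial.
population = 11_500           # assumed total number of sub-postmasters
affected = 1_000              # assumed minority actually hit by Horizon bugs
loss_per_affected = 10_000.0  # assumed loss suffered by each affected person
group_size = 587              # size of the claimant group (as in the trial)

total_bug_losses = affected * loss_per_affected

# The expert's "spread equally" step: average bug losses over *everyone*,
# then scale by the size of the claimant group.
evenly_spread_estimate = total_bug_losses / population * group_size

# The selection effect the judge identified: claimants joined *because* they
# believe bugs caused their shortfalls, so (in this extreme illustration)
# every claimant is drawn from the affected minority.
self_selected_loss = group_size * loss_per_affected

print(f"Evenly-spread estimate:   £{evenly_spread_estimate:,.0f}")
print(f"Self-selected group loss: £{self_selected_loss:,.0f}")
```

Under these assumptions, the evenly-spread estimate understates the self-selected group’s losses by more than a factor of ten – the shape of the error the judge identified when noting the claimants cannot be treated as a random group.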


We’re fighting the good fight, but are we making full use of the armoury?

In this post, we reflect on the current state of cybersecurity and the fight against cybercrime, and identify what we believe is one of the most significant drawbacks facing Information Security. We argue that what is needed is a new, complementary research direction towards improving systems security and cybercrime mitigation, one which combines the technical knowledge and insights gained from Information Security with the theoretical models and systematic frameworks of Environmental Criminology. For the full details, you can read our paper – “Bridging Information Security and Environmental Criminology Research to Better Mitigate Cybercrime.”

The fight against cybercrime is a long and arduous one. Not a day goes by without us hearing (at an increasingly alarming rate) the latest flurry of cyber attacks, malware operations, (not so) newly discovered vulnerabilities being exploited, and the odd sprinkling of a high-profile victim or a widely-used service being compromised by cybercriminals.

A burden borne for too long?

Today, the topic of security and cybercrime is prominent in a number of circles and fields of research (e.g., crime science and criminology, law, sociology, economics, policy, policing), not to mention wider society. However, for the best part of the last half-century, the burden of understanding and mitigating cybercrime, and of improving systems security, has been borne predominantly by information security researchers and computer engineers. Of course, this is entirely reasonable. The exponential growth in the reach and capability of digital technologies brought with it the opportunity for malicious exploitation and, alongside it, the need to combat and prevent such malicious activities. Enter the arms race.

However, and potentially the biggest downside to holding this solitary responsibility for so long, the traditional, InfoSec approach to security and cybercrime prevention has leaned heavily towards the technical side of this mantle: discovering vulnerabilities, creating patches, redefining secure software design (e.g., STRIDE), conceptualising threat models for technical systems, and developing technologies to detect, prevent, and/or counter these threats. But, with the threat landscape of today, is this enough?

Taking stock

Make no mistake, the technical skill-sets and innovations that information security produces are invaluable in keeping up with similarly skilled and innovative cybercriminals. Unfortunately, however, such approaches to security and preventing cybercrime are generally applied in an ad hoc manner and lack systematic structure. Focus is constantly drawn towards the “top” vulnerabilities (e.g., OWASP’s Top 10) rather than “less important” ones (which are just as capable of enabling a compromise), and towards the most recent wave of cyber threats rather than those from only a few years ago (e.g., the Mirai botnet and its variants, active as far back as 2016 but seemingly now on the back burner of priorities).

How much thought, we might ask, is being directed towards understanding the operational aspects of cybercrime – the journey of the cybercriminal, so to speak, and their opportunity framework? Patching vulnerabilities and taking down botnets are indeed important, but how much attention is placed on understanding criminal displacement and adaptation: the shift of criminal activity from one form to another, or the adaptation of cybercriminals (and even the victims, targets, and other stakeholders) in reaction to new countermeasures? Are system designers taking the necessary steps to minimise attack surfaces effectively, considering all the techniques available to them? Is it enough to take a problem at face value, develop a state-of-the-art detection system, and move on to the next one? We believe much more can and should be done.


UK Parliament on protecting consumers from economic crime

On Friday, the UK House of Commons Treasury Committee published their report on the consumer perspective of economic crime. I’ve frequently addressed this topic in my research, as well as here on Bentham’s Gaze, so I’m pleased to see several recommendations of the committee match what my colleagues and I have proposed. In other respects, the report could have gone further, so as well as discussing the positive aspects of the report, I would also like to suggest what more could be done to reduce economic crime and protect its victims.

Irrevocable payments are the wrong default

Transfers between UK bank accounts will generally use the Faster Payment System (FPS), where money will immediately show up in the recipient account. FPS transfers cannot be revoked, even in the case of fraud. This characteristic protects banks because if fraudulently obtained funds leave the banking system, the bank receiving the transfer has no obligation to reimburse the victim.

In contrast, the clearing system for paper cheques permits payments to be revoked for a few days after the funds appear in the recipient account, should there have been a fraud. This period allows customers to quickly make use of funds they receive, while still giving a window of opportunity for banks and customers to identify and prevent fraud. There’s no reason why this same revocation window could not be applied to fully electronic payment systems like FPS.

In my submissions to consultations on how to prevent push payment scams, I argued that irrevocable payments are the wrong default, and transfers should be possible to reverse in cases of fraud. The same argument applies to consumer-oriented cryptocurrencies like Libra. I’m pleased to see that the Treasury Committee agrees and they have recommended that when a customer sends money to an account for the first time, that transfer be revocable for 24 hours.

Introducing Confirmation of Payee, finally

The banking industry has been planning to launch the Confirmation of Payee system, which checks whether the name of the recipient of a transfer matches what the customer sending money thinks it is. The committee is clearly frustrated with delays in deploying this system, first promised for September 2018 but since slipped to March 2020. Confirmation of Payee will be a helpful tool for customers to avoid certain frauds. Still, I’m pleased the committee also recognise its limitations and that the “onus will always be on financial firms to develop further methods and technologies to keep up with fraudsters.” It is for this reason that I argued that a bank showing a customer a Confirmation of Payee mismatch should not be a sufficient condition to hold customers liable for fraud, and that the push-payment scam reimbursement scheme is wrong to do so. It doesn’t look like the committee is asking for this situation to be changed, though.


Forcing phone companies to secure SMS authentication would cause more harm than good

Food-writer and campaigner, Jack Monroe, has become the latest high-profile victim of a SIM-swap scam, losing over £5,000 from both her PayPal and bank accounts to a criminal who intercepted SMS authentication codes. The Payment Services Directive requires that fraud victims get their money back, but banks act slowly and sometimes push the blame onto the victims. When (as I hope it will) the money does eventually get reimbursed, she’s still unlikely to get compensation for any consequential losses, nor for the upset caused. It’s no surprise that this experience has been stressful for Jack, as it would be for most people in her situation.

I am, of course, very sympathetic to victims of SIM-swap fraud and recognise the substantial financial costs, as well as the sense of violation that results. Naturally, fingers are being pointed at the phone companies, followed by calls for them to do better identity checks before transferring a phone number to a new SIM card. I think this isn’t entirely fair. The real problem is that banks and other payment service providers have outsourced authentication to phone companies, without ensuring that the level of security is appropriate for the sums of money at risk. Banks could have chosen to distribute authentication devices and find a secure way to re-issue ones that are lost. Instead, they have pushed this task onto unwitting phone companies and left their customers to pick up the pieces when things go wrong, so they have little incentive to do better.

More secure SMS authentication

But what if phone companies did do a better job of handing out replacement SIM cards? Maybe the government could push them into doing so, or the phone companies might just get fed up with the bad press. Phone companies could, in principle, set up a process for re-issuing SIM cards which would meet the highest standards of the banking industry. Let’s put aside the issues that SMS was never designed to be secure and that these processes would push up the cost of phone bills – would it fix the problem? I would argue that it would not. Processes good enough for banking authentication could lock people out of receiving phone calls, and would disproportionately harm the most vulnerable members of society.


Confirmation of Payee is coming, but will it protect bank customers from fraud?

The Payment Systems Regulator (PSR) has just announced that the UK’s six largest banks must check whether the name of the recipient of a transfer matches what the sender thinks. This new feature should help address a security loophole in online payments: the name of the recipient of a transfer is currently ignored, contrary to expectations and unlike cheques. This improved security should make some fraud more difficult, but banks must be prevented from exploiting the change to unfairly shift liability for the remaining crime onto the victims.

The PSR’s target is for checks to be fully implemented by March 2020, somewhat later than their initial promise to Parliament of September 2018 and subsequent target of July 2019. The new proposal, known as Confirmation of Payee, also only covers the six largest banking groups, but this should cover 90% of transfers. Its goal is to defend against criminals who trick victims into transferring funds under the false pretence that the money is going to the victim’s new account, whereas it is really going to the criminal. The losses from such fraud, known as push payment scams, are often life-changing, resulting in misery for the victims.

Checks on the recipient name will make this particular scam harder, so while unlikely to prevent all types of push payment scams they will hopefully force criminals to adopt strategies that are easier to prevent. The risk that consumer representatives and regulators will need to watch out for is that these new security measures could result in victims being unfairly held liable. This scenario is, unfortunately, likely because the voluntary consumer protection code for push payment scams excuses the bank from liability if they show the customer a Confirmation of Payee warning.

Warning fatigue and misaligned incentives

In my response to the consultation over this consumer protection code, I raised the issue of “warning fatigue” – customers are shown many irrelevant warnings while they do online banking, and this reduces the likelihood that they will notice important ones. Even Confirmation of Payee warnings will frequently be wrong, such as when the recipient’s bank account is under a different name to the one the sender expects. If the two names are very dissimilar, the sender won’t be given more details, but if the name entered is close to the name in bank records, the sender should be told what the correct one is and asked to compare.
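As an illustration only – the real Confirmation of Payee matching rules are set by the banking industry and are not public in this form, so the similarity measure and threshold below are assumptions – the behaviour described above might be sketched as:

```python
# Illustrative sketch of Confirmation of Payee-style name matching.
# The similarity measure (difflib) and the 0.8 threshold are assumptions,
# not the scheme actually used by banks.
from difflib import SequenceMatcher

CLOSE_MATCH = 0.8  # assumed threshold for "close but not exact"

def confirmation_of_payee(entered_name: str, account_name: str) -> str:
    """Return the message a sender might see for a given entered name."""
    entered, actual = entered_name.lower(), account_name.lower()
    if entered == actual:
        return "Name matches."
    similarity = SequenceMatcher(None, entered, actual).ratio()
    if similarity >= CLOSE_MATCH:
        # Close match: reveal the registered name so the sender can compare.
        return f"Name is similar but not exact. Did you mean '{account_name}'?"
    # Very dissimilar: reveal nothing, to avoid leaking account holders' names.
    return "Name does not match the account. Check the details with the payee."

print(confirmation_of_payee("Jon Smith", "John Smith"))
print(confirmation_of_payee("Acme Ltd", "John Smith"))
```

Note the design tension baked into even this toy version: revealing the registered name on a close match helps honest senders, but the "very dissimilar" branch must stay silent or it would let anyone probe account holders’ identities.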


Will dispute resolution be Libra’s Achilles’ heel?

Facebook’s new cryptocurrency, Libra, has the ambitious goal of being the “financial infrastructure that empowers billions of people”. This aspiration will only be achievable if the user experience (UX) of Libra and associated technologies is competitive with existing payment channels. Now, Facebook has an excellent track record of building high-quality websites and mobile applications, but good UX goes further than just having an aesthetically pleasing and fast user interface. We can already see aspects of Libra’s design that will have consequences for the experience of its users making payments.

For example, the basket of assets that underlies the Libra currency should ensure that its value is not too volatile in terms of the currencies represented within the reserve, so easing international payments. However, Libra’s value will fluctuate against every other currency, creating a challenge for domestic payments. People won’t be paid their salary in Libra any time soon, nor will rents be denominated in Libra. If the public is expected to hold significant value in Libra, fluctuations in the currency markets could make the difference between someone being able to pay their rent or not – certainly an unwelcome user experience.

Whether the public will consider the advantages of Libra worth the exposure to the foibles of market fluctuations is an open question, but in this post, I’m mostly going to discuss the consequences of another design decision baked into Libra: that transactions are irrevocable. Once a transaction is accepted by the validator network, the user may proceed “knowing that the transaction can never be changed or reversed”. This is a common design decision within cryptocurrencies because it ensures that companies, governments and regulators are unable to revoke payments they dislike. When coupled with anonymity or decentralisation, to prevent blacklisted transactions from being blocked beforehand, irrevocability creates a censorship-resistant payment system.

Mitigating the cost of irrevocable transactions

Libra isn’t decentralised, nor is it anonymous, so it is unlikely to be particularly resistant to censorship on matters where there is international consensus. Irrevocability does, however, make fraud easier because once stolen funds are gone, they cannot be reinstated, even if the fraud is identified. Other cryptocurrencies share Libra’s irrevocability (at least in theory), but they are designed for technically sophisticated users, and their risk of theft can be balanced against the potentially substantial gains (and losses) that can be made from volatile cryptocurrencies. While irrevocability is common within cryptocurrencies, it is not within the broader payments industry. Exposing billions of people to the risk of their Libra holdings being stolen, without the potential for recourse, isn’t good UX. I’ve argued that irrevocable transactions protect the interests of financial institutions over those of the public, and are the wrong default for payments. Eventually, public pressure and regulatory intervention forced UK banks to revoke fraudulent transactions, and banks now take on the risk when they are unable to do so, rather than passing it on to the victims. The same argument applies to Libra, and if fraud becomes common, it will face the same pressures as UK banks.
