Winkle – Decentralised Checkpointing for Proof-of-Stake

Several blockchain projects are considering proof-of-stake mechanisms in place of proof-of-work, attracted by the lower energy costs. Some proof-of-stake protocols based on BFT systems, such as HotStuff or Tendermint, appear to provide faster and deterministic finality. In these protocols, a set of nodes known as validators, each identified by a public key, operates the consensus protocol; any user can verify its output using only publicly available information by checking the validators’ signatures. The set of validators changes periodically, according to a specific governance mechanism.

However, like any consensus protocol that is not based on resource consumption (such as proof-of-work, proof-of-space and so on), these protocols are vulnerable to an attack known in the literature as a Long-Range Attack. In a Long-Range Attack, an adversary obtains the secret keys of past validators (e.g., by bribing them, which costs them nothing since they no longer use these keys) and is thus able to rewrite the entire history of the blockchain. A user who has been offline for a long period of time could then be fooled by the adversarial chain.

[Figure: the number of keys holding a given fraction of stake (logarithmic scale).]

To solve this problem, we propose Winkle, a decentralised checkpointing mechanism operated by coin holders, whose keys are harder to compromise than the validators’ because they are far more numerous. For comparison, in Bitcoin, taking control of one-third of the total supply of money would require compromising at least 889 keys, whereas only 4 mining pools control more than half of the hash power (see figure above).

Our Protocol

The idea of Winkle is that coin holders checkpoint the honest chain, so that if an adversary creates an alternative chain, that chain will not be checkpointed (since the adversary does not control the coin holders’ keys) and can thus easily be distinguished from the honest chain.
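To give a feel for how this could work in code – a minimal sketch only, not the actual Winkle protocol; the single-signature checkpoints, data structures and fork-choice rule below are all simplifying assumptions – a client could prefer the fork backed by checkpoints from the most stake:

    # Illustrative sketch only; not the Winkle protocol itself.
    # A "checkpoint" is a coin holder's signed vote for a block hash;
    # signature verification is abstracted away here.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Checkpoint:
        block_hash: str
        holder: str   # coin holder's public key (simplified to a string)
        stake: int    # stake attributed to that holder

    def checkpointed_stake(chain_block_hashes, checkpoints):
        """Total stake whose checkpoints reference blocks on this chain."""
        on_chain = set(chain_block_hashes)
        counted_holders = set()
        total = 0
        for cp in checkpoints:
            if cp.block_hash in on_chain and cp.holder not in counted_holders:
                counted_holders.add(cp.holder)
                total += cp.stake
        return total

    def choose_chain(candidate_chains, checkpoints):
        """Prefer the chain backed by the most checkpointed stake."""
        return max(candidate_chains,
                   key=lambda chain: checkpointed_stake(chain, checkpoints))

An adversary mounting a long-range attack cannot reproduce these checkpoints, since it does not hold the coin holders’ keys, so under such a rule its fork loses to the honest chain.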

Continue reading Winkle – Decentralised Checkpointing for Proof-of-Stake

The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

As evidence produced by a computer is often used in court cases, there are necessarily presumptions about the correct operation of the computer that produces it. At present, based on a 1997 paper by the Law Commission, it is assumed that a computer operated correctly unless there is explicit evidence to the contrary.

The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.

This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.

LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s reliance on Colin Tapper’s 1991 statement that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to have bugs (including undiscovered ones), but the occurrence of a bug may not be noticeable.

LLTT put forward three recommendations. First, a presumption that any particular computer system failure is not caused by software is not justified, even for software that has previously been shown to be very reliable. Second, evidence of previous computer failure undermines a presumption of current proper functioning. Third, the fact that a class of failures has not happened before is not a reason for assuming it cannot occur.

Continue reading The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

Transparency, evidence and dispute resolution

Despite the ubiquity of computers in everyday life, resolving a dispute regarding the misuse or malfunction of a system remains hard to do well. A recent example of this is the now-concluded Post Office trial about the dispute between Post Office Limited and the subpostmasters who operate some Post Office branches on its behalf.

Subpostmasters offer more than postal services, including savings accounts, payment facilities, identity verification, professional accreditation, and lottery services. These services can involve large amounts of money, and subpostmasters were held liable for losses at their branch. The issue is that the accounting is done by Horizon, a centralised accounting system operated by Post Office Limited, and subpostmasters claim that their losses are not the result of errors or fraud on their part but rather of a malfunction of, or malicious access to, Horizon.

This case is interesting not only because of its scale (a settlement agreement worth close to £58 million was reached) but also because it highlights the difficulty in reasoning about issues related to computer systems in court. The case motivated us to write a short paper presented at the Security Protocols Workshop earlier this year – “Transparency Enhancing Technologies to Make Security Protocols Work for Humans”. This work focused on how the liability of a party could be determined when something goes wrong, i.e., whether a customer is a victim of a flaw in the service provider’s system or whether the customer has tried to defraud the service provider.

Applying Bayesian thinking to dispute resolution

An intuitive way of thinking about this problem is to apply Bayesian reasoning. Jaynes makes a good argument that any logically consistent form of reasoning will lead to taking this approach. Following this approach, we can consider the odds form of Bayes’ theorem, expressed in the following way.

Odds form of Bayes’ theorem:

\[ \frac{P(\text{liable} \mid \text{evidence})}{P(\text{not liable} \mid \text{evidence})} = \frac{P(\text{evidence} \mid \text{liable})}{P(\text{evidence} \mid \text{not liable})} \times \frac{P(\text{liable})}{P(\text{not liable})} \]

There is a good reason for considering the odds form of Bayes’ theorem over its standard form – it doesn’t just tell you whether someone is likely to be liable, but whether they are more likely to be liable than not: a key consideration in civil litigation. If a party is liable, the probability that there is evidence is high, so what matters is the probability that the same evidence would exist if the party were not liable. Useful evidence is, therefore, evidence that is unlikely to exist for a party that is not liable.
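As a purely illustrative example (the numbers below are invented and not drawn from any case), a short calculation shows how the odds form behaves:

    # Hypothetical numbers for illustration only.
    p_evidence_given_liable = 0.95      # evidence very likely if the party is liable
    p_evidence_given_not_liable = 0.10  # how often the same evidence appears anyway

    prior_odds = 1.0  # start with even odds (1:1) of liability

    likelihood_ratio = p_evidence_given_liable / p_evidence_given_not_liable
    posterior_odds = prior_odds * likelihood_ratio

    print(f"likelihood ratio: {likelihood_ratio:.1f}")           # 9.5
    print(f"posterior odds of liability: {posterior_odds:.1f}")  # 9.5 : 1

    # If the same evidence were just as likely for a party that is not liable
    # (p_evidence_given_not_liable = 0.95), the ratio would be 1.0 and the
    # evidence would not move the odds at all.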

Continue reading Transparency, evidence and dispute resolution

By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Here I describe analysis by myself and colleagues Albesë Demjaha and David Pym at UCL, which originally appeared at the STAST workshop in late 2019 (where it was awarded best paper). The work was the basis for a talk I gave at Cambridge Computer Laboratory earlier this week (I thank Alice Hutchings and the Security Group for hosting the talk, as it was also an opportunity to consider this work alongside themes raised in our recent eCrime 2019 paper).

Secure behaviour in organisations

Both research and practice have shown that the security behaviours encapsulated in organisational policy and advice may not be adopted by employees. Employees may not see how the advice applies to them, may find it difficult to follow, or may regard the expectations as unrealistic. Employees may, as a consequence, create their own alternative behaviours in an effort to approximate secure working (rather than abandoning security altogether). Organisational support can then be critical to whether secure practices persist. Economics principles can be applied to explain why complex systems such as these behave the way they do, and so here we focus on informing an overarching goal to:

Provide better support for ‘good enough’ security-related decisions, by individuals within an organization, that best approximate secure behaviours under constraints, such as limited time or knowledge.

Traditional economics assumes decision-makers are rational, and that they are equipped with the capabilities and resources to make the decision which will be most beneficial for them. However, people have reasons, motivations, and goals when deciding to do something — whether they do it well or badly, they do engage in thinking and reasoning when making a decision. We must capture how the decision-making process looks for the employee, as a bounded agent with limited resources and knowledge to make the best choice. This process is more realistically represented in behavioural economics. And yet, behaviour intervention programmes mix elements of both of these areas of economics. It is by considering these principles in tandem that we explore a more constructive approach to decision-support in organisations.

Contradictions in current practice

A bounded agent often settles for a satisfactory decision, by satisficing rather than optimising. For example, the agent can turn to ‘rules of thumb’ and make ad-hoc decisions, based on a quick evaluation of perceived probability, costs, gains, and losses. We can already imagine how these restrictions may play out in a busy workplace. This leads us toward identifying those points of engagement at which employees ought to be supported, in order to avoid poor choices.

Continue reading By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Consider unintended harms of cybersecurity controls, as they might harm the people you are trying to protect

Well-meaning cybersecurity risk owners will deploy countermeasures in an effort to manage the risks they see affecting their services or systems. What is not often considered is that those countermeasures may produce unintended, negative consequences themselves. These unintended consequences can potentially be harmful, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including services of others).

Here, I describe a framework co-developed with several international researchers at a Dagstuhl seminar in mid-2019, resulting in an eCrime 2019 paper later in the year. We were drawn together by an interest in understanding unintended harms of cybersecurity countermeasures, and encouraging efforts to preemptively identify and avoid these harms. Our collaboration on this theme drew on our varied and multidisciplinary backgrounds and interests, including not only risk management and cybercrime, but also security usability, systems engineering, and security economics.

We saw it as necessary to focus on situations where there is often an urgency to counter threats, but where efforts to manage threats have the potential to introduce harms. As documented in the recently published seminar report, we explored specific situations in which potential harms may make resolving the overarching problems more difficult, and as such cannot be ignored – especially where potentially harmful countermeasures ought to be avoided. Example case studies of particular importance include tech-abuse by an intimate partner, online disinformation campaigns, combating CEO fraud and phishing emails in organisations, and online dating fraud.

Consider disinformation campaigns, for example. Efforts to counter disinformation on social media platforms can include fact-checking and automated detection algorithms behind the scenes. These can reduce the burden on users to address the problem. However, automation can also reduce users’ scepticism towards the information they see; fact-checking can be appropriated as a tool by any one group to challenge viewpoints of dissimilar groups.

We then see how unintended harms can shift the burden of managing cybersecurity to others in the ecosystem without them necessarily expecting it or being prepared for it. There can be vulnerable populations which are disadvantaged by the effects of a control more than others. An example may be legitimate users of social media who are removed – or have their content removed – from a platform because of traits or behaviour they share with malicious actors, e.g., referring to some of the same topics, irrespective of sentiment – an example of ‘Misclassification’ in the list below. If a user, user group, or their online activity is removed from the system, the risk owner for that system may not notice that problems have been created for those users – they simply will not see them, as the control has excluded them. Anticipating and avoiding unintended harms is then crucial before any such outcomes can occur.

Continue reading Consider unintended harms of cybersecurity controls, as they might harm the people you are trying to protect

Resolving disputes through computer evidence: lessons from the Post Office Trial

On Monday, the final judgement in the Post Office trial was handed down, finding in favour of the claimants on all counts. The outcome will be of particular interest to the group of 587 claimants who brought the case against Post Office Limited, but the judgement also illustrates problems in handling evidence generated by computers that have much broader applicability. I think this trial demonstrates that the way such disputes are resolved is not fit for purpose and that changes are needed both in how computers generate evidence and in how such evidence is reasoned about in litigation.

This case centres around disputes between Post Office Limited and sub-postmasters who operate Post Office branches on its behalf. Post Office Limited supplies these sub-postmasters with products to sell, and with the computer accounting system – Horizon – for managing the branch. The claimants contend that shortfalls between the money that was in their branch and what Horizon said should be there result from bugs in Horizon or from someone maliciously accessing it. The Post Office instead claims that the shortfalls are real, and that it is the responsibility of the sub-postmaster to reimburse the Post Office.

Such disputes have resulted in sub-postmasters being bankrupted, and others have even been jailed because the Post Office contends that evidence produced by Horizon demonstrates fraud by the sub-postmaster. The judgement vindicates the sub-postmasters, concluding that Horizon “was not remotely robust”.

This trial is actually the second in this case, with the prior one also finding in favour of the sub-postmasters – that the contractual terms set by Post Office regarding how it investigates and handles shortfalls are unfair. There would have been at least two more trials, had the parties not settled last week with Post Office Limited offering an apology and £58m in compensation. Of this, the vast majority will go towards legal costs and to the fund which bankrolled the litigation – leaving claimants lucky to get much more than £10k each on average. Disappointing, sure, but better than the nothing they could have ended up with had the trials and inevitable appeals continued.

As would be expected for a trial depending on highly technical arguments, expert evidence featured heavily. The Post Office expert took a quantitative approach, presenting a statistical argument that the claimants’ losses were implausibly high. The argument started with a rough approximation of the total losses of all sub-postmasters resulting from bugs in Horizon. Then, by assuming that these losses were spread over all sub-postmasters equally, the losses of the 587 claimants would be no more than £25,000 – far less than the £18.7 million claimed. On this basis, the Post Office said that it is implausible for Horizon bugs to be the cause of the losses, and that instead they are the fault of the affected sub-postmasters.

This argument is fundamentally flawed; I said so at the time, as did others. The claimant group was selected specifically as people who thought they were victims of Horizon bugs, so it’s quite reasonable to think this group might indeed be disproportionately affected by Horizon bugs. The judge agreed, saying, “The group has a bias, in statistical terms. They plainly cannot be treated, in statistical terms, as though they are a random group of 587 [sub-postmasters]”. This error can be corrected, but then the argument becomes circular and a statistical approach adds little new information. As the judgement concludes, “probability theory only takes one so far in this case, and that is not very far”.
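A small, purely illustrative simulation makes the selection effect concrete (every number below is invented and bears no relation to the real figures in the case): if bug-induced losses are concentrated in a minority of branches and the claimants come from exactly those branches, the equal-spread estimate is far too low.

    import random

    random.seed(0)

    # Invented numbers for illustration only.
    n_branches = 10_000          # hypothetical total number of branches
    n_affected = 600             # hypothetical branches actually hit by bugs
    loss_per_affected = 5_000.0  # hypothetical loss at each affected branch

    losses = [0.0] * n_branches
    for i in random.sample(range(n_branches), n_affected):
        losses[i] = loss_per_affected

    total_loss = sum(losses)

    # Equal-spread reasoning: divide the total loss evenly over all branches,
    # then attribute 587 branches' worth of the average to the claimants.
    equal_spread_estimate = total_loss / n_branches * 587

    # Selection effect: claimants are drawn from the affected branches,
    # so their combined loss is far larger than the equal-spread estimate.
    actual_claimant_loss = sum(sorted(losses, reverse=True)[:587])

    print(f"equal-spread estimate for 587 claimants: £{equal_spread_estimate:,.0f}")
    print(f"loss if claimants come from affected branches: £{actual_claimant_loss:,.0f}")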

Continue reading Resolving disputes through computer evidence: lessons from the Post Office Trial

We’re fighting the good fight, but are we making full use of the armoury?

In this post, we reflect on the current state of cybersecurity and the fight against cybercrime, and identify, we believe, one of the most significant drawbacks Information Security is facing. We argue that what is needed is a new, complementary research direction towards improving systems security and cybercrime mitigation, which combines the technical knowledge and insights gained from Information Security with the theoretical models and systematic frameworks from Environmental Criminology. For the full details, you can read our paper – “Bridging Information Security and Environmental Criminology Research to Better Mitigate Cybercrime.”

The fight against cybercrime is a long and arduous one. Not a day goes by without us hearing (at an increasingly alarming rate) the latest flurry of cyber attacks, malware operations, (not so) newly discovered vulnerabilities being exploited, and the odd sprinkling of a high-profile victim or a widely-used service being compromised by cybercriminals.

A burden borne for too long?

Today, the topic of security and cybercrime is prominent in a number of circles and fields of research (e.g., crime science and criminology, law, sociology, economics, policy, policing), not to mention wider society. However, for the best part of the last half-century, the burden of understanding and mitigating cybercrime, and of improving systems security, has been borne predominantly by information security researchers and computer engineers. Of course, this is entirely reasonable. The exponential spread and growing capability of digital technologies brought with them the opportunity for malicious exploitation and, alongside it, the need to combat and prevent such malicious activities. Enter the arms race.

However – and this is potentially the biggest downside of holding this solitary responsibility for so long – the traditional InfoSec approach to security and cybercrime prevention has leaned heavily towards the technical side of this mantle: discovering vulnerabilities, creating patches, redefining secure software design (e.g., STRIDE), conceptualising threat models for technical systems, and developing technologies to detect, prevent, and/or counter these threats. But, with the threat landscape of today, is this enough?

Taking stock

Make no mistake, the technical skill-sets and innovations produced by information security are invaluable in keeping up with similarly skilled and innovative cybercriminals. Unfortunately, however, such approaches to security and cybercrime prevention are generally applied in an ad hoc manner and lack systematic structure, with focus constantly drawn towards the “top” vulnerabilities (e.g., OWASP’s Top 10) as opposed to “less important” ones (which are just as capable of enabling a compromise), or towards the most recent wave of cyber threats as opposed to those from only a few years ago (e.g., the Mirai botnet and its variants, which have been active as far back as 2016, but are seemingly now on the back burner of priorities).

How much thought, can we say, is being directed towards understanding the operational aspects of cybercrime – the journey of the cybercriminal, so to speak, and their opportunity framework? Patching vulnerabilities and taking down botnets are indeed important, but how much attention is placed on understanding criminal displacement and adaptation: the shift of criminal activity from one form to another, or the adaptation of cybercriminals (and even the victims, targets, and other stakeholders) in reaction to new countermeasures? Are system designers taking the necessary steps to minimise attack surfaces effectively, considering all the techniques available to them? Is it enough to take a problem at face value, develop a state-of-the-art detection system, and move on to the next one? We believe much more can and should be done.

Continue reading We’re fighting the good fight, but are we making full use of the armoury?

UK Parliament on protecting consumers from economic crime

On Friday, the UK House of Commons Treasury Committee published their report on the consumer perspective of economic crime. I’ve frequently addressed this topic in my research, as well as here on Bentham’s Gaze, so I’m pleased to see that several of the committee’s recommendations match what my colleagues and I have proposed. In other respects, the report could have gone further, so as well as discussing the positive aspects of the report, I would also like to suggest what more could be done to reduce economic crime and protect its victims.

Irrevocable payments are the wrong default

Transfers between UK bank accounts will generally use the Faster Payment System (FPS), where money will immediately show up in the recipient account. FPS transfers cannot be revoked, even in the case of fraud. This characteristic protects banks because if fraudulently obtained funds leave the banking system, the bank receiving the transfer has no obligation to reimburse the victim.

In contrast, the clearing system for paper cheques permits payments to be revoked for a few days after the funds appear in the recipient’s account, should there have been fraud. This period allows customers to make use of funds they receive quickly, while still giving a window of opportunity for banks and customers to identify and prevent fraud. There’s no reason why the same revocation window could not be applied to fully electronic payment systems like FPS.

In my submissions to consultations on how to prevent push payment scams, I argued that irrevocable payments are the wrong default, and that it should be possible to reverse transfers in cases of fraud. The same argument applies to consumer-oriented cryptocurrencies like Libra. I’m pleased to see that the Treasury Committee agrees and has recommended that when a customer sends money to an account for the first time, that transfer be revocable for 24 hours.
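As a rough sketch of how such a rule could work – a hypothetical illustration, not a description of FPS or of the committee’s precise proposal – a payment to a first-time payee would simply remain revocable until a 24-hour window has elapsed:

    from datetime import datetime, timedelta

    REVOCATION_WINDOW = timedelta(hours=24)  # hypothetical window for first-time payees

    class Payment:
        def __init__(self, sender, recipient, amount, first_time_payee):
            self.sender = sender
            self.recipient = recipient
            self.amount = amount
            self.first_time_payee = first_time_payee
            self.sent_at = datetime.utcnow()
            self.revoked = False

        def is_revocable(self, now=None):
            """A first-time payment stays revocable until the window elapses."""
            now = now or datetime.utcnow()
            return (self.first_time_payee
                    and not self.revoked
                    and now - self.sent_at < REVOCATION_WINDOW)

        def revoke(self, now=None):
            """Reverse the payment if fraud is reported within the window."""
            if not self.is_revocable(now):
                raise ValueError("payment can no longer be revoked")
            self.revoked = True

The recipient would still see the money arrive immediately; the point is only that, during the window, the bank retains a way to claw the payment back if fraud is reported – the same trade-off the cheque clearing system already makes.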

Introducing Confirmation of Payee, finally

The banking industry has been planning to launch the Confirmation of Payee system, which checks whether the name of the recipient of a transfer matches what the customer sending the money thinks it is. The committee is clearly frustrated with delays in deploying this system, first promised for September 2018 but since slipped to March 2020. Confirmation of Payee will be a helpful tool for customers to help avoid certain frauds. Still, I’m pleased the committee also recognises its limitations and that the “onus will always be on financial firms to develop further methods and technologies to keep up with fraudsters.” It is for this reason that I argued that a bank showing a customer a Confirmation of Payee mismatch should not be a sufficient condition to hold the customer liable for fraud, and that the push-payment scam reimbursement scheme is wrong to do so. It doesn’t look like the committee is asking for this situation to be changed, though.
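To illustrate the kind of check Confirmation of Payee performs – a deliberately simplified sketch, since the real scheme’s matching rules and response categories are more involved – the sending bank asks whether the name supplied by the customer matches the account holder’s name, and warns the customer before the money is sent:

    from difflib import SequenceMatcher

    def confirmation_of_payee(entered_name: str, account_holder_name: str) -> str:
        """Return a simplified outcome: 'match', 'close match' or 'no match'."""
        a = entered_name.strip().lower()
        b = account_holder_name.strip().lower()
        if a == b:
            return "match"
        similarity = SequenceMatcher(None, a, b).ratio()
        return "close match" if similarity > 0.8 else "no match"

    # The sender sees the outcome before confirming the transfer:
    print(confirmation_of_payee("J Smith Plumbing Ltd", "J Smith Plumbing Limited"))  # close match
    print(confirmation_of_payee("J Smith Plumbing Ltd", "A N Other"))                 # no match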

Continue reading UK Parliament on protecting consumers from economic crime

Forcing phone companies to secure SMS authentication would cause more harm than good

Food writer and campaigner Jack Monroe has become the latest high-profile victim of a SIM-swap scam, losing over £5,000 from both her PayPal and bank accounts to a criminal who intercepted SMS authentication codes. The Payment Services Directive requires that fraud victims get their money back, but banks act slowly and sometimes push the blame onto the victims. When (as I hope it will) the money does eventually get reimbursed, she’s still unlikely to get compensation for any consequential losses, nor for the upset caused. It’s no surprise that this experience has been stressful for Jack, as it would be for most people in her situation.

I am, of course, very sympathetic to victims of SIM-swap fraud and recognise the substantial financial costs, as well as the sense of violation that results. Naturally, fingers are being pointed at the phone companies, followed by calls for them to do better identity checks before transferring a phone number to a new SIM card. I think this isn’t entirely fair. The real problem is that banks and other payment service providers have outsourced authentication to phone companies without ensuring that the level of security is appropriate for the sums of money at risk. Banks could have chosen to distribute authentication devices and to find a secure way to re-issue ones that are lost. Instead, they have pushed this task onto unwitting phone companies and leave their customers to pick up the pieces when things go wrong, so they have little incentive to do better.

More secure SMS authentication

But what if phone companies did do a better job of handing out replacement SIM cards? Maybe the government could push them into doing so, or the phone companies might just get fed up with the bad press. Phone companies could, in principle, set up a process for re-issuing SIM cards which would meet the highest standards of the banking industry. Let’s put aside the issues that SMS was never designed to be secure and that these processes would put up the cost of phone bills – would it fix the problem? I would argue that it would not. Processes good enough for banking authentication could lock people out of receiving phone calls, and would disproportionately harm the most vulnerable members of society.

Continue reading Forcing phone companies to secure SMS authentication would cause more harm than good

A Marlin is One of the Fastest SNARKs in the Ocean

In this post, we discuss our new zero-knowledge proving system, Marlin, by Chiesa, Hu, Maller, Mishra, Vesely, and Ward. This year has been the year of the universal SNARK, with Sonic, Libra, and Plonk all bidding for attention. Marlin is yet another competitor, one which we recommend using when you require fast verification time without the use of batching.

Why Universal SNARKs?

A universal SNARK is a proving system in which a single trusted setup suffices to prove anything that we know how to prove. That means the same setup can be used across all applications, and the parameters can be stored in a general-purpose library. Additionally, these universal SNARKs typically have setup procedures that are relatively easy to coordinate, which makes it easier to convince users that the procedure has been carried out correctly and securely.

Some SNARKs avoid setup procedures altogether. Such works include Spartan, Halo, and Hyrax. However, the cost of avoiding a trusted setup can generally be seen in the proof sizes and verification time.

Marlin or Sonic?

In this author’s humble opinion, Sonic is fabulous. The proofs are small, the provers are fast, and the verification is fast provided one is verifying many proofs at the same time. For applications that use batched verification, Sonic currently remains the state of the art. Cryptocurrency transactions are a classic example of this – nodes can verify all the transactions in a new block simultaneously (provided the miner aggregates the transactions). However, this setting in which Sonic excels, i.e. the setting in which the verifier is given not just a single proof but many, many proofs of the same thing, is not always a given. For an example where Sonic’s batched proofs would not suffice, consider a randomness beacon: verification of the beacon’s outputs is only done once in a while, so it is a setting where batching is totally inappropriate.
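A purely illustrative cost model (the constants below are invented, not measured figures for Sonic or Marlin) shows why batching matters: a fixed expensive verification step can only be amortised when many proofs are checked together.

    # Invented constants for illustration; not benchmarks of Sonic or Marlin.
    BATCH_FIXED_COST_MS = 50.0   # expensive one-off work shared by a whole batch
    BATCH_PER_PROOF_MS = 0.5     # cheap per-proof work within a batch
    SINGLE_PROOF_COST_MS = 5.0   # cost of verifying one proof on its own

    def batched_cost_per_proof(n_proofs: int) -> float:
        """Average verification cost per proof when n_proofs are batched."""
        return (BATCH_FIXED_COST_MS + BATCH_PER_PROOF_MS * n_proofs) / n_proofs

    for n in (1, 10, 1000):
        print(f"batch of {n:>4}: {batched_cost_per_proof(n):6.2f} ms/proof "
              f"(vs {SINGLE_PROOF_COST_MS} ms unbatched)")

    # A block full of transactions (large n) amortises the fixed cost away;
    # a randomness beacon output verified on its own (n = 1) pays it in full.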

Continue reading A Marlin is One of the Fastest SNARKs in the Ocean