The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

Evidence produced by a computer is often used in court, so presumptions must necessarily be made about the correct operation of the computer that produced it. At present, following a 1997 paper by the Law Commission, a computer is presumed to have operated correctly unless there is explicit evidence to the contrary.

The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.

This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.

LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s reliance on Colin Tapper’s 1991 statement that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to contain bugs (including as-yet-undiscovered ones), but the occurrence of a bug may not be noticeable.
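
To see why scale matters here, a back-of-the-envelope calculation helps; both figures below are illustrative assumptions, not data from any case.

```python
# Illustrative arithmetic only: even demonstrably "very reliable"
# software produces a steady stream of failures at scale.
failure_rate = 1e-6                  # assumed probability a transaction is mishandled
transactions_per_year = 100_000_000  # assumed volume across a large deployment

expected_failures = failure_rate * transactions_per_year
print(expected_failures)  # 100.0 incorrect results a year
```

A failure rate that testing could plausibly certify still leaves a hundred incorrect results a year, and nothing guarantees that any of them is “immediately detectable”.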

LLTT put forward three recommendations. First, a presumption that any particular computer system failure is not caused by software is not justified, even for software that has previously been shown to be very reliable. Second, evidence of previous computer failure undermines a presumption of current proper functioning. Third, the fact that a class of failures has not happened before is not a reason for assuming it cannot occur.

Continue reading The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

Transparency, evidence and dispute resolution

Despite the ubiquity of computers in everyday life, resolving a dispute regarding the misuse or malfunction of a system remains hard to do well. A recent example of this is the now-concluded Post Office trial about the dispute between Post Office Limited and the subpostmasters who operate some Post Office branches on its behalf.

Subpostmasters offer more than postal services: savings accounts, payment facilities, identity verification, professional accreditation, and lottery services. These services can involve large amounts of money, and subpostmasters were held liable for losses at their branch. The issue is that the accounting is done by Horizon, a centralised accounting system operated by Post Office Limited, and subpostmasters claim that their losses result not from errors or fraud on their part but from malfunctions of, or malicious access to, Horizon.

This case is interesting not only because of its scale (a settlement agreement worth close to £58 million was reached) but also because it highlights the difficulty in reasoning about issues related to computer systems in court. The case motivated us to write a short paper presented at the Security Protocols Workshop earlier this year – “Transparency Enhancing Technologies to Make Security Protocols Work for Humans”. This work focused on how the liability of a party could be determined when something goes wrong, i.e., whether a customer is a victim of a flaw in the service provider’s system or whether the customer has tried to defraud the service provider.

Applying Bayesian thinking to dispute resolution

An intuitive way of thinking about this problem is to apply Bayesian reasoning. Jaynes makes a good argument that any logically consistent form of reasoning will lead to this approach. Following it, we can consider the odds form of Bayes’ theorem, expressed in the following way.

$$\frac{\Pr(\text{liable} \mid \text{evidence})}{\Pr(\text{not liable} \mid \text{evidence})} = \frac{\Pr(\text{evidence} \mid \text{liable})}{\Pr(\text{evidence} \mid \text{not liable})} \times \frac{\Pr(\text{liable})}{\Pr(\text{not liable})}$$

There is a good reason for considering the odds form of Bayes’ theorem over its standard form: it doesn’t just tell you if someone is likely to be liable, but whether they are more likely to be liable than not, a key consideration in civil litigation. If a party is liable, the probability that there is evidence is high, so what matters is the probability that the same evidence would exist if the party were not liable. Useful evidence is, therefore, evidence that is unlikely to exist for a party that is not liable.
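
To make this concrete, here is a minimal sketch in Python; the probabilities are illustrative assumptions, not figures from any real dispute.

```python
# Odds form of Bayes' theorem: posterior odds = likelihood ratio * prior odds.
# All probabilities below are illustrative assumptions.

def posterior_odds(prior_odds: float,
                   p_evidence_given_liable: float,
                   p_evidence_given_not_liable: float) -> float:
    likelihood_ratio = p_evidence_given_liable / p_evidence_given_not_liable
    return likelihood_ratio * prior_odds

# Even prior odds (1:1). The evidence would appear for 90% of liable
# parties, but also for 30% of non-liable ones (e.g., a shortfall that
# a system bug could equally well produce).
print(posterior_odds(1.0, 0.9, 0.3))   # 3.0: liable is 3x as likely as not

# Evidence that rarely exists for non-liable parties is far stronger.
print(posterior_odds(1.0, 0.9, 0.01))  # 90.0
```

The likelihood ratio does all the work: the less likely the evidence is to exist for a non-liable party, the more the posterior odds shift towards liability.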

Continue reading Transparency, evidence and dispute resolution

Thoughts on the Libra blockchain: too centralised, not private, and won’t help the unbanked

Facebook recently announced a new project, Libra, whose mission is to be “a simple global currency and financial infrastructure that empowers billions of people”. The announcement has predictably been met with scepticism by organisations like Privacy International, regulators in the U.S. and Europe, and the media at large. This is wholly justified given the look of the project’s website, which features claims of poverty reduction, job creation, and more generally empowering billions of people, wrapped in a dubious marketing package.

To start off, there is the (at least for now) permissioned aspect of the system. One appealing aspect of cryptocurrencies is their potential for decentralisation and censorship resistance. It wasn’t uncommon to see the story of PayPal freezing WikiLeaks’ account in the first few slides of a cryptocurrency talk, motivating its purpose. Now, PayPal and other well-known providers of payment services are the ones operating nodes in Libra.

There is some valid criticism to be made of the permissioned aspect of a system that describes itself as a public good when other cryptocurrencies are permissionless. In practice, however, permissionless cryptocurrencies are essentially centralised, with inefficient, energy-wasting mechanisms like Proof-of-Work requiring large investments from any party wishing to contribute.

There is a roadmap towards decentralisation, but it is vague. Achieving decentralisation, whether at the network or governance level, hasn’t been done even in a priori decentralised cryptocurrencies. In this sense, Libra hasn’t really done worse so far. It already involves more members than there are important Bitcoin or Ethereum miners, for example, and those members are also more diverse. However, this is more of a fault in existing cryptocurrencies than a quality of Libra.

Continue reading Thoughts on the Libra blockchain: too centralised, not private, and won’t help the unbanked

Improving the auditability of access to data requests

Data is increasingly collected and shared, with potential benefits for both individuals and society as a whole, but people cannot always be confident that their data will be shared and used appropriately. Decisions made with the help of sensitive data can greatly affect lives, so there is a need for ways to hold data processors accountable. This requires not only ways to audit these data processors, but also ways to verify that the reported results of an audit are accurate, while protecting the privacy of individuals whose data is involved.

We (Alexander Hicks, Vasilios Mavroudis, Mustafa Al-Bassam, Sarah Meiklejohn and Steven Murdoch) present a system, VAMS, that allows individuals to check accesses to their sensitive personal data, enables auditors to detect violations of policy, and allows publicly verifiable and privacy-preserving statistics to be published. VAMS has been implemented twice, as a permissioned distributed ledger using Hyperledger Fabric and as a verifiable log-backed map using Trillian. The paper and the code are available.
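
As a rough illustration of the append-only, tamper-evident log at the heart of such a system, here is a toy sketch in Python. It is not the VAMS implementation, which builds on Hyperledger Fabric and Trillian; all names and record fields are hypothetical.

```python
# Toy sketch of a tamper-evident, append-only log of access requests.
# Each entry is hash-chained to the previous head, so altering any
# earlier entry invalidates every subsequent digest.
import hashlib
import json

class AuditLog:
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32  # genesis hash

    def append(self, record: dict) -> str:
        payload = json.dumps(record, sort_keys=True).encode()
        self.head = hashlib.sha256(self.head + payload).digest()
        self.entries.append((record, self.head.hex()))
        return self.head.hex()

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        head = b"\x00" * 32
        for record, digest in self.entries:
            payload = json.dumps(record, sort_keys=True).encode()
            head = hashlib.sha256(head + payload).digest()
            if head.hex() != digest:
                return False
        return True

log = AuditLog()
log.append({"requester": "agency-42", "subject": "user-7", "purpose": "investigation"})
assert log.verify()
```

An auditor who holds the current head can detect any retroactive change to the recorded requests, which is the property that makes the reported audit statistics verifiable.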

Use cases and setting

Our work is motivated by two scenarios: controlling the access of law-enforcement personnel to communication records and controlling the access of healthcare professionals to medical data.

The UK Home Office states that 95% of serious and organised crime cases make use of communications data. Annual reports published by the IOCCO (now under the IPCO name) provide some information about the request and use of communications data. There were over 750,000 requests for data in 2016, a portion of which were audited to produce the usage statistics and error figures that can be found in the published report.

Not only is it important that requests are auditable; the requested data can also be used as evidence in legal proceedings. In this case, it is necessary either to ensure the integrity of the data or to rely on representatives of data providers and expert witnesses, the latter being more expensive and requiring trust in third parties.

In the healthcare case, individuals usually consent for their GP or any medical professional they interact with to have access to relevant medical records, but may have concerns about the way their information is then used or shared. The NHS regularly shares data with researchers or companies like DeepMind, sometimes in ways that may reduce the trust levels of individuals, despite the potential benefits to healthcare.

Continue reading Improving the auditability of access to data requests

Incentives in Security Protocols

The 2018 edition of the International Security Protocols Workshop took place last week. The theme this year was “fail-safe and fail-deadly concepts in protocol design”.

One common theme at this year’s workshop was that of threat models and incentives, covered by the majority of the accepted papers. One of these was our (Sarah Azouvi, Alexander Hicks and Steven Murdoch) submission – Incentives in Security Protocols. The aim of the paper is to discuss how incentives can be considered and incorporated into the security of systems. In line with the given theme, the focus is on fail-safe and fail-deadly cases, which we examine for the EMV protocol, consensus in cryptocurrencies, and non-economic systems such as Tor. This post summarises the main ideas laid out in the paper.

Fail safe, fail deadly and people

Systems can fail, and system designers must give some thought to accounting for these failures. From this comes the idea of fail-safe protocols: even if the protocol fails, the failure can be dealt with, or the protocol can be aborted, to limit the damage. The fail-deadly setting extends this idea: failure is defended against through deterrence, as in the case of nuclear deterrence (sometimes a realistic case).

Human input often plays a role in the use of a system, particularly when decisions are required, as in fail-safe and fail-deadly instances. These decisions are made according to incentives, which can be aligned to make the system robust to failure. For a fail-deadly alignment, this means that a person in a position to prevent system failure will be harmed by the failure. In the fail-safe case, innocent parties should be protected from the consequences of system failure. The two concepts are really two sides of the same coin: one that assigns liability.

It is often said that people are the weakest link in security, but that is an easy excuse for broken protocols. If security incentives are aligned properly, then humans are the strongest link.

The EMV protocol, adding incentives after the fact

As a first example, we consider the case of the EMV protocol, which is used for the majority of smart card payments worldwide, as well as smartphone and card-based contactless payments. Over the years, many vulnerabilities have been identified and removed. Fraud still exists, however, due not to unexpected protocol vulnerabilities but to decisions made by banks (e.g., omitting the ability for cards to produce digital signatures), merchants (e.g., omitting PIN verification) and payment networks (e.g., not sending transaction details back to banks). These are intentional choices, aiming to save costs and cut transaction times, but they make fraud harder to detect.

Continue reading Incentives in Security Protocols

Smart Contracts and Bribes

We propose smart contracts that allow a wealthy adversary to rent existing hashing power and attack Nakamoto-style consensus protocols. Our bribery smart contracts highlight:

  • The use of Ethereum’s uncle block reward to directly subsidise a bribery attack,
  • The first history-revision attack requiring no trust between the briber and the bribed miners,
  • The first realisation of a Goldfinger attack, using a contract that rewards miners in one cryptocurrency (e.g., Ethereum) for reducing the utility of another (e.g., Bitcoin).

This post provides an overview of the full paper (by Patrick McCorry, Alexander Hicks and Sarah Meiklejohn) which will be presented at the 5th Workshop on Bitcoin and Blockchain Research, held at this year’s Financial Cryptography and Data Security conference.

What is a bribery attack?

Fundamentally, a wealthy adversary (let’s call her Alice) wishes to manipulate the blockchain in some way: for example, by censoring transactions, revising the blockchain’s history, or reducing the utility of another blockchain.

But purchasing hardware up front and competing with existing miners is discouragingly expensive, and may require a Boeing or two. Instead, it may be easier and more cost-effective for Alice to temporarily rent hashing power and obtain a majority of the network’s hash rate before performing the attack.
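
A back-of-the-envelope comparison shows why renting is attractive. Every figure below is a rough assumption for illustration, not market data from the paper.

```python
# Rough cost comparison: buying hardware versus renting hash power for
# a short attack window. All figures are illustrative assumptions.

ATTACK_HOURS = 6                   # window during which a majority is needed
NETWORK_HASHRATE = 25e18           # total network hash rate, hashes/s (assumed)
HARDWARE_COST_PER_HS = 2e-11       # dollars per hash/s of purchased capacity (assumed)
RENTAL_COST_PER_HS_HOUR = 2.5e-14  # dollars per hash/s per hour rented (assumed)

# To out-mine everyone else, the attacker needs slightly more hash
# power than the rest of the network combined.
needed = NETWORK_HASHRATE * 1.01

buy_cost = needed * HARDWARE_COST_PER_HS
rent_cost = needed * RENTAL_COST_PER_HS_HOUR * ATTACK_HOURS

print(f"buy:  ${buy_cost:,.0f}")   # ~$505,000,000 up front, before electricity
print(f"rent: ${rent_cost:,.0f}")  # ~$3,787,500 for the attack window only
```

Under these assumptions, renting for the attack window costs orders of magnitude less than buying the hardware outright, and the rented capacity can simply be returned afterwards.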

Continue reading Smart Contracts and Bribes

EPFL blockchain summer school

This year EPFL hosted a Blockchain Summer School from the 21st to the 24th of June. UCL was well represented with Sarah Meiklejohn presenting two talks whilst Sarah Azouvi, Patrick McCorry, Mustafa Al-Bassam and Alexander Hicks also attended. This blog post is a joint effort from the four of us, aimed at highlighting the talks presented last week.

Patrick, Sarah, Sarah, Mustafa, Rebekah (UCL alumna) and Alex. Credit: Emin Gün Sirer

The Summer School featured talks on several aspects of blockchain technology, ranging from classical distributed computing to the security of smart contracts in Ethereum and proofs of security for proof-of-work and proof-of-stake. Here, we will provide a small summary for each of the talks. Slides can be found by clicking on each talk on the school’s program page.

TLS-N: Non-repudiation over TLS Enabling Ubiquitous Content Signing for Disintermediation by Arthur Gervais: Gervais’ talk highlights that a slight modification to TLS can allow a smart contract to verify the authenticity of data received from a website. Essentially, at the end of the TLS session, the server signs evidence of the session if requested by the client. This evidence is then verified and stored by the smart contract. It is also worth mentioning that the protocol relies on redactable signatures, which ensure private data isn’t revealed.

Town Crier: An Authenticated Data Feed for Smart Contracts – Ari Juels: Juels’ talk highlights that trusted execution environments can be leveraged to build authenticated data feeds. The trusted hardware is responsible for setting up an HTTPS session and fetching data from a website before sending the data to the smart contract. Town Crier is currently implemented using Intel SGX and has been released for testing.
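
Both designs share a common pattern: a party the contract is willing to trust (the TLS server in TLS-N, the SGX enclave in Town Crier) signs the fetched data, and the verifier checks the signature against a known public key. Here is a minimal sketch of that flow in Python for illustration; neither protocol works exactly like this (TLS-N additionally uses redactable signatures, and a real contract would run the verification on-chain), and the response content is made up.

```python
# Toy sketch of an authenticated data feed: the data source signs a
# digest of what it served, and the consumer verifies before use.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

source_key = Ed25519PrivateKey.generate()  # held by the server / enclave

# Source side: sign a hash of the response when evidence is requested.
response = b"GBP/USD=1.27 @ 2017-06-24T12:00Z"
digest = hashlib.sha256(response).digest()
signature = source_key.sign(digest)

# Consumer side: verify against the source's known public key.
# verify() raises cryptography.exceptions.InvalidSignature on tampering.
source_key.public_key().verify(signature, hashlib.sha256(response).digest())
print("data feed verified")
```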

It is also worth mentioning that Juels provided a beautifully succinct definition of a smart contract:

“A smart contract is a trusted third party with public state.”

This is one of the reasons why cryptography and smart contracts are a great combination. The contract can ensure the cryptography is faithfully executed, whereas the cryptography can provide integrity and confidentiality for data used by the contract.

Continue reading EPFL blockchain summer school