Evidence Critical Systems: Designing for Dispute Resolution

On Friday, 39 subpostmasters had their criminal convictions overturned by the Court of Appeal. These individuals ran post office branches and were prosecuted for theft, fraud and false accounting based on evidence from Horizon, the Post Office computer system developed by Fujitsu. The Post Office, which mounted these prosecutions, asserted that Horizon’s evidence was reliable, and for decades the courts accepted it as proof. Only through a long and expensive court case did a true record of Horizon’s problems become publicly known, with the judge concluding that it was “not remotely reliable”, paving the way for these successful appeals against conviction.

The 39 quashed convictions are only the tip of the iceberg. More than 900 subpostmasters were prosecuted based on evidence from Horizon, and many more were forced to reimburse the Post Office for losses that might never have existed. It could be the largest miscarriage of justice the UK has ever seen, and at the centre is the Horizon computer system. The causes of this failure are complex, but one of the most critical is that neither the Post Office nor Fujitsu disclosed the information necessary to establish the reliability (or lack thereof) of Horizon to subpostmasters disputing its evidence. Their reasons for not doing so include that it would be expensive to collect the information, that the details of the system are confidential, and that disclosing the information would harm their ability to conduct future prosecutions.

The judgment quashing the convictions had harsh words about this failure of disclosure, but that doesn’t change the fact that over 900 prosecutions took place before the problem was identified. There could easily have been more. Similar questions have been raised in relation to payment disputes: when a customer claims to be the victim of fraud but the bank says it’s the customer’s fault, could a computer failure be the cause? Both the Post Office and the banking industry rely on the legal presumption in England and Wales that computers operate correctly. The burden of showing otherwise falls on the subpostmaster or banking customer.

This presumption can and should be changed, and there should be more robust enforcement of the principle that organisations disclose all relevant information they hold, even if it might harm their case. However, that isn’t enough. Organisations might not have the information needed to show whether their computer systems are reliable (and may even choose not to collect it, in case it discredits their position). The information might be expensive to assemble, and so they might argue that disclosure is not justified. In some cases, publicly revealing details about the functioning of a system could assist criminals, giving organisations yet another reason (or excuse) not to disclose relevant information. For all these reasons, there will be resistance to changing the presumption that computers operate correctly.

I believe that we need a new way to build systems that must produce information to help resolve high-stakes disputes: evidence-critical systems. The analogy to safety-critical systems is deliberate – a malfunction of a safety-critical system can lead to serious harm to individuals or equipment, while the failure of an evidence-critical system to produce accurate and interpretable information that can be disclosed could lead to the loss of significant sums of money or of an individual’s liberty. Well-designed evidence-critical systems can resolve disputes quickly, cost-effectively and with confidence, removing the impediments to disclosure and so allowing a change in the presumption that computers are operating correctly.

We already know how to build safety-critical systems, but doing so is expensive, and it would not be realistic to apply these standards to all systems. The good news is that evidence-critical engineering is easier than safety-critical engineering in several important ways. While a safety-critical system must continue working, an evidence-critical system can stop when an error is detected. Safety-critical systems must also meet tight response-time requirements, whereas an evidence-critical system can involve manual interpretation to resolve difficult situations. Also, only some parts of a system will be critical for resolving disputes; other parts of the system can be left unchanged. Evidence-critical systems do, however, need to work even when some individuals are acting maliciously, unlike many safety-critical systems.
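To make the fail-stop and tamper-evidence properties concrete, here is a minimal sketch of one plausible reusable component: a hash-chained, append-only event log that refuses to accept further entries once an integrity check fails. This is my own illustration of a well-known building block, not a design taken from the discussion above; the EvidenceLog class and its method names are hypothetical, and a real system would need much more (reliable timestamps, publication of the head digest to an external witness, and signatures identifying who recorded each entry).

    import hashlib
    import json

    class EvidenceLog:
        """Append-only log in which each entry commits to its predecessor.

        A verifier holding only the latest digest can detect modification,
        deletion or reordering of earlier entries. On detecting an
        inconsistency the log halts (fail-stop) rather than continuing to
        produce untrustworthy evidence.
        """

        GENESIS = hashlib.sha256(b"genesis").hexdigest()

        def __init__(self):
            self.entries = []          # list of (record_json, chained_digest)
            self.head = self.GENESIS   # digest committing to the whole history
            self.halted = False

        def append(self, record: dict) -> str:
            """Record an event; returns the new head digest."""
            if self.halted:
                raise RuntimeError("log halted after integrity failure")
            payload = json.dumps(record, sort_keys=True)
            digest = hashlib.sha256((self.head + payload).encode()).hexdigest()
            self.entries.append((payload, digest))
            self.head = digest
            return digest

        def verify(self) -> bool:
            """Recompute the chain; halt if any entry has been tampered with."""
            running = self.GENESIS
            for payload, stored in self.entries:
                running = hashlib.sha256((running + payload).encode()).hexdigest()
                if running != stored:
                    self.halted = True   # fail-stop: refuse further appends
                    return False
            return running == self.head

    # A branch terminal records transactions; an auditor later verifies them.
    log = EvidenceLog()
    log.append({"branch": "X1", "event": "sale", "amount_pence": 1250})
    log.append({"branch": "X1", "event": "refund", "amount_pence": -1250})
    assert log.verify()

    # Tampering with history is detected, and the log stops accepting entries.
    payload, digest = log.entries[0]
    log.entries[0] = (payload.replace("1250", "9999"), digest)
    assert not log.verify()

The key design choice illustrated here is that integrity failures are loud and terminal: a chain that fails verification is never silently repaired or written past, which is precisely what distinguishes an evidence-critical log from an ordinary database journal.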

I would welcome discussion on what we should expect from evidence-critical systems. What requirements should they meet? How can these be verified? What re-usable components are needed to make evidence-critical systems engineering cost-effective? Some of my initial thoughts are in my presentation at the Security and Human Behavior workshop (starts at 10 minutes). Leave your comments below or join the discussion on Twitter.

Photo by Volodymyr Hryshchenko on Unsplash.

5 thoughts on “Evidence Critical Systems: Designing for Dispute Resolution”

    1. Thanks for your comment, and also for your interesting posts on the Post Office legal team’s behaviour. There’s certainly a lot that can be learned from audit practices in terms of how to design systems. Ian Henderson has done a great job in very difficult circumstances of teasing out what has gone wrong with Horizon. Of course, audit can also go horribly wrong, as it did at Enron. I’d like to go further in scope than just an audit working out what has happened: how do we ensure the right information is available to the auditor, how can this information be made safe to disclose outside of the organisation, and how do we convince a third party? James Christie has also commented on this topic.

  1. Boldly assuming that a consensus can be found on the answers to these questions, they leave open some important further ones: what difference would there be between the legal treatment of systems that are certified (how?) and recertified after every change (presumably), and all the rest; would the reliability presumption apply to certified systems; and would evidence derived from other systems have to be proved reliable, case by case, by those seeking to rely on it?

    The answers to these further questions might have some bearing on how the original questions are to be answered.

      1. Your point is well taken. Whom can we trust to certify the certifiers?

        If software is only to be trusted to the extent that it demonstrates that it is trustworthy, this places a burden on the judges to engage with the experts in understanding what has really been demonstrated. Not all judges are the equal of Mr Justice Fraser (although a few more may now have come to realise this after recent events).
