As evidence produced by a computer is often used in court cases, there are necessarily presumptions about the correct operation of the computer that produced it. At present, based on a 1997 paper by the Law Commission, it is presumed that a computer operated correctly unless there is explicit evidence to the contrary.
The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.
This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.
LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s reliance on Colin Tapper’s 1991 statement that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to have bugs (including undiscovered ones), but the occurrence of a bug may not be noticeable.
LLTT put forward three recommendations. First, a presumption that any particular computer system failure is not caused by software is not justified, even for software that has previously been shown to be very reliable. Second, evidence of previous computer failure undermines a presumption of current proper functioning. Third, the fact that a class of failures has not happened before is not a reason for assuming it cannot occur.
This blog post aims to argue that these recommendations are based on reasoning that focuses on software faults and ignores other important factors, such as the usability of the system and the incentives and power relations of the environment in which the software is used. As a result, the recommendations put forward by LLTT, which appear to bring assumptions about the operation of computers to neutral ground, do not go far enough to achieve this.
Usability, and why errors per line of code is not a good metric for faults
A typical argument to show that software can have faults is based on errors per line of code, and LLTT rely in part on that argument. It is not just buggy code that can cause faults, however; so can usability issues, e.g., bad UX, documentation, or training material. Moreover, even software with very few errors per line of code can have significant usability issues. Error logs will also ignore these, unless specifically configured to log implausible user behaviour.
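To make the logging point concrete, here is a minimal sketch, assuming a hypothetical branch-accounting setting with made-up names and thresholds, of what “logging implausible user behaviour” could mean in practice. The point is that such a check only exists if someone deliberately adds and configures it; by default, an entry caused by a confusing interface looks identical in the logs to a correct one.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("audit")

def record_declared_balance(history, declared, tolerance=0.5):
    """Record a declared balance, warning if it looks implausible.

    'history', 'declared' and 'tolerance' are made-up illustrations,
    not values or names from any real system.
    """
    if history:
        average = sum(history) / len(history)
        # Flag declarations that diverge sharply from the recent average.
        # Without an explicit check like this, the entry is logged (if at
        # all) as a routine user action, indistinguishable from a correct one.
        if average and abs(declared - average) > tolerance * abs(average):
            log.warning("Implausible declaration: %s vs recent average %.2f",
                        declared, average)
    history.append(declared)
    return history

record_declared_balance([1020.0, 995.5, 1010.0], 9950.0)  # triggers a warning
```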
It is important not to forget about errors caused by usability issues because they do not occur in the same way as software faults. They cannot be modelled as, for example, a Poisson process. They are also not treated in the same way. A bug can harm a user if it is not detected, but if it is found that a bug occurred at a certain point then it will be hard to put the blame on the user for the resulting error. An error caused by usability issues, on the other hand, still involves a user making an error. Arguing that it was not their fault is much harder, despite the truth that may lie in such a claim.
This situation is made worse by the fact that those deciding the outcome of a case do not have any experience using the system. Usability concerns are then all the more likely to be ignored, and arguing that usability issues are at fault for a user’s error is unlikely to work. When a system has significant usability issues, this therefore plays to the advantage of the system operator.
Incentives and power structures matter
Cutting across both system errors and usability issues are the incentives of the parties involved, in particular those of the system operator, and the power relations between them.
As the party operating the system, the system operator has an inherent information advantage over users of the system, and has power over them if their ability to use the system depends on the system operator. As the operator, it can also change the system, potentially in ways that would count as tampering.
Despite these advantages, a system operator may try to frame their system as a black box to make it seem neutral rather than something they have significant control over. Paula Vennells showcased this during the UK Parliament’s investigation of the Post Office.
This property, aided by the presumption that computers operate correctly, has an impact on the incentives around the system and its faults. To win a dispute, the system operator needs to show less information than a user (an absence of a logged error rather than evidence of an error), despite having an inherent information advantage over users. Consequently, if the system shows an unwanted outcome (e.g., an accounting discrepancy) then, as long as no error is self-reported by the system, the system operator can place the blame on a user. Moreover, because the system operator has control over the system, they are also the ones who determine how errors are logged.
As the system operator can potentially benefit from errors (as long as they are not publicly revealed) and from the usability issues discussed above, they have little incentive to fix these issues at their own cost. Users, on the other hand, cannot reliably identify faults in the system. Knowing that a system they have no control over and limited information about can in principle detect any attempt at fraud, they have little incentive to attempt fraud and risk their livelihood, as in the subpostmasters’ case.
Finally, there is also an imbalance of power when it comes to the ability of a system operator to go after a user, compared to the ability of said user to defend themselves in court. Not only does the system operator start with better prior information, but they likely also have access to better legal resources. For employees of the system operator handling court cases, this will be part of their job rather than a major source of stress in their lives. Users, on the other hand, face losing vast sums of money and prison sentences, with comparatively few legal resources and effectively no way of definitively proving they are innocent.
Going beyond LLTT’s recommendations
The recommendations put forward by LLTT appear to be based on finding neutral ground, requiring neither that system operators demonstrate the correctness of their system nor that users demonstrate that the system has made an error if they are accused of misbehaviour. The question is then, what is left?
If the software has significant usability issues, which originate with the system operator, a user is still likely to take the blame. The information asymmetry about the system will still allow the system operator to argue about the system better than the user can. If the case rests on whether or not a computer has made a mistake, but neither party can reliably show what happened, bias and erroneous reasoning are likely to creep in. The system operator will also be able to draw on more expensive legal resources.
Given that a system’s power dynamics are in favour of the system operator, who will also be the plaintiff, it seems that they should bear the burden of proof. A suggestion is that evidence from a system indicating that someone is liable should only be acceptable if the system operator can demonstrate that the system would be able to clear someone who is innocent. This approach would balance out some of the system operator’s information advantage.
Beyond this, systems that produce evidence could also be improved. Maintaining records of system events is required by ISO security standards and the like, but what these standards mandate and what is considered to be evidence in court do not always coincide. The robustness of the evidence, in particular, is a concern given that the system operator controls its production. Distributing the production of evidence could change the required trust assumptions for the better. Transparency could also be an effective tool, allowing users to learn more about the system as a whole and to gather evidence from events unrelated to themselves that they currently would not have access to.
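As one illustration of what more robust evidence could look like, the sketch below (Python, hypothetical and deliberately simplified, not a description of any deployed system) shows a hash-chained event log: each record commits to the hash of the previous one, so altering or removing an earlier record invalidates every later hash. Distributing copies of the running hashes to parties other than the operator, including users, would let them detect after-the-fact changes without having to trust the operator’s copy alone.

```python
import hashlib
import json

def append_event(chain, event):
    """Append an event, committing to the hash of the previous record."""
    previous_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"event": event, "prev": previous_hash}, sort_keys=True)
    record = {"event": event, "prev": previous_hash,
              "hash": hashlib.sha256(payload.encode()).hexdigest()}
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash; any edit to history breaks the chain."""
    previous_hash = "genesis"
    for record in chain:
        payload = json.dumps({"event": record["event"], "prev": previous_hash},
                             sort_keys=True)
        if (record["prev"] != previous_hash or
                record["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        previous_hash = record["hash"]
    return True

chain = []
append_event(chain, "transaction recorded")
append_event(chain, "discrepancy reported")
assert verify(chain)
chain[0]["event"] = "transaction deleted"   # tampering with history...
assert not verify(chain)                    # ...is now detectable
```

Even a simple scheme like this shifts some verification power away from the operator; a real deployment would of course also have to settle who holds the copies, how availability is guaranteed, and how user privacy is protected.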