Justice for victims of bank fraud – learning from the Post Office trial

This week in London, a trial is being held over a dispute between the Justice for Subpostmasters Alliance (JFSA) and the Post Office, but the result will have far-reaching repercussions for anyone disputing computer evidence. The trial currently focuses on whether the legal agreements and processes set up by the Post Office are a fair basis for managing its relationship with the subpostmasters who operate branches on its behalf. Later, the court will assess whether the fact that the Post Office computer system – Horizon – indicates that a subpostmaster is in debt to the Post Office is sufficient evidence that the subpostmaster is indeed liable to repay the debt, even when the subpostmaster claims the accounts are incorrect due to computer error or fraud.

Disputes over Horizon have led to subpostmasters being bankrupted, losing their homes, or even being jailed, but these cases also echo the broader issues at the heart of the many phantom-withdrawal disputes I see between banks and their customers. Customers claim that money was taken from their accounts without their permission. The bank claims that its computer system shows that the customer either authorised the withdrawal or was grossly negligent, and so the customer is liable. The customer may also claim that the bank’s handling of the dispute is poor and that the contract with the bank protects the bank’s interests more than those of the customer, and so is an unfair basis for managing disputes.

The Post Office trial holds several lessons for the victims of phantom withdrawals, particularly in cases of push payment fraud, but in this post I’m going to explore why these issues are being dealt with first in a trial initiated by subpostmasters rather than by the (far more numerous) bank customers. In later posts, I’ll look more closely at the specific details being disclosed as a result of this trial.


Coconut E-Petition Implementation

An interesting new multi-authority selective-disclosure credential scheme called Coconut was recently released, and it has the potential to enable applications that were not possible before. Selective-disclosure credential systems allow an authority to issue a credential (containing one or more attributes) to a user, who can later unlinkably reveal or “show” that credential for the purposes of authentication or authorization. The user can also “show” specific attributes of the credential, or a specific function of the attributes embedded in it; e.g. if a user is issued an identification credential with an attribute x representing their age, say x = 25, they can show that f(x) > 21 without revealing x.

Figure: High-level overview of Coconut, from the Coconut paper.
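Coconut achieves such predicate showings with zero-knowledge proofs over pairing-based signatures, which are well beyond a short sketch. As a much simpler illustration of the underlying idea – proving that a hidden attribute meets a threshold without revealing it – here is a classic hash-chain construction in Python. This is a toy, not Coconut’s mechanism: the issuer sees the attribute, repeated showings are linkable, and the HMAC merely stands in for the issuer’s digital signature, all of which Coconut does better.

```python
import hashlib
import hmac
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def hash_chain(seed: bytes, n: int) -> bytes:
    """Apply the hash n times: H^n(seed)."""
    out = seed
    for _ in range(n):
        out = h(out)
    return out

# Issuer: certifies H^age(seed) without publishing the age itself.
# (The HMAC stands in for a public-key signature over the anchor.)
ISSUER_KEY = os.urandom(32)

def issue_age_credential(age: int):
    seed = os.urandom(32)              # kept secret by the user
    anchor = hash_chain(seed, age)     # H^age(seed)
    tag = hmac.new(ISSUER_KEY, anchor, hashlib.sha256).digest()
    return seed, anchor, tag

# User: to show age >= threshold, release H^(age - threshold)(seed).
# The verifier cannot walk the chain backwards, so it learns nothing
# beyond the fact that the threshold is met.
def prove_at_least(seed: bytes, age: int, threshold: int) -> bytes:
    assert age >= threshold
    return hash_chain(seed, age - threshold)

# Verifier: hashing the proof `threshold` more times must reach the
# certified anchor.
def verify_at_least(proof: bytes, threshold: int,
                    anchor: bytes, tag: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, anchor, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False                   # anchor not certified by the issuer
    return hash_chain(proof, threshold) == anchor

seed, anchor, tag = issue_age_credential(age=25)
proof = prove_at_least(seed, age=25, threshold=21)
print(verify_at_least(proof, 21, anchor, tag))   # True
```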

A particular use case for this scheme is to create a privacy-preserving e-petition system. A number of anonymous electronic petition systems are currently being developed, but all lack important security properties: (i) unlinkability – the ability to break the link between users and the specific petitions they signed, and (ii) multi-authority – the absence of a single trusted third party in the system. Through multi-authority selective-disclosure credentials like Coconut, these systems can achieve unlinkability without relying on a single trusted third party. For example, if there are 100 eligible users with a valid credential, and there are a total of 75 signatures when the petition closes, it is not possible to know which 75 of the 100 users actually signed the petition.
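To give a flavour of how one-signature-per-petition can coexist with unlinkability, here is a deliberately simplified Python sketch. A user derives a deterministic token from their credential secret and the petition identifier: the same user always produces the same token for a given petition, so double-signing is caught, but tokens from different petitions cannot be linked without the secret. The real Coconut construction computes this token in a pairing group and additionally proves in zero knowledge that the secret is embedded in a valid credential, a step this toy omits entirely.

```python
import hashlib

def petition_token(user_secret: bytes, petition_id: bytes) -> bytes:
    """Per-petition pseudonym: deterministic for one (user, petition)
    pair, unlinkable across petitions without knowing the secret."""
    return hashlib.sha256(user_secret + b"|" + petition_id).digest()

signed_tokens = set()   # tokens already recorded for this petition

def sign_petition(user_secret: bytes, petition_id: bytes) -> bool:
    token = petition_token(user_secret, petition_id)
    if token in signed_tokens:
        return False    # same credential already signed this petition
    signed_tokens.add(token)
    return True

alice, bob = b"alice-secret", b"bob-secret"
print(sign_petition(alice, b"petition-42"))  # True (counted)
print(sign_petition(alice, b"petition-42"))  # False (double-sign rejected)
print(sign_petition(bob, b"petition-42"))    # True (counted)
```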


UK Faster Payment System Prompts Changes to Fraud Regulation

Banking transactions are rapidly moving online, offering convenience to customers and allowing banks to close branches and re-focus on marketing more profitable financial products. At the same time, new payment methods, like the UK’s Faster Payment System, make transactions irrevocable within hours, not days, and so let recipients make use of funds immediately.

However, these changes have also created a new opportunity for fraud schemes that trick victims into performing a transaction under false pretences. For example, a criminal might call a bank customer, tell them that their account has been compromised, and help them to transfer money to a supposedly safe account that is actually under the criminal’s control. Losses in the UK from this type of fraud were £145.4 million during the first half of 2018, but, importantly for the public, such frauds fall outside of existing consumer protection rules, leaving the customer liable for sometimes life-changing amounts.

The human cost behind this epidemic has persuaded regulators to do more to protect customers and to create incentives for banks to do a better job at preventing the fraud. These measures are coming sooner than UK Finance – the trade association for UK-based banking, payments and cards businesses – would like, but during questioning by the House of Commons Treasury Committee, their Chief Executive conceded that change is coming. The debate now focuses on who will reimburse customers who have been defrauded through no fault of their own. Who picks up the bill will depend not just on how good fraud prevention measures are, but on how effectively banks can demonstrate this fact.

UK Faster Payment Creates an Opportunity for Social Engineering Attacks

One factor that contributed to the new type of fraud is that online interactions lack the usual cues that help customers tell whether a bank is genuine. Criminals use sophisticated social engineering attacks that create a sense of urgency, combined with information gathered about the customer through illicit means, to convince even diligent victims that it could only be their own bank calling. These techniques, combined with the newly irrevocable payment system, create an ideal situation for criminals.


What We Disclose When We Choose Not To Disclose: Privacy Unraveling Around Explicit HIV Disclosure Fields

For many gay and bisexual men, mobile dating or “hook-up” apps are a regular and important part of their lives. Many of these apps now ask users for HIV status information to create a more open dialogue around sexual health, to reduce the spread of the virus, and to help fight HIV-related stigma. Yet, if a user wants to keep their HIV status private from other app users, this can be more challenging than one might first imagine. While most apps provide users with the choice to keep their status undisclosed through some form of “prefer not to say” option, our recent study, described in a paper being presented today at the ACM Conference on Computer-Supported Cooperative Work and Social Computing 2018, finds that privacy may “unravel” around users who choose this non-disclosure option, which could limit disclosure choice.

Privacy unraveling is a theory developed by Peppet, in which he suggests that people will self-disclose their personal information when doing so is easy, low-cost, and personally beneficial. Privacy may then unravel around those who keep their information undisclosed, as they are assumed to be “hiding” undesirable information, and are stigmatised and penalised as a consequence.

In our study, we explored the online views of Grindr users and found concerns over assumptions developing around HIV non-disclosures. For users who believe themselves to be HIV negative, the personal benefits of disclosing are high and the social costs low. In contrast, for HIV-positive users, the personal benefits of disclosing are low, whilst the costs are high due to the stigma that HIV still attracts. As a result, people may assume that those not disclosing possess the low-gain, high-cost status and are therefore HIV positive.

We developed a series of conceptual designs that utilise Peppet’s proposed limits to privacy unraveling. One of these designs is intended to artificially increase the cost of disclosing an HIV-negative status. We suggest time and money as two resources that could be used to artificially increase disclosure cost. For example, users reporting to be HIV negative could be asked to watch an educational awareness video on HIV prior to disclosing (time), or only those users with a premium subscription could be permitted to disclose their status (financial). An alternative (or complementary) approach is to reduce the high cost of disclosing an HIV-positive status by designing in mechanisms that reduce social stigma around the condition. For example, all users could be offered the option to sign up to “living stigma-free”, which could also appear on their profile to signal their pledge to others.

Another design approach is to create uncertainty over whether users are aware of their own status. We suggest that profiles disclosing an HIV-negative status for more than six months be switched automatically to undisclosed unless the user reports a recent HIV test. This could act as a testing reminder, as well as increasing uncertainty over the reason for non-disclosures. We also suggest increasing uncertainty or ambiguity around HIV status disclosure fields by clustering undisclosed fields together, which may create uncertainty around which particular field the user is concerned about disclosing. Finally, design could be used to cultivate norms around non-disclosures. For example, HIV status disclosure could be limited to HIV-positive users, with non-disclosures then assumed to indicate an HIV-negative status rather than an HIV-positive one.
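As a concrete illustration of the automatic expiry rule above, the logic fits in a few lines. This is a minimal Python sketch with hypothetical field names, not code from the paper or from any app:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical expiry rule: an HIV-negative disclosure reverts to
# "undisclosed" once it is more than roughly six months old, unless a
# recent test has been reported.
EXPIRY = timedelta(days=183)

@dataclass
class Profile:
    hiv_status: str                      # "negative", "positive", "undisclosed"
    last_test_reported: Optional[date]   # None if never reported

def effective_status(profile: Profile, today: date) -> str:
    if profile.hiv_status == "negative":
        stale = (profile.last_test_reported is None
                 or today - profile.last_test_reported > EXPIRY)
        if stale:
            return "undisclosed"         # stale claim reverts, nudging a re-test
    return profile.hiv_status

# A negative status backed by a ten-month-old test is shown as undisclosed.
print(effective_status(Profile("negative", date(2018, 1, 3)),
                       today=date(2018, 11, 3)))   # undisclosed
```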

In our paper, we discuss some of the potential benefits and pitfalls of implementing Peppet’s proposed limits in design, and suggest further work needed to better understand the impact privacy unraveling could have in online social environments like these. We explore ways our community could contribute to building systems that reduce its effect in order to promote disclosure choice around this type of sensitive information.

 

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675730.

Can Ethics Help Restore Internet Freedom and Safety?

Internet services are suffering from various maladies, ranging from algorithmic bias to misinformation and online propaganda. Could computer ethics be a remedy? Mozilla’s head, Mitchell Baker, warns that computer science education without ethics will lead the next generation of technologists to inherit the ethical blind spots of those currently in charge. A number of leaders in the tech industry have lent their support to Mozilla’s Responsible Computer Science Challenge, an initiative to integrate ethics with undergraduate computer science training. There is also heightened interest in the concept of “ethical by design”: the idea of baking ethical principles and human values into the software development process, from design to deployment.

Ethical education and awareness are important, and a number of useful resources already exist. Most computer science practitioners refer to the codes of ethics and conduct provided by the field’s professional bodies, such as the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers, and in the UK the British Computer Society and the Institution of Engineering and Technology. Computer science research is predominantly guided by the principles laid out in the Menlo Report.

But aspirations and reality often diverge, and ethical codes do not directly translate to ethical practice – or, to be precise, the ethical practices of about five companies. The concentration of power among a small number of big companies means that their practices define the online experience of the majority of Internet users. I showed this amplified power in my study of the Web’s differential treatment of users of the Tor anonymity network.

Ethical codes alone are not enough and need to be complemented by suitable enforcement and reinforcement. So who will do the job? Currently, for the most part, companies themselves are the judge and jury of how their practices are regulated. This is not a great idea. The obvious misalignment of incentives is aptly captured in an Urdu proverb meaning “the horse and grass can never be friends”. Self-regulation by companies can result in inconsistent and potentially biased regulation patterns, and/or over-regulation to stay legally safe.
