Making sense of EMV card data – how to decode the TLV data format

At the Payment Village in DEFCON 28, I presented a talk about my research in payment system security. While my talks have in the past covered high-level issues or particular security vulnerabilities, for this presentation, I went into depth about the TLV (tag-length-value) data format that anyone researching payment security is going to have to deal with. This format is used for Chip and PIN cards, as specified by the EMV standard, and is present in related standards like contactless and mobile payments. The TLV format used in EMV is also closely related to the ASN.1 format used in HTTPS certificates. There are automated decoders for TLV (the one I wrote is available on EMVLab), but for the purposes of debugging, testing and handling corrupt or incomplete data, it’s sometimes necessary to get your hands dirty and understand the format yourself. In this talk, I show how this can be done.
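To give a flavour of what decoding by hand involves, here is a minimal BER-TLV parser in Python. This is an illustrative sketch only (not the EMVLab decoder, and the function name is mine): it handles multi-byte tags, long-form lengths, and recursion into constructed tags, which covers most EMV records you will meet in practice.

```python
def parse_tlv(data: bytes):
    """Parse a byte string of BER-TLV records (as used in EMV) into a
    list of (tag, value) pairs; constructed tags recurse into nested lists."""
    items = []
    i = 0
    while i < len(data):
        # Skip padding bytes sometimes found between records
        if data[i] in (0x00, 0xFF):
            i += 1
            continue
        # Tag: one byte, unless the low five bits are all 1, in which
        # case the tag continues while the high bit of each byte is set
        tag_start = i
        first = data[i]
        i += 1
        if first & 0x1F == 0x1F:
            while data[i] & 0x80:
                i += 1
            i += 1
        tag = data[tag_start:i]
        # Length: short form (< 0x80), or long form where 0x81/0x82/...
        # gives the number of length bytes that follow
        length = data[i]
        i += 1
        if length & 0x80:
            n = length & 0x7F
            length = int.from_bytes(data[i:i + n], "big")
            i += n
        value = data[i:i + length]
        i += length
        # Bit 0x20 of the first tag byte marks a constructed (nested) value
        if first & 0x20:
            items.append((tag.hex(), parse_tlv(value)))
        else:
            items.append((tag.hex(), value.hex()))
    return items
```

For example, `parse_tlv(bytes.fromhex("6F078405A000000003"))` decodes a constructed FCI template (tag 6F) containing a DF name (tag 84), giving `[("6f", [("84", "a000000003")])]`.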

Rather than the usual PowerPoint, I tried something different for this talk. The slides are an interactive RISE show based on a Jupyter notebook, demonstrating a Python library I wrote for decoding TLV data structures. Everything is in my talk’s GitHub repository, and you can experiment with the notebook and view the slides without installing any software through its Binder. I have an accompanying Sway notebook with the reference guides I relied upon for the talk. Do have a try with this material, and I’d welcome your comments on how well (or badly) this approach works.

The DEFCON Payment Village is running again this year in August. If you’ve got something you would like to share with the community, the call for papers is open until 15 July 2021.

Evidence Critical Systems: Designing for Dispute Resolution

On Friday, 39 subpostmasters had their criminal convictions overturned by the Court of Appeal. These individuals ran post office branches and were prosecuted for theft, fraud and false accounting based on evidence from Horizon, the Post Office computer system created by Fujitsu. Horizon’s evidence was asserted to be reliable by the Post Office, who mounted these prosecutions, and was accepted as proof by the courts for decades. It was only through a long and expensive court case that a true record of Horizon’s problems became publicly known, with the judge concluding that it was “not remotely reliable”, and so allowing these successful appeals against conviction.

The 39 quashed convictions are only the tip of the iceberg. More than 900 subpostmasters were prosecuted based on evidence from Horizon, and many more were forced to reimburse the Post Office for losses that might never have existed. It could be the largest miscarriage of justice the UK has ever seen, and at the centre is the Horizon computer system. The causes of this failure are complex, but one of the most critical is that neither the Post Office nor Fujitsu disclosed the information necessary to establish the reliability (or lack thereof) of Horizon to subpostmasters disputing its evidence. Their reasons for not doing so include that it would be expensive to collect the information, that the details of the system are confidential, and that disclosing the information would harm their ability to conduct future prosecutions.

The judgment quashing the convictions had harsh words about this failure of disclosure, but this doesn’t get away from the fact that over 900 prosecutions took place before the problem was identified. There could easily have been more. Similar questions have been raised relating to payment disputes: when a customer claims to be the victim of fraud but the bank says it’s the customer’s fault, could a computer failure be the cause? Both the Post Office and the banking industry rely on the legal presumption in England and Wales that computers operate correctly. The responsibility for showing otherwise falls on the subpostmaster or banking customer.

Continue reading Evidence Critical Systems: Designing for Dispute Resolution

Aggregatable Distributed Key Generation

We present our work on designing an aggregatable distributed key generation algorithm, which will appear at Eurocrypt 2021.  This is joint work with Kobi Gurkan, Philipp Jovanovic, Mary Maller, Sarah Meiklejohn, Gilad Stern, and Alin Tomescu.

What is a Distributed Key Generation Algorithm?

Ever heard of Shamir’s secret sharing algorithm? It’s a classic. The underlying idea is that it is harder to corrupt many people than to corrupt one person. Shamir’s secret sharing algorithm ensures that you can only learn a secret if multiple people cooperate. In cryptography, we often want to share a secret key so that we can distribute trust. The secret key might be used to decrypt a database, sign a transaction, or compute some pseudo-randomness.
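To make this concrete, here is a toy implementation of Shamir’s k-of-n secret sharing over a prime field. This is an illustrative sketch only: a production implementation would use a field matched to the application and constant-time arithmetic, and the function names here are my own.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for our toy secrets

def split_secret(secret: int, n: int, k: int):
    """Split `secret` into n shares such that any k of them recover it.
    The secret is the constant term of a random degree-(k-1) polynomial;
    each share is a point (x, f(x)) on that polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term,
    i.e. the secret, from any k distinct shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of `den` via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With fewer than k shares, every candidate secret remains equally likely, which is what makes the scheme information-theoretically secure.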

In a secret sharing scheme, there is a trusted dealer who knows the whole secret, shares it out, and then goes offline. This raises the question: why bother to share the secret in the first place if you have a trusted dealer who knows the whole secret? Often the reason is that the secret sharing scheme is merely being used as an ingredient in a larger distributed key generation algorithm in which nobody knows the full secret. This isn’t always true; certainly, there are cases where a central authority might delegate tasks to workers with less authority. But in the case where there is no central authority, we need a more complete solution.

Continue reading Aggregatable Distributed Key Generation

Still treating users as the enemy: entrapment and the escalating nastiness of simulated phishing campaigns

Three years ago, we made the case against phishing your own employees through simulated phishing campaigns. They do little to improve security: click rates tend to be reduced (temporarily) but not to zero – and each remaining click can enable an attack. They also have a hidden cost in terms of productivity – employees have to spend time processing more emails that are not relevant to their work, and then spend more time pondering whether to act on emails. In a recent paper, Melanie Volkamer and colleagues provided a detailed listing of the pros and cons from the perspectives of security, human factors and law. One of the legal risks was finding yourself in court with one of the 600-pound digital enterprise gorillas for trademark infringement – Facebook objected to their trademark and domain being impersonated. They also likely don’t want their brand to be used in attacks because, contrary to what some vendors tell you, being tricked by your employer is not a pleasant experience. Negative emotions experienced with an event often transfer to anyone or anything associated with it – and negative emotions are not what you want associated with your brand if your business depends on keeping billions of users engaging with your services as often as possible.

Recent tactics employed by the providers of phishing campaigns can only be described as entrapment – to “demonstrate” the need for their services, they create messages that almost everyone will click on. Employees of the Chicago Tribune and GoDaddy, for instance, received emails promising bonuses. Employees had their hopes of extra pay raised and then cruelly dashed and, on top of that, were hectored for being careless about phishing. Some employees vented their rage publicly on Twitter, and the companies involved apologised. The negative publicity may eventually be forgotten, but the resentment of employees who feel not only tricked but humiliated and betrayed will not fade any time soon. The increasing nastiness of entrapment has seen employees targeted with promises of COVID vaccinations from their employers – only to find themselves ridiculed for their gullibility instead of lauded for their willingness to help.

Continue reading Still treating users as the enemy: entrapment and the escalating nastiness of simulated phishing campaigns

Thoughts on the Future Implications of Microsoft’s Legal Approach towards the TrickBot Takedown

Just this week, Microsoft announced its takedown operation against the TrickBot botnet, in collaboration with cybersecurity partners such as FS-ISAC, ESET, and Symantec. This takedown followed Microsoft’s successful application for a court order this month, enabling them to enact technical disruption against the botnet. Legal processes like this are typical and necessary precursors to such counter-operations.

However, what was of particular interest in this case was the legal precedent Microsoft (successfully) sought, which was based on breaches of copyright law. Specifically, they founded their claim on the alleged reuse (and misuse) of Microsoft’s copyrighted software – the Windows 8 SDK – by the TrickBot malware authors.

Now, it is clear that this takedown operation is not likely to cripple the entirety of the TrickBot operation. As numerous researchers have found (e.g., Stone-Gross et al., 2011; Edwards et al., 2015), a takedown operation often works well in the short-term, but the long-term effects are highly variable. More often than not, unless they are arrested, and their infrastructure is seized, botnet operators tend to respond to such counter-operations by redeploying their infrastructure to new servers and ISPs, moving their operations to other geographic regions or new targets, and/or adapting their malware to become more resistant to detection and analysis. In fact, these are just some of the behaviours we observed in a case-by-case longitudinal study on botnets targeted by law enforcement (one of which involved Dyre, a predecessor of the TrickBot malware). A pre-print of this study is soon to be released.

So, no, I’m not proposing to discuss the long-term efficacy of takedown operations such as this. That is for another blog post.

Rather, what I want to discuss (or, perhaps, more accurately, put forward as some initial thoughts) are the potential implications of Microsoft’s legal approach to obtaining the court order (which is incumbent for such operations) on future botnet takedowns, particularly in the area of malicious code reuse.

Continue reading Thoughts on the Future Implications of Microsoft’s Legal Approach towards the TrickBot Takedown

Winkle – Decentralised Checkpointing for Proof-of-Stake

Several blockchain projects are considering proof-of-stake mechanisms in place of proof-of-work, attracted by the lower energy costs. Some proof-of-stake protocols based on BFT systems such as HotStuff or Tendermint appear to provide faster and deterministic finality. In these protocols, a set of nodes known as validators, identified by their public keys, operates the consensus protocol, such that any user can verify it using only publicly available information by checking the validators’ signatures. The set of validators changes periodically, according to a specific governance mechanism.

However, like any consensus protocol that is not based on resource consumption (such as proof-of-work, proof-of-space, and so on), these protocols are vulnerable to an attack known in the literature as a Long-Range Attack. In a Long-Range Attack, an adversary obtains the secret keys of past validators (e.g., by bribing them, at no cost to them since they no longer use these keys) and is thus able to rewrite the entire history of the blockchain. A user who has been offline for a long period of time could then be fooled by the adversarial chain.

[Figure: The number of keys holding a given fraction of stake (logarithmic scale).]

To solve this problem, we propose Winkle, a decentralised checkpointing mechanism operated by coin holders, whose keys are harder to compromise than validators’ keys because they are far more numerous. For comparison, in Bitcoin, taking control of one-third of the total supply of money would require compromising at least 889 keys, whereas only 4 mining pools control more than half of the hash power (see figure above).

Our Protocol

The idea of Winkle is that coin holders will checkpoint the honest chain, such that if an adversary creates an alternative chain, its chain will not be checkpointed (since the adversary does not control the keys of coin holders) and is thus easily differentiable from the honest chain.
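The core check can be sketched as follows. This is a deliberate simplification for illustration – Winkle itself handles delegation to validators, coins that move after signing, and related subtleties – and the function and parameter names are mine, not the paper’s:

```python
def is_checkpointed(block_hash, checkpoint_sigs, balances, threshold=2/3):
    """Accept a checkpoint for `block_hash` once the coin holders who
    signed it collectively hold more than `threshold` of the total stake.

    `checkpoint_sigs` maps each signing coin holder to the block hash
    they attested to; `balances` maps each coin holder to their stake."""
    total_stake = sum(balances.values())
    signed_stake = sum(balances[holder]
                       for holder, signed_hash in checkpoint_sigs.items()
                       if signed_hash == block_hash)
    return signed_stake > threshold * total_stake
```

Because an adversary mounting a Long-Range Attack controls old validator keys but not the coin holders’ keys, its alternative chain cannot accumulate enough signed stake to pass this check.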

Continue reading Winkle – Decentralised Checkpointing for Proof-of-Stake

The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

As evidence produced by a computer is often used in court cases, there are necessarily presumptions about the correct operation of the computer that produces it. At present, based on a 1997 paper by the Law Commission, it is assumed that a computer operated correctly unless there is explicit evidence to the contrary.

The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.

This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.

LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s belief in Colin Tapper’s statement in 1991 that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to have bugs (including undiscovered bugs), but the occurrence of a bug may not be noticeable.

LLTT put forward three recommendations. First, a presumption that any particular computer system failure is not caused by software is not justified, even for software that has previously been shown to be very reliable. Second, evidence of previous computer failure undermines a presumption of current proper functioning. Third, the fact that a class of failures has not happened before is not a reason for assuming it cannot occur.

Continue reading The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

Transparency, evidence and dispute resolution

Despite the ubiquity of computers in everyday life, resolving a dispute regarding the misuse or malfunction of a system remains hard to do well. A recent example of this is the now-concluded Post Office trial about the dispute between Post Office Limited and the subpostmasters who operate some Post Office branches on its behalf.

Subpostmasters offer more than postal services: savings accounts, payment facilities, identity verification, professional accreditation, and lottery services. These services can involve large amounts of money, and subpostmasters were held liable for losses at their branch. The issue is that the accounting is done by the Horizon accounting system, a centralised system operated by Post Office Limited, and subpostmasters claim that their losses are not the result of errors or fraud on their part but rather of a malfunction of, or malicious access to, Horizon.

This case is interesting not only because of its scale (a settlement agreement worth close to £58 million was reached) but also because it highlights the difficulty in reasoning about issues related to computer systems in court. The case motivated us to write a short paper presented at the Security Protocols Workshop earlier this year – “Transparency Enhancing Technologies to Make Security Protocols Work for Humans”. This work focused on how the liability of a party could be determined when something goes wrong, i.e., whether a customer is a victim of a flaw in the service provider’s system or whether the customer has tried to defraud the service provider.

Applying Bayesian thinking to dispute resolution

An intuitive way of thinking about this problem is to apply Bayesian reasoning. Jaynes makes a good argument that any logically consistent form of reasoning will lead to taking this approach. Following this approach, we can consider the odds form of Bayes’ theorem, expressed in the following way.

Odds form of Bayes' theorem

There is a good reason for considering the odds form of Bayes’ theorem over its standard form – it doesn’t just tell you whether someone is likely to be liable, but whether they are more likely to be liable than not: a key consideration in civil litigation. If a party is liable, the probability that there is evidence is high, so what matters is the probability that the same evidence would exist if the party were not liable. Useful evidence is, therefore, evidence that is unlikely to exist for a party that is not liable.
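For reference, the odds form shown in the figure above can be written as follows, taking ‘liable’ and ‘not liable’ as the competing hypotheses and E as the evidence:

```latex
\underbrace{\frac{P(\text{liable} \mid E)}{P(\text{not liable} \mid E)}}_{\text{posterior odds}}
=
\underbrace{\frac{P(E \mid \text{liable})}{P(E \mid \text{not liable})}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(\text{liable})}{P(\text{not liable})}}_{\text{prior odds}}
```

The likelihood ratio captures exactly the point made above: evidence is useful insofar as it is unlikely to exist when the party is not liable.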

Continue reading Transparency, evidence and dispute resolution

By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Here I describe analysis that I carried out with colleagues Albesë Demjaha and David Pym at UCL, which originally appeared at the STAST workshop in late 2019 (where it was awarded best paper). The work was the basis for a talk I gave at Cambridge Computer Laboratory earlier this week (I thank Alice Hutchings and the Security Group for hosting the talk, as it was also an opportunity to consider this work alongside themes raised in our recent eCrime 2019 paper).

Secure behaviour in organisations

Both research and practice have shown that security behaviours, encapsulated in policy and advised in organisations, may not be adopted by employees. Employees may not see how advice applies to them, find it difficult to follow, or regard the expectations as unrealistic. Employees may, as a consequence, create their own alternative behaviours in an effort to approximate secure working (rather than abandoning security altogether). Organisational support can then be critical to whether secure practices persist. Economics principles can be applied to explain how complex systems such as these behave the way they do, and so here we focus on informing an overarching goal to:

Provide better support for ‘good enough’ security-related decisions, by individuals within an organization, that best approximate secure behaviours under constraints, such as limited time or knowledge.

Traditional economics assumes decision-makers are rational, and that they are equipped with the capabilities and resources to make the decision which will be most beneficial for them. However, people have reasons, motivations, and goals when deciding to do something — whether they do it well or badly, they do engage in thinking and reasoning when making a decision. We must capture how the decision-making process looks for the employee, as a bounded agent with limited resources and knowledge to make the best choice. This process is more realistically represented in behavioural economics. And yet, behaviour intervention programmes mix elements of both of these areas of economics. It is by considering these principles in tandem that we explore a more constructive approach to decision-support in organisations.

Contradictions in current practice

A bounded agent often settles for a satisfactory decision, by satisficing rather than optimising. For example, the agent can turn to ‘rules of thumb’ and make ad-hoc decisions, based on a quick evaluation of perceived probability, costs, gains, and losses. We can already imagine how these restrictions may play out in a busy workplace. This leads us toward identifying those points of engagement at which employees ought to be supported, in order to avoid poor choices.

Continue reading By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Consider unintended harms of cybersecurity controls, as they might harm the people you are trying to protect

Well-meaning cybersecurity risk owners will deploy countermeasures in an effort to manage the risks they see affecting their services or systems. What is not often considered is that those countermeasures may produce unintended, negative consequences themselves. These unintended consequences can potentially be harmful, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including services of others).

Here, I describe a framework co-developed with several international researchers at a Dagstuhl seminar in mid-2019, resulting in an eCrime 2019 paper later in the year. We were drawn together by an interest in understanding unintended harms of cybersecurity countermeasures, and encouraging efforts to preemptively identify and avoid these harms. Our collaboration on this theme drew on our varied and multidisciplinary backgrounds and interests, including not only risk management and cybercrime, but also security usability, systems engineering, and security economics.

We saw it as necessary to focus on situations where there is often an urgency to counter threats, but where efforts to manage threats have the potential to introduce harms. As documented in the recently published seminar report, we explored specific situations in which potential harms may make resolving the overarching problems more difficult, and as such cannot be ignored – especially where potentially harmful countermeasures ought to be avoided. Example case studies of particular importance include tech-abuse by an intimate partner, online disinformation campaigns, combating CEO fraud and phishing emails in organisations, and online dating fraud.

Consider disinformation campaigns, for example. Efforts to counter disinformation on social media platforms can include fact-checking and automated detection algorithms behind the scenes. These can reduce the burden on users to address the problem. However, automation can also reduce users’ scepticism towards the information they see; fact-checking can be appropriated as a tool by any one group to challenge viewpoints of dissimilar groups.

We then see how unintended harms can shift the burden of managing cybersecurity to others in the ecosystem without them necessarily expecting it or being prepared for it. There can be vulnerable populations which are disadvantaged by the effects of a control more than others. An example may be legitimate users of social media who are removed from a platform – or have their content removed – due to traits or behaviour shared with malicious actors, e.g., referring to some of the same topics, irrespective of sentiment – an example of ‘Misclassification’ in the list below. If a user, user group, or their online activity is removed from the system, the risk owner for that system may not notice that problems have been created for these users – they simply will not see them, as their own actions have excluded them. Anticipating and avoiding unintended harms is therefore crucial before any such outcomes can occur.

Continue reading Consider unintended harms of cybersecurity controls, as they might harm the people you are trying to protect