Managing conflicts between ethical principles and job duties

Despite its international context, discussion of the social implications of technology is surprisingly parochial. For example, the idea that individuals should have control over how their data is used is considered radical and innovative in the US, despite having been commonly accepted in Europe since the early 1980s. The same applies to including professional and ethical training as part of computer science curricula – while a recent move in many US institutions, it has been mandatory for BCS-accredited courses in the UK for as long as I can remember. One lesson from the UK’s experience, and one I think would help institutions following its lead, is that making students aware of ethics is not enough to protect society and individuals. There also need to be strong codes of conduct, built on ethical principles, which practitioners are expected to follow.

For most computer science practitioners in the UK, the codes of conduct of relevance are those of the field’s professional bodies – the BCS and the IET. They say roughly what you might expect – do a good job, follow instructions, avoid conflicts of interest, and consider the public interest. I’ve always found these a bit unsatisfactory, treating ethical decisions as the uncontroversial product of applying consistent rules of professional conduct. These rules, however, offer little help in reality, where practitioners face decisions in which every option comes at substantial personal or financial cost, where rules are inconsistent with each other and with ethical principles, and where there is substantial uncertainty about the consequences of their actions.

That’s why I am pleased to see that the ACM ethical code released today goes some way towards acknowledging the complex interaction between technology and society, and provides tools to help practitioners navigate the challenges. In particular, it gives some guidance on a topic I have long felt is sorely lacking from the BCS and IET codes – what to do when instructions from your employer conflict with the public interest. At best, the BCS and IET codes are silent on how to handle such situations – if anything, the BCS code puts more emphasis on acting “in accordance” with employer instructions than on the public interest, for which members need only “have due regard”. In contrast, the ACM code is clear “that the public good is the paramount consideration”.

The ACM code is also clear that ethical practice is the responsibility of all. Management should enact rules that require ethical practices – they “should pursue clearly defined organizational policies that are consistent with the Code and effectively communicate them to relevant stakeholders. In addition, leaders should encourage and reward compliance with those policies, and take appropriate action when policies are violated.” But the code also puts a duty on employees, through individual or collective action, to follow ethical practices even if management has not discharged its duty – “rules that are judged unethical should be challenged”.

The courses of action discussed in the ACM code are not limited to challenging rules; they extend to actively disrupting unethical practices – “consider challenging the rule through existing channels before violating the rule. A computing professional who decides to violate a rule because it is unethical, or for any other reason, must consider potential consequences and accept responsibility for that action”.

One specific example of such disruptive action is whistleblowing, which the code recognizes as a legitimate course of action in the right circumstances – “if leaders do not act to curtail or mitigate such risks, it may be necessary to ‘blow the whistle’ to reduce potential harm”. However, my one disappointment in the code is that such disclosures are restricted to being made only through the “appropriate authorities” even though such authorities are often ineffective at instituting organizational change or protecting whistleblowers.

Implementing ethical policies is not without cost, and when doing so runs against business opportunities, profit often wins. It is nevertheless helpful that the code suggests that “in cases where misuse or harm are predictable or unavoidable, the best option may be to not implement the system”. The UK banks currently saying they cannot prevent push-payment fraud, which results in life-changing losses to their customers, would do well to consider this principle. The current situation, where customers are held liable despite taking a normal level of care, is not an ethical practice.

Overall, I think this code is helpful and I am impressed at the breadth and depth of thought that clearly went into it. The code is also timely, as practitioners are now discovering their power to disrupt unethical practices through collective action and could take advantage of being given permission to do so. The next task will be to support and encourage the adoption of ethical principles, and to counteract the powerful forces that conflict with their practice.

JavaCard: The execution environment you didn’t know you were using

This is the story of the most popular execution environment, its shortcomings, and how open source and hacking saved the day.

According to recent revelations, the MINIX operating system is embedded in the Management Engine of all Intel CPUs released after 2015. A side-effect of this is that MINIX became known as the most widespread operating system in the world almost overnight. However, in the last decade another tiny OS has silently pushed itself into even more devices around the world while remaining unknown to most: JavaCard.

Your SIM card, credit cards, and loyalty cards are most likely all JavaCards.

With more than six billion JavaCards sold last year, and approximately 20 billion estimated to have been purchased in total, JavaCard is the winner no one knows about. The execution environment was designed in 1996 for devices with limited memory and processing capabilities and was the first smartcard platform to give developers the ability to execute the same applet on cards produced by different manufacturers. This was a breakthrough for the industry that established JavaCard as the default platform for applications in need of a secure, tamper-proof element.

*While JavaCard is technically not an OS in the standard sense (smart cards do have their own proprietary OSes), in practice it provides functionality very similar to that of modern embedded OSes. For instance, unless granted additional privileges by the vendor, app developers are strictly confined within the JavaCard runtime environment; this distinguishes JavaCard from, e.g., the classic Java VM, where developers can also execute native applications in addition to those executed in the JVM.
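
To make this concrete, the sketch below (Python, using the pyscard library) shows how a host application talks to a JavaCard applet: it selects the applet by its application identifier (AID) and exchanges APDUs, the same interface regardless of which manufacturer produced the card. This is only a hedged illustration – the AID and the command bytes are made-up placeholders, not those of any real applet.

```python
# Minimal sketch (assumptions: pyscard installed, a reader attached, and a
# hypothetical applet with the made-up AID below already loaded on the card).
from smartcard.System import readers
from smartcard.util import toHexString

AID = [0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01]  # placeholder applet identifier

reader = readers()[0]              # first attached smart card reader
conn = reader.createConnection()
conn.connect()

# ISO 7816-4 SELECT-by-AID command APDU
select = [0x00, 0xA4, 0x04, 0x00, len(AID)] + AID
data, sw1, sw2 = conn.transmit(select)
print("SELECT status: %02X %02X" % (sw1, sw2))    # 90 00 indicates success

# Send an applet-specific command APDU (CLA/INS bytes are illustrative only)
command = [0x80, 0x10, 0x00, 0x00, 0x00]
data, sw1, sw2 = conn.transmit(command)
print("Response:", toHexString(data))
```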

A malfunctioning operating system

An operating system is much more than the sum of its source code: it’s also the ecosystem built around it, including the specifications, support, and most importantly the application developers and the user community.

For JavaCard, the specification part is handled formally by Oracle and the Java Card Forum, who make periodic releases of the platform’s virtual machine (JCVM) specification, runtime library (JCRL), and application programming interface (JC API). All of these contribute towards homogeneity between cards from different manufacturers, aiming to ensure applet interoperability and a minimum level of support for basic cryptographic algorithms – at least in theory. In practice, our research has shown that no product on the market implements the JC API completely, and different cards support different sets of features. This severely hinders the interoperability of applications and constrains developers to the limited subset of JavaCard features supported by most manufacturers; developers who choose to use all the features provided by a specific card will, with high likelihood, sacrifice the interoperability of their applets. Furthermore, some of these specifications inadvertently limit the scope of the platform. For instance, the API specification, which lists all the calls potentially available to smart card applications, ended up acting as an evolutionary bottleneck. This is due to:

  1. Approximately three-year-long API revision cycles that severely delay support for newer cryptographic algorithms (the last API revision was released in 2015).
  2. More complex cryptographic operations (especially asymmetric cryptography) requiring the design and production of dedicated hardware accelerators before newly added algorithms can actually be supported.
  3. A business model geared towards large corporations, so newer cards are available only to those buying in large quantities, while smaller development houses and researchers are forced to work with cards that are five or more years old.

Continue reading JavaCard: The execution environment you didn’t know you were using

Will new UK rules reduce the harm of push-payment fraud?

On Friday’s Rip Off Britain I’ll be talking about new attempts by UK banks to prevent fraud, and the upcoming scheme for reimbursing the victims. While these developments have the potential to better protect customers, the changes could equally leave customers in a more vulnerable situation than before. What will decide between these two extremes is how well the rules surrounding these new schemes are designed.

The story begins in September 2016, when the consumer association – Which? – submitted a super-complaint to the UK Payment Systems Regulator (PSR) regarding push payment fraud – where a customer is tricked into transferring money into a criminal’s account. Such bank transfers are known as push payments because they are initiated by the bank sending the money, as opposed to pull payments, like credit and debit cards, where it is the receiving bank that starts the process. Banks claim that since the customer was involved in the process, they “authorised” the transaction, and so under UK and EU law the customer is not entitled to a refund. I’ve argued that this interpretation doesn’t match any reasonable definition of the word “authorised”, but nevertheless the term “authorised push payment scams” seems to have stuck as the common terminology for this type of fraud – much to the banks’ delight, I’m sure.

The Which? super-complaint asked for banks to be held liable for such frauds, and so to reimburse the victims unless the bank can demonstrate the customer acted with gross negligence. Which? argued that this approach would protect customers from a fraud that exists as a consequence of banks’ design decisions, and would give banks both a short-term incentive to prevent frauds they can stop and a medium-to-long-term incentive to make payment systems resistant to fraud. The response from the PSR was disappointing: it recognised that banks should do more, but rejected the recommendation to hold banks liable for this fraud and requested only that the banks collect more data. Nevertheless, the data collected proved useful in understanding the scale of the problem – £236 million stolen from over 42,000 victims in 2017, with banks able to recover only 26% of the losses. This revelation led to Parliament asking difficult questions of the PSR.

The PSR’s alternative to holding banks liable for push payment fraud is for victims to be reimbursed if they can demonstrate they have acted with an appropriate level of care and that the bank has not. The precise definition of each level of care was the subject of a consultation, and will now be decided by a steering group consisting of representatives of the banking industry and consumers. In my response to this consultation, I explained my reasons for recommending that banks be liable for fraud, including that fairly deciding whether customers met a level of care is a process fraught with difficulties. This is particularly the case because of the inequality in power between a bank and its customer, and because taking a banking dispute to court is ruinously expensive for most people, since the option of customers spreading the cost through collective actions was removed from the Financial Services Act. More generally, banks – as the designers of payment systems, with real-world understanding of their use – have the greatest capacity to mitigate the risks these systems introduce.

Nevertheless, if the rules for the reimbursement scheme are set up well, it would be a substantial improvement over the current situation. On the other hand, if the process is bad, it could entrench the worst of current practices. Because the PSR has decided that reimbursement should depend on compliance with a level of care, my response also set out what the process should be for defining these levels, and for adjudicating disputes.

Continue reading Will new UK rules reduce the harm of push-payment fraud?

Attack papers are case studies

We should treat attack papers like case studies: when we read them, review them, use them for evidence, and learn from them. This claim is not derogatory. Case studies are useful. But like anything, to be useful case studies need to be done and used appropriately.

Let’s be clear what I mean by attack paper. Any paper that reports how to attack some system. Any paper that includes details of an exploit, discloses a vulnerability, or demonstrates a proof-of-concept for breaching the security of a system. The efail paper that Steven discussed recently is an example. Security conferences are full of these; the ratio of attack papers to total papers varies per conference. USENIX Security tends to contain a fair few.

Let’s be clear what I mean by case study. I mean a scientific report that details a specific occurrence of interest as observed by the author. Case studies can be active, and include interviews or other questioning. They can be solely passive observation. Case studies can follow just one case in isolation, or might follow a series of related cases in similar ways for comparison. Case studies usually do not involve a planned intervention by the observer, otherwise we start to call them experiments. But they may track changes as the result of interventions outside the observer’s control.

What might change if we think about attack papers as case studies? We can apply our scientific experience from other disciplines. I’ve argued before that security is a science. We need to adapt scientific techniques, and other sciences might learn from what we do in security. But we need to be in a dialogue there. Calling attack papers what they are opens up this dialogue in several ways.

Continue reading Attack papers are case studies

Security code AutoFill: is this new iOS feature a security risk for online banking?

A new feature for iPhones in iOS 12 – Security Code AutoFill – is supposed to improve the usability of Two Factor Authentication but could place users at risk of falling victim to online banking fraud.

Two Factor Authentication (2FA), often referred to as Two Step Verification, is an essential element of many security systems, especially those that are online and accessed remotely. In most cases, it provides extended security by checking whether the user has access to a device. In SMS-based 2FA, for example, a user registers their phone number with an online service. When this service sees a login attempt for the corresponding user account, it sends a One Time Password (OTP), e.g. four to six digits, to the registered phone number. The legitimate user receives this code and can quote it during the login process, but an impersonator cannot.
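
As a rough illustration of the server side of this flow, the hedged sketch below generates a six-digit OTP, “sends” it to the registered number (here just printed), and later checks the user’s reply with a constant-time comparison. The function names and the expiry window are illustrative choices, not any particular service’s implementation.

```python
import hmac
import secrets
import time

OTP_VALIDITY_SECONDS = 300   # illustrative five-minute expiry window

def issue_otp():
    """Generate a six-digit one-time password and record when it was issued."""
    code = "{:06d}".format(secrets.randbelow(1_000_000))
    issued_at = time.time()
    # In a real service this would be sent by SMS to the registered number.
    print("SMS to registered number: your login code is", code)
    return code, issued_at

def verify_otp(expected_code, issued_at, submitted_code):
    """Check the user's submission: correct code, within the validity window."""
    if time.time() - issued_at > OTP_VALIDITY_SECONDS:
        return False
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected_code, submitted_code)

code, issued_at = issue_otp()
print(verify_otp(code, issued_at, code))       # True
print(verify_otp(code, issued_at, "000000"))   # False (unless very unlucky)
```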

At its developer conference WWDC18, Apple announced that it will automate this last step to improve the user experience of 2FA, with a new feature to be introduced in iOS 12. The Security Code AutoFill feature, currently available to developers in a beta version, allows the mobile device to scan incoming SMS messages for such codes and suggest them at the top of the default keyboard.

Description of new iOS 12 Security Code AutoFill feature (source: Apple)

Currently, these SMS codes rely on the user actively switching apps and memorising the code, which can take a couple of seconds. Some users deploy alternative strategies, such as memorising the code from the preview banner and hastily typing it in. Apple’s new iOS feature will require only a single tap from the user. This will make the login process faster and less error-prone, a significant improvement to the usability of 2FA. It could also translate into an increased uptake of 2FA among iPhone users.

Example of Security Code AutoFill feature in operation on iPhone (source: Apple)

If users synchronise SMS with their MacBook or iMac, the existing Text Message Forwarding feature will push codes from their iPhone and enable Security Code AutoFill in Safari.

Example of Security Code AutoFill feature synchronised with macOS Mojave (source: Apple)

Reducing friction in user interaction to improve technology uptake for new users, and to increase usability and satisfaction for existing users, is not a new concept. It has not only been discussed at length in academia but is also a common goal within industry, e.g. in banking. This is evident in how the financial and payment industry has encouraged contactless (Near Field Communication – NFC) payments, which make transactions below a certain threshold much quicker than traditional Chip and PIN payments.

Continue reading Security code AutoFill: is this new iOS feature a security risk for online banking?

Improving the auditability of access to data requests

Data is increasingly collected and shared, with potential benefits for both individuals and society as a whole, but people cannot always be confident that their data will be shared and used appropriately. Decisions made with the help of sensitive data can greatly affect lives, so there is a need for ways to hold data processors accountable. This requires not only ways to audit these data processors, but also ways to verify that the reported results of an audit are accurate, while protecting the privacy of individuals whose data is involved.

We (Alexander Hicks, Vasilios Mavroudis, Mustafa Al-Bassam, Sarah Meiklejohn and Steven Murdoch) present a system, VAMS, that allows individuals to check accesses to their sensitive personal data, enables auditors to detect violations of policy, and allows publicly verifiable and privacy-preserving statistics to be published. VAMS has been implemented twice: as a permissioned distributed ledger using Hyperledger Fabric, and as a verifiable log-backed map using Trillian. The paper and the code are available.
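
The Trillian-backed variant builds on a verifiable log, essentially an append-only Merkle tree over access-request records. The sketch below is not VAMS itself, just a toy illustration of the underlying idea: every logged request changes the tree’s root hash, so an auditor who tracks the published root can detect if records are later removed or altered. The record fields are hypothetical.

```python
import hashlib
import json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf hashes (toy version of a
    verifiable log: any change to a logged record changes the root)."""
    if not leaves:
        return h(b"")
    level = leaves
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Each access request is logged as a structured record (illustrative fields).
requests = [
    {"requester": "force-A", "subject": "user-123", "purpose": "investigation"},
    {"requester": "hospital-B", "subject": "user-456", "purpose": "treatment"},
]
leaves = [h(json.dumps(r, sort_keys=True).encode()) for r in requests]
print("root:", merkle_root(leaves).hex())

# Tampering with any logged request produces a different root, which an
# auditor comparing published roots would detect.
requests[0]["subject"] = "user-999"
leaves = [h(json.dumps(r, sort_keys=True).encode()) for r in requests]
print("root after tampering:", merkle_root(leaves).hex())
```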

Use cases and setting

Our work is motivated by two scenarios: controlling the access of law-enforcement personnel to communication records and controlling the access of healthcare professionals to medical data.

The UK Home Office states that 95% of serious and organised criminal cases make use of communications data. Annual reports published by the IOCCO (now under the IPCO name) provide some information about the request and use of communications data. There were over 750,000 requests for data in 2016, a portion of which were audited to provide the usage statistics and errors that can be found in the published report.

Not only is it important that requests are auditable, but the requested data can also be used as evidence in legal proceedings. In that case, it is necessary either to ensure the integrity of the data or to rely on representatives of data providers and expert witnesses, the latter being more expensive and requiring trust in third parties.

In the healthcare case, individuals usually consent to their GP, or any medical professional they interact with, having access to relevant medical records, but may have concerns about the way their information is then used or shared. The NHS regularly shares data with researchers or companies like DeepMind, sometimes in ways that may reduce individuals’ trust, despite the potential benefits to healthcare.

Continue reading Improving the auditability of access to data requests

Scanning the Internet for Liveness

Internet-wide scanning (or probing) has emerged as a key measurement technique to study a diverse set of the Internet’s properties, including address space utilization, host reachability, topology, service availability, vulnerabilities, and service discrimination. But despite its widespread use and critical importance for Internet measurement, we still lack a clear understanding of IP liveness—whether a target IP address responds to a probe packet. What type of probe packets should we send if we, for example, want to maximize the responding host population? What type of responses can we expect and which factors determine such responses? What degree of consistency can we expect when probing the same host with different probe packets?

In our recent paper, Scanning the Internet for Liveness, we presented a systematic analysis of liveness and how it manifests in active scanning campaigns. We developed a taxonomy of liveness, which we used to design a method for performing concurrent IPv4 scans using ICMP, five TCP-based, and two UDP-based protocols, capturing all responses to our probes. Our key findings are:

  • Responsive host populations are highly sensitive to the choice of probe. While ICMP discovers the highest number of raw IPs, our TCP and UDP measurements exclusively contribute a fifth to the total population of responsive hosts.
  • Collecting ICMP Error messages for TCP and UDP scans increases the responsive population by more than 13%, and provides new opportunities to interpret scan results.
  • At the transport layer, our concurrent measurements reveal that the majority of hosts exhibit inconsistent behaviour when probed on different ports, and that capturing negative responses significantly improves scanning completeness.
  • Our concurrent scans allow us to identify nearly 2M tarpits (IPs masquerading as real hosts) that would bias measurements that do not take them into account.
  • Our study of cross-protocol liveness shows that responsiveness for some protocols is correlated, suggesting that the same seed set of responsive IP addresses can be potentially used to bootstrap multiple highly-correlated target populations to reduce scan traffic.
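
As a hedged sketch of what a single-host version of these concurrent probes might look like, the snippet below uses the scapy library to send one ICMP echo request and one TCP SYN to a target, recording whether each elicits any response, including ICMP errors. Real Internet-wide scanning requires explicit permission, rate limiting, and purpose-built tooling; this is only to illustrate the idea of protocol-dependent liveness.

```python
# Toy liveness probe (assumption: scapy is installed and the script is run
# with sufficient privileges to send raw packets). Probe only hosts you are
# authorised to scan.
from scapy.all import IP, ICMP, TCP, sr1

def probe(target, timeout=2):
    results = {}

    # ICMP echo request: the classic "ping" liveness probe.
    reply = sr1(IP(dst=target) / ICMP(), timeout=timeout, verbose=0)
    results["icmp_echo"] = reply is not None

    # TCP SYN to port 80: a SYN-ACK or RST both count as signs of life,
    # and so does an ICMP error (e.g. port or host unreachable).
    reply = sr1(IP(dst=target) / TCP(dport=80, flags="S"),
                timeout=timeout, verbose=0)
    if reply is None:
        results["tcp_80"] = "no response"
    elif reply.haslayer(TCP):
        results["tcp_80"] = "tcp reply (flags=%s)" % reply[TCP].flags
    elif reply.haslayer(ICMP):
        results["tcp_80"] = "icmp error (type=%d)" % reply[ICMP].type
    return results

print(probe("192.0.2.1"))   # TEST-NET address used purely as a placeholder
```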

This work recently appeared in the April 2018 issue of ACM SIGCOMM Computer Communication Review (CCR), and was conducted in collaboration with Philipp Richter (MIT), Mobin Javed (LUMS Pakistan, ICSI Berkeley), Srikanth Sundaresan (Princeton University), Zakir Durumeric (Stanford University), Steven J. Murdoch (University College London), Richard Mortier (University of Cambridge) and Vern Paxson (UC Berkeley, ICSI Berkeley). Overall, this study yields practical insights and methodological improvements for the design and the execution of active Internet measurement studies. We released the code and data of this work as open source to allow for reproducibility of the results, and to enable further research.

Continue reading Scanning the Internet for Liveness

Tampering with OpenPGP digitally signed messages by exploiting multi-part messages

The EFAIL vulnerability in the OpenPGP and S/MIME secure email systems, publicly disclosed yesterday, allows an eavesdropper to obtain the contents of encrypted messages. There’s been a lot of finger-pointing as to which particular bit of software is to blame, but that’s mostly irrelevant to the people who need secure email. The end result is that users of encrypted email, who wanted formatting better than what a mechanical typewriter could offer, were likely at risk.

One of the methods to exploit EFAIL relied on the section of the email standard that allows messages to be in multiple parts (e.g. the body of the message and one or more attachments) – known as MIME (Multipurpose Internet Mail Extensions). The authors of the EFAIL paper used the interaction between MIME and the encryption standard (OpenPGP or S/MIME as appropriate) to cause the email client to leak the decrypted contents of a message.

However, not only can MIME be used to compromise the secrecy of messages, but it can also be used to tamper with digitally-signed messages in a way that would be difficult if not impossible for the average person to detect. I doubt I was the first person to discover this, and I reported it as a bug 5 years ago, but it still seems possible to exploit and I haven’t found a proper description, so this blog post summarises the issue.

The problem arises because it is possible to have a multi-part email where some parts are signed and some are not. Email clients could have adopted the fail-safe option of considering such a mixed message to be malformed and therefore treated as unsigned or as having an invalid signature. There’s also the fail-open option where the message is considered signed and both the signed and unsigned parts are displayed. The email clients I looked at (Enigmail with Mozilla Thunderbird, and GPGTools with Apple Mail) opt for a variant of the fail-open approach and thus allow emails to be tampered with while keeping their status as being digitally signed.
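
To see why this is so easy to get wrong, here is a hedged sketch of the message structure involved, built with Python’s standard email library: a multipart message whose first part is (notionally) covered by an OpenPGP signature and whose second part is not. The signature handling itself is omitted; the point is purely that the MIME structure gives the recipient no visual cue about which parts a signature actually covers. Addresses and text are made up.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Outer multipart/mixed container: one signed part, one unsigned part.
outer = MIMEMultipart("mixed")
outer["From"] = "alice@example.com"
outer["To"] = "bob@example.com"
outer["Subject"] = "Payment instructions"

# Part 1: the text the sender actually signed (in a real message this would
# be a multipart/signed sub-structure carrying the OpenPGP signature).
signed_part = MIMEText("Please pay the attached invoice to account 12-34-56.")
outer.attach(signed_part)

# Part 2: text appended later by an attacker, with no signature at all.
unsigned_part = MIMEText("Correction: please use account 99-99-99 instead.")
outer.attach(unsigned_part)

# Fail-open clients render both parts as one message and still show it as signed.
print(outer.as_string())
```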

Continue reading Tampering with OpenPGP digitally signed messages by exploiting multi-part messages

“The pool’s run dry” – analyzing anonymity in Zcash

Zcash is a cryptocurrency whose main feature is a “shielded pool” designed to provide strong anonymity guarantees. Indeed, the cryptographic foundations of the shielded pool are based on highly regarded academic research. The deployed Zcash protocol, however, allows for transactions outside the shielded pool (which, from an anonymity perspective, are identical to Bitcoin transactions), and it can easily be observed from blockchain data that the majority of transactions do not use the pool. Nevertheless, users of the shielded pool should be able to treat it as their anonymity set when attempting to spend coins in an anonymous fashion.

In a recent paper, An Empirical Analysis of Anonymity in Zcash, we (George Kappos, Haaroon Yousaf, Mary Maller, and Sarah Meiklejohn) conducted an empirical analysis of Zcash to further our understanding of its shielded pool and broader ecosystem. Our main finding is that it is possible in many cases to identify the activity of founders and miners using the shielded pool (who are required by the consensus rules to put all newly generated coins into it). The implication for anonymity is that this activity can be excluded from any attempt to track coins as they move through the pool, which significantly shrinks the effective anonymity set for regular users. We have disclosed all our findings to the developers of Zcash, who have written their own blog post about this research. This work will be presented at the upcoming USENIX Security Symposium.

What is Zcash?

In Bitcoin, the sender(s) and receiver(s) in a transaction are publicly revealed on the blockchain. Like Bitcoin, Zcash has transparent addresses (t-addresses), but it gives users the option to hide the details of their transactions using private addresses (z-addresses). Private transactions are conducted using the shielded pool and allow users to spend coins without revealing the amount, the sender, or the receiver. This is possible due to the use of zero-knowledge proofs.

As in Bitcoin, new coins are created in public “coingen” transactions within new blocks, which reward the miners of those blocks. In Zcash, a percentage of the newly minted coins is also sent to the founders (a predetermined list of Zcash addresses owned by the developers and embedded into the protocol).
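
A hedged sketch of the kind of transaction classification this sort of analysis starts from: given parsed transactions from a Zcash node, decide whether each is fully transparent, moves coins into the pool (shielding), out of it (deshielding), or stays entirely within it. The field names below follow the pre-Sapling zcashd RPC output as I understand it and should be treated as illustrative.

```python
def classify(tx):
    """Classify a parsed Zcash transaction by how it uses the shielded pool.

    Assumed (illustrative) fields: 'vin'/'vout' are transparent inputs and
    outputs, 'vjoinsplit' is non-empty when the shielded pool is involved.
    """
    transparent_in = bool(tx.get("vin"))
    transparent_out = bool(tx.get("vout"))
    shielded = bool(tx.get("vjoinsplit"))

    if not shielded:
        return "transparent"                  # identical to a Bitcoin-style tx
    if transparent_in and not transparent_out:
        return "shielding (t -> z)"
    if transparent_out and not transparent_in:
        return "deshielding (z -> t)"
    if not transparent_in and not transparent_out:
        return "private (z -> z)"
    return "mixed"

example = {"vin": [], "vout": [{"value": 2.5}], "vjoinsplit": [{}]}
print(classify(example))    # deshielding (z -> t)
```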

Continue reading “The pool’s run dry” – analyzing anonymity in Zcash

Scanning beyond the horizon: long-term planning for cybersecurity and the post-quantum challenge

I recently came across an interesting white paper published by PwC, “A false sense of security? Cyber-security in the Middle East”. This paper is interesting for a number of reasons. Most obviously, I guess, it’s about an area of the world that’s a bit different from that of my immediate experience in the West and which faces many well-reported challenges. Indeed, it seems, as reported in the PwC paper, that companies and governments in the region suffer from more cyberattacks, resulting in bigger financial losses, than anywhere else in the world.

The paper confirms that many of the problems faced by companies and governments in the Middle East are, as of course one would expect, exactly those faced by their Western counterparts – too often, the cybersecurity industry responds to incidents in a fire-fighting style, rolling out patches in rushed knee-jerk reactions to imminent threats.

The way to counteract these problems is, of course, to train cybersecurity professionals who will be capable of making appropriate strategic and tactical investments in security, and able to respond better to attacks. All well and good, but there is a global skills deficit in the cybersecurity industry, and this problem appears to be particularly acute in the Middle East, where it seems to be a notable contributory factor to the problems experienced in the region. The problem needs some long-term thinking: in the average user, we need to encourage good security behaviours, which are learned over many years; in the security profession, we need to ensure that there is sufficient upcoming talent to fill our growing needs over the next century.

Exploring this topic a bit, I came across a company called SiConsult, a security services provider (with which I have no personal connection) with offices in the Middle East. They have taken an initiative which provides students (or, indeed, I think anyone) with an interesting opportunity. They have been thinking about cryptography in the post-quantum world, and how to develop solutions and relevant expertise in the long term.

All public-key cryptography as we currently know it may be rendered insecure by the deployment of quantum computers. Your Internet connection to the bank, the keys protecting your Dropbox, and your secure messaging applications will all be compromised. But a quantum computer that can run Shor’s algorithm – and hence factorize large numbers in polynomial time – is still maybe ten years away (or five, or twenty, or …). So why should we care now? Well, the consequences of losing the protection of good public-key crypto would be very serious and, consequently, NIST (the US’s National Institute of Standards and Technology) is running a process to standardize quantum-resistant algorithms. The first round of submissions has just closed, but we will have to wait until 2025 for draft standards, which could be too late for some use cases.
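
For intuition about why Shor’s algorithm breaks factoring-based cryptography, the toy sketch below runs the classical reduction at the heart of the algorithm: once the multiplicative order of a random base modulo N is known, N’s factors usually follow from two gcd computations. Here the order is found by brute force, which only works for tiny N; the quantum speed-up lies entirely in finding that order efficiently for the 2048-bit moduli used in practice.

```python
import math
import random

def shor_classical_reduction(N):
    """Factor N using the order-finding reduction that Shor's algorithm exploits.

    The order-finding step is done by brute force here (feasible only for toy N);
    a quantum computer performs that step in polynomial time.
    """
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g, N // g           # lucky: a already shares a factor with N
        # Find the multiplicative order r of a modulo N (the quantum step).
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        if r % 2 == 1:
            continue                   # need an even order
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                   # trivial square root, try another a
        p = math.gcd(y - 1, N)
        if 1 < p < N:
            return p, N // p
        q = math.gcd(y + 1, N)
        if 1 < q < N:
            return q, N // q

print(shor_classical_reduction(15))    # e.g. (3, 5) or (5, 3)
```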

As a result of the process timeline, companies and academics are likely to search for their own solutions long before NIST standardizes theirs. SiConsult, the company I mentioned, is inviting students (or anyone else) to develop a quantum-safe messaging application for a small prize – the Post Quantum Innovation Challenge. What is interesting is that the company’s motivation here is not purely financial – they are not looking to retain ownership of any designs or applications that may be submitted to the competition – but rather they are looking to spark interest in post-quantum cryptography, search for new cybersecurity talent, and encourage cybersecurity education, especially in the Middle East.

Initiatives like the Post Quantum Innovation Challenge are needed to energise those who may be considering a career in cyber security, and to make sure that the talent pipeline keeps flowing for years to come. Importantly, the barrier to entry for PQIC is relatively low: anyone with an interest in security should consider entering. Perhaps it will go a little way towards solving both the quantum and the education long-term problems.