Analyzing privacy aspects of the W3C Vibration API

When web standards are designed, multiple scenarios that could affect privacy are considered, including even extreme ones; and this is a good thing. It’s best to predict the creative use and abuse of web features before they are exploited.

Vibration API

The mechanism allowing websites to use a device’s vibration motor is called the Vibration API. It allows a device to be vibrated in particular patterns. The argument to the navigator.vibrate() function is a list called a pattern: entries at even indices (counting from zero) are vibration durations in milliseconds, and entries at odd indices are the still periods between them. For example, a web developer can make the device vibrate for a specific duration, say 50 ms, and follow that with a still period of 100 ms using the following call:


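A minimal sketch of that call (the API is exposed on the browser’s navigator object as vibrate(); the feature check is a precaution, since not all browsers or devices support it):

```javascript
// Pattern: vibrate for 50 ms, then stay still for 100 ms.
const pattern = [50, 100];

// Feature-detect before calling: navigator.vibrate is not
// available in every browser or on every device.
if (typeof navigator !== "undefined" && "vibrate" in navigator) {
  navigator.vibrate(pattern);
}
```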
In certain circumstances this creates some interesting potential privacy risks. Let’s look at the Vibration API from a privacy point of view, considering a number of scenarios at various technical levels.

Toy de-anonymisation scenario

One potential risk is the identification of a particular person in real life. Imagine several people in the same room placing their devices on a table. At some point, one person’s device vibrates in a distinctive pattern, marking that individual out to an observer.
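As a rough sketch of how such a marking script might work (hypothetical: the tagToPattern encoding and the per-visitor tag are illustrative assumptions, not anything defined by the API):

```javascript
// Hypothetical sketch: encode a small per-visitor tag as a
// distinctive vibration pattern, so an observer counting pulses
// could single out one device among several on a table.
// The encoding (one 100 ms pulse per unit, 200 ms pauses) is
// purely illustrative.
function tagToPattern(deviceTag) {
  const pattern = [];
  for (let i = 0; i < deviceTag; i++) {
    pattern.push(100); // vibrate for 100 ms
    pattern.push(200); // stay still for 200 ms
  }
  return pattern;
}

const markingPattern = tagToPattern(3); // visitor tagged "3": three pulses

if (typeof navigator !== "undefined" && "vibrate" in navigator) {
  navigator.vibrate(markingPattern);
}
```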

How could such a script be delivered? One possibility is through web advertising infrastructure, which offers the capability of targeting individuals with considerable accuracy with respect to their location.


Microsoft Ireland: winning the battle for privacy but losing the war

On Thursday, Microsoft won an important federal appeals court case against the US government. The case centres on a warrant issued in December 2013, requiring Microsoft to disclose emails and other records for a particular email address related to a narcotics investigation. It transpired that these emails were stored in a Microsoft datacenter in Ireland, but the US government argued that, since Microsoft is a US company and can easily copy the data into the US, a US warrant would suffice. Microsoft argued that the proper way for the US government to obtain the data is through the Mutual Legal Assistance Treaty (MLAT) between the US and Ireland, where an Irish court would decide, according to Irish law, whether the data should be handed over to US authorities. Part of the US government’s objection to this approach was that the MLAT process is sometimes very slow, although the Irish government has committed to consider any such request “expeditiously”.

The appeals court decision is an important victory for Microsoft (following two lower courts ruling against them) because they sell their European datacenters as giving their European customers confidence that their data will be subject to the more stringent European privacy laws. Microsoft’s case was understandably supported by other technology companies in the same position, as well as civil liberties organisations such as the Electronic Frontier Foundation in the US and the Open Rights Group in the UK. However, I have mixed opinions about the outcome: while probably the right decision in this case, the wider consequences could be detrimental to privacy.

Both sides of the case wanted to set a precedent (if not legally, at least in practice). The US government wanted US law to apply to data held by US companies, wherever in the world the data resides. Microsoft wanted the location of the data to imply which legal regime applied, and so their customers could be confident that their own country’s laws will be respected, provided Microsoft have a datacenter in their own country (or at least one with compatible laws). My concern is that this ruling will give false assurance to customers of US companies, because in other circumstances a different decision could quite easily be taken.

We know about this case because Microsoft chose to challenge it in court, and were able to do so. This is the first time Microsoft has challenged a US warrant for data stored in their Irish datacenter, despite the datacenter having been in operation for three years prior to the case. Had the email address been associated with a more serious crime, or the demand for emails accompanied by a gagging order, it may not have been challenged. Microsoft and other technology companies may still choose to accept, or may even be forced to accept, the applicability of future US warrants to data they control, regardless of the court decision last week. One extreme way to compel compliance would be for the US to jail employees until its demands are met.

For this reason, I have argued that control over data is more important than where data resides. If a company does not have the technical capability to comply with an order, it is easier for them to defend their case, which protects both the company’s customers and staff. Microsoft have taken precisely this approach for their new German datacenters, which will be operated by staff in Germany working for a German “data trustee” (Deutsche Telekom). In contrast to their Irish datacenter, Microsoft staff will be unable to access customer data, except with the permission of and oversight from the data trustee.

While the data trustee model resists information being obtained through improper legal means, a malicious employee could still break the rules for personal gain, or the systems designed to process legal requests could be hacked. With modern security techniques it is possible to do better. End-to-end encryption for instant messaging is one such example, because (if designed properly) the communications provider does not have access to the messages they carry. A more sophisticated approach is “distributed consensus”, where a decision is only taken if a majority of participants agree. The consensus process is automated and enforced through cryptography, ensuring that rules are respected even if some participants are malicious. Critical decisions in the Tor network and in Bitcoin are taken this way. More generally, there is a growing recognition that purely legal or procedural mechanisms are insufficient to protect privacy. This is one of the common threads present in much of the research presented at the Privacy Enhancing Technologies Symposium, being held this week in Darmstadt: recognising that there will always be imperfections in software, people and procedures, and showing that individuals’ privacy can nevertheless still be protected.

Cybersecurity: Supporting a Resilient and Trustworthy System for the UK

Yesterday, the Royal Society published their report on cybersecurity policy, practice and research – Progress and Research in Cybersecurity: Supporting a Resilient and Trustworthy System for the UK. The report includes 10 recommendations for government, industry, universities and research funders, covering the topics of trust, resilience, research and translation. This major report was written based on evidence gathered from an open call, as well as meetings with key stakeholders, guided by a steering committee which included UCL members M. Angela Sasse and Steven Murdoch. Here, we summarise what we think are the most important signposts for cybersecurity research and practice.

The report points out that, as online technology and services touch nearly everyone’s lives, the role of cybersecurity is to support a resilient digital economy and society in the UK. Previously, the government focus was very much on national security – but it is just as important that we are able to secure our personal data, financial assets and homes, and that our decisions as consumers and citizens are not manipulated or subverted. The report rightly states that the national authority for cybersecurity needs to be transparent, expert and have a clear and widely-understood remit. The creation of the National Cyber Security Centre (NCSC) may be a first step towards this, but the report also points out that currently, it is to be under the control of GCHQ – and this is bound to be a problem given the lack of trust they have from parts of industry and civil society, as a result of their role in subverting the development of security standards in order to make surveillance easier.

The report furthermore recommends that the government preserve the robustness of encryption, including end-to-end encryption, and promote its widespread use. Encryption and other computer security measures provide the foundation that allows individuals to trust organisations, and attempts to weaken these measures in order to facilitate surveillance will create security risks and reduce robustness. Whether weaknesses are created by requiring fragile encryption algorithms or mandating exceptional access, these attempts increase the risk of unauthorised parties gaining access to sensitive computer systems.

The report also rightly says that companies need to take more responsibility for cybersecurity: to be a trustworthy business partner or service provider, they need to be competent and have the correct motivation. “Dumping” the risks associated with online transactions on customers or business partners who don’t have the skills and resources to deal with them, and hiding this in complex terms and conditions, is not trustworthy behaviour. Making companies liable for security failures will likely play a part in improving trustworthiness, but needs to be done carefully. Important open source software such as OpenSSL is developed by a handful of people in their spare time. When something goes wrong (such as Heartbleed), multi-billion dollar companies who built their business around open source software without contributing or even properly evaluating the risk should not be able to assign liability to the volunteer developers. Companies should also be transparent and be required to disclose vulnerabilities and breaches. The report calls for such disclosures to be made to a central body, but we would go further and recommend that they be disclosed to the customers exposed to risks as a result of the security failures.

In order to improve and demonstrate competence in cybersecurity, we need evidence-based guidance on state-of-the-art cybersecurity principles, standards and practices. These go further than just following widely used industry practice, or following craft knowledge based on expert opinion, and should be an ambitious set of criteria which have been demonstrated to make a pronounced improvement in security. A significant effort is required to transform what is currently a set of common practices (the term “best practice” is a misnomer) through empirical tests and measurements into a set of practices and tools that we know to be effective and efficient under real-world conditions (this is the mission of the Research Institute in Science of Cyber Security (RISCS), which has just started a new 5-year phase). The report in particular calls for research on ways to quantify the security offered by anonymization algorithms and anonymous communication techniques, as these perform a critical role in supporting privacy by design.

The report calls for more research, and new means to assess and support research. Cybersecurity is an international field, and research funders should seek peer review from the best expertise available internationally and remove barriers to international and multidisciplinary research. However, supporting multidisciplinary research should not come at the expense of addressing the many hard technical problems which remain. The report also identifies the benefits of challenge-led funding, where a research programme is led by a world-leading expert with substantial freedom in how research funds are distributed. For this model to work it is critical to create the right environment for recruiting international experts to both lead and participate in such challenges, which, as fellow steering-group member Ross Anderson has pointed out, the vote to leave the EU has seriously harmed. Finally, the report calls for improvements to the research commercialisation process, including that universities prioritise getting research out into the real world over trying to extract as much money as possible, and that new investment sources are developed to fill in the gaps left by traditional venture capital, such as for software developed for the public good.

Workshop: Theory and Practice of Secure Multiparty Computation

Members of the UCL information security group visiting the Aarhus rainbow panorama

The workshop was organized by CFEM and CTIC, and took place in Aarhus from May 30 until June 3, 2016. The speakers presented both theoretical advancements and practical implementations (e.g., voting, auction systems) of secure multiparty computation (MPC), as well as open problems and future directions.

The first day started with Ivan Damgård presenting TinyTable, a new simple 2-party secure computation protocol. Then Martin Hirt introduced the open problem of general adversary characterization and efficient protocol generation. The last two talks of the day discussed Efficient Constant-Round Multiparty Computation and Privacy-Preserving Outsourcing by Distributed Verifiable Computation.

The first session of the second day included two presentations on theoretical results, which introduced a series of three-round secure two-party protocols and their security guarantees, and fast circuit garbling under weak assumptions. On the practical side, Rafael Pass presented a formal analysis of the blockchain, and abhi shelat outlined how MPC can enable secure matchings. After the lunch break, probabilistic termination of MPC protocols and low-effort VSS protocols were discussed.

Yuval Ishai and Elette Boyle kicked off the third day by presenting constructions of function secret sharing schemes, and recent developments in the area. After the lunch break, a new hardware design enabling Verifiable ASICs was introduced and the latest progress on “oblivious memories” was discussed.

The fourth day featured presentations on RAMs, Garbled Circuits and a discussion on the computational overhead of MPC under specific adversarial models. Additionally, there were a number of presentations on practical problems, potential solutions and deployed systems. For instance, Aaron Johnson presented a system for private measurements on Tor, and Cybernetica representatives demonstrated Sharemind and their APIs. The rump session of the workshop took place in the evening, where various speakers were given at most 7 minutes to present new problems or their latest research discoveries.

On the final day, Christina Brzuska outlined the connections between different types of obfuscation and one-way functions, and explained why some obfuscators were impossible to construct. Michael Zohner spoke about OT extensions, and how they could be used to improve 2-party computation in conjunction with look-up tables. Claudio Orlandi closed the workshop with his talk on Amortised Garbled Circuits, which explained garbling tricks all the way from Yao’s original work up to the state of the art, and provided a fascinating end to the week.