On the security and privacy of the ultrasound tracking ecosystem

In April 2016, the US Federal Trade Commission (FTC) sent warning letters to 12 Google Play app developers. The letters were addressed to those who had incorporated the SilverPush framework in their apps, and reminded developers using tracking software of their obligation to explicitly inform their users, as required by Section 5 of the Federal Trade Commission Act. The incident was covered by the popular press and privacy concerns were raised. Shortly after, SilverPush claimed to have no active partnerships in the US, and the buzz subsided.

Unfortunately, as the incident was resolved relatively fast, very few technical details of the technology were made public. To fill in this gap and understand the potential security implications, we conducted an in-depth study of the SilverPush framework and all the associated technologies.

The development of the framework was motivated by the marketing industry's fast-growing interest in products that perform high-accuracy user tracking, and in the monetization schemes built on them. This resulted in high demand for cross-device tracking techniques with increased accuracy and reduced prerequisites.

The SilverPush framework fulfilled both of these requirements, as it provided a novel way to track users across their devices (e.g., TV, smartphone) without requiring any user action (e.g., logging into a single platform from all their devices). To achieve that, the framework realized a previously unseen cross-device tracking technique (ultrasound cross-device tracking, uXDT) that offered high tracking accuracy and came with various desirable features (e.g., easy to deploy, imperceptible by users). What differentiated this framework from existing ones was the use of high-frequency, inaudible ultrasonic beacons (uBeacons) as the channel for transmitting identifiers between the user's devices. This also gave uXDT a major advantage over competing technologies, as uBeacons can be emitted by most commercial speakers and captured by most commercial microphones, eliminating the need for specialized emission and/or capturing equipment.

Aspects of a little-known ecosystem

The low deployment cost of the technology fueled the growth of a whole ecosystem of frameworks and applications that use uBeacons for various purposes, such as proximity marketing, audience analytics, and device pairing. The ecosystem is built around the near-ultrasonic transmission channel, and enables marketers to profile users.

Unfortunately, users are often given limited information on the ecosystem's inner workings. This lack of transparency has drawn strong criticism from users, the security community, and regulators. Moreover, our security analysis revealed a false assumption in the uBeacon threat model that can be exploited by state-level adversaries to launch complex attacks, including one that de-anonymizes the users of anonymity networks (e.g., Tor).

On top of these, a more fundamental shortcoming of the ecosystem is its violation of the principle of least privilege, a consequence of how apps access the device's microphone. More specifically, any app that wants to employ ultrasound-based mechanisms needs to gain full access to the device's microphone, as there is currently no way to request access to the ultrasound spectrum alone. This clearly violates the least privilege principle: the app now has access to all audible frequencies, which allows a potentially malicious developer to request access to the microphone for ultrasound-pairing purposes and then use it to eavesdrop on the user's conversations. As a result, every ultrasound-enabled app risks being perceived as “potentially malicious” by its users.

Mitigation

To address these shortcomings, we developed a set of countermeasures aiming to protect users in the short and medium term. The first is an extension for the Google Chrome browser, which filters out all ultrasounds from the audio output of the websites the user loads. The extension actively prevents web pages from emitting inaudible sounds, and thus completely thwarts any unsolicited ultrasound-tracking attempts. Furthermore, we developed a patch for the Android permission system that allows finer-grained control over the audio channel and forces applications to declare their intention to capture sound from the inaudible spectrum. This properly separates the permissions for listening to audible sound and to sound in the high-frequency spectrum, and enables end users to selectively filter the ultrasound frequencies out of the signal acquired by the smartphone microphone.
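
To illustrate the kind of filtering the browser extension performs, here is a minimal sketch using the standard Web Audio API. This is not the extension's actual code (a real extension must intercept every audio path a page can use), and the 18 kHz cut-off is an assumption about where the uBeacon band begins.

// Minimal sketch: route a page's <audio> element (assuming one exists)
// through a low-pass filter so that near-ultrasonic content is attenuated
// before it reaches the speaker.
const ctx = new AudioContext();
const element = document.querySelector('audio');
const source = ctx.createMediaElementSource(element);
const lowpass = ctx.createBiquadFilter();
lowpass.type = 'lowpass';
lowpass.frequency.value = 18000; // assumed cut-off: pass audible sound only
source.connect(lowpass);
lowpass.connect(ctx.destination);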

More importantly, we argue that the ultrasound ecosystem can be made secure only through the standardization of the ultrasound beacon format. During this process, the threat model will be revised and the necessary security features for uBeacons will be specified. Once this process is completed, APIs for handling uBeacons can be implemented in all major operating systems. Such an API would provide methods for uBeacon discovery, processing, generation and emission, similar to those found in the Bluetooth Low Energy APIs. Thereafter, all ultrasound-enabled apps would need access only to this API, and not to the device's microphone, thus solving the over-privileging problem that exposed users' sensitive data to third-party apps.
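
As a rough sketch of what such an API could look like, consider the following. Every name here is hypothetical and invented purely for illustration; no such API has been standardized.

// Hypothetical uBeacon API (all names invented for illustration).
// An app receives decoded beacons and never touches raw microphone audio.
const scanner = new UBeaconScanner();      // hypothetical constructor
scanner.onbeacon = function(beacon) {
  console.log(beacon.id, beacon.payload);  // decoded identifier only
};
scanner.start();                           // would need only a uBeacon permission

// Emission side, mirroring BLE advertising APIs:
const emitter = new UBeaconEmitter();      // hypothetical
emitter.emit({ id: 'example-beacon', payload: 'opaque-data' });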

Discussion

Our work provides an early warning on the risks looming in the ultrasound ecosystem, and lays the foundations for the secure use of this set of technologies. However, it also raises several questions regarding the security of the audio channel. For instance, in a recent incident, a journalist accidentally injected commands into several Amazon Echo devices, which then allegedly tried to order products online. This underlines the need for security features in the audio channel. Unfortunately, due to the variety of use cases, a universal solution that could be applied to the lower communication layers seems unlikely. Instead, solutions must be sought in the higher communication layers (e.g., the application layer), and should be the outcome of careful threat modeling.

Inaugural Lecture: Zero-Knowledge Proofs

We held our annual ACE-CSR event in November 2016. The last talk was my inaugural lecture upon promotion to full professor. I did not write the summary below myself, hence the use of the third person, not because I now consider myself royalty! 🙂

In introducing Jens Groth, professor of cryptology, George Danezis, head of the information security group, commented that Groth's work provided the only viable solutions to many of the hard privacy problems he himself was tackling. To most qualified engineers, he said, the concept of zero-knowledge proofs seems impossible: the idea is to prove properties of a secret without revealing it. A zero-knowledge proof could, for example, verify the result of a computation on some data without revealing the data itself. Most engineers believe that you must choose between integrity and confidentiality; Groth has proved this is not true. In addition, Danezis praised Groth's work as highly creative, characterised by great mathematical depth and subtlety, and admired Groth's willingness to speak his mind fearlessly, even to government funders. Angela Sasse, head of the department, called Groth's work “security tools we're going to need for future generations”, and noted that, simultaneously with these other accomplishments, Groth helped put in place the foundation for the group as it is today.

Groth jokingly opted to structure his talk around papers he's had rejected, to illustrate how hard it can be to publish innovative research. The concept of zero-knowledge proofs originated with a 1985 paper by Shafi Goldwasser, Silvio Micali, and Charles Rackoff. Zero-knowledge proofs have three characteristics: completeness (the prover can convince the verifier that a true statement is true); soundness (a cheating prover cannot convince the verifier when the statement is false); and secrecy (no information other than the truth of the statement is leaked, even when the prover is interacting with a verifier who cheats). Groth illustrated the latter idea with a simple card trick: he asked an audience member to choose a card and then say only whether the card was a heart or not. If the respondent shows all the remaining cards that are not hearts, counting them proves that the selected card must be a heart, without revealing which heart it is.
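
The trick can be phrased as a tiny program. The sketch below is my own illustration of the argument, not anything presented in the lecture:

// Toy version of the card trick. The prover holds a secret card and claims
// it is a heart; the proof is to reveal the 39 non-hearts among the other
// 51 cards. If the secret card were not a heart, only 38 would remain.
const suits = ['hearts', 'spades', 'diamonds', 'clubs'];
const deck = [];
for (const suit of suits)
  for (let rank = 1; rank <= 13; rank++) deck.push({ suit, rank });

function prove(secret) {
  return deck.filter(c => !(c.suit === secret.suit && c.rank === secret.rank))
             .filter(c => c.suit !== 'hearts');
}

function verify(shown) {
  // 39 distinct non-hearts imply the hidden card is a heart, yet nothing
  // about which heart it is has been revealed.
  return shown.length === 39 && shown.every(c => c.suit !== 'hearts');
}

console.log(verify(prove({ suit: 'hearts', rank: 7 })));  // true
console.log(verify(prove({ suit: 'spades', rank: 2 })));  // false: only 38 cards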

Zero-knowledge proofs can be extended to more complicated statements. Groth listed some examples:

  • Asserting that a logical formula has an assignment to its variables that makes it true
  • Verifying that a graph is Hamiltonian – that is, it contains a cycle that visits each vertex exactly once
  • Showing that a set of inputs into a Boolean circuit will produce an output of 1
  • More generally, proving any statement of the form "U belongs to some NP-language L"

Groth could see many possible applications for these proofs: signatures, encryption, electronic cash, electronic auctions, internet voting, multiparty computation, and verifiable cloud computing. His career has focused on building versatile and efficient proofs, with the goal of moving them from being expensive and slow to costing just a fraction of the task being executed, so that people stop thinking about the cost and simply include them as a standard part of any transaction.

Continue reading Inaugural Lecture: Zero-Knowledge Proofs

Privacy analysis of the W3C Proximity Sensor specification

Mobile developers are familiar with proximity sensors. These provide information about the distance of an object from the device hosting the sensor (e.g. a smartphone, tablet, laptop, or another Web of Things device). This object is usually the user's head or hand.

Proximity sensors provide either a binary readout (the object is near or far) or a more verbose one, in centimeters.

There are many useful applications for proximity. For example, an application can turn off the screen if the user holds the device close to his/her head (face detection). It is also handy for avoiding the execution of undesired actions, such as those triggered by the head accidentally touching the phone's screen.

The Proximity Sensor API specification is being standardized by the W3C, and every web site will be able to access this information. Let's focus on the privacy engineering point of view.

Proximity is the distance between an object and the device. The latest version of the W3C Proximity Sensor API takes advantage of the soon-to-be-standard Generic Sensor API. The sensor provides the proximity distance in centimeters.

The use of proximity sensors is simple; an example displaying the current distance in centimeters, using the modern syntax of the Generic Sensor API, is as follows:

let sensor = new ProximitySensor();
sensor.start();
sensor.onchange = function(event) {
  console.log(event.reading.distance);
};

This syntax might later allow requesting various proximity sensors (if the device has more than one), including those based on the sensor’s position (such as “rear” or “front”), but the current implementations generally still use the legacy version from the previous edition of the spec:

window.addEventListener('deviceproximity', function(event) {
  console.log(event.value);
});

From the perspective of privacy considerations, there is currently no significant difference between those two API versions.

Privacy analysis of Proximity Sensor API

The proximity sensor readout does not provide much data, so it may appear that there are no privacy implications. But there are good reasons for performing such an analysis. Let's list two:

Firstly, designing new standards and systems with privacy in mind (privacy by design) is a required practice and a good idea. Secondly, in some circumstances even seemingly insignificant mechanisms can still have consequences from a privacy point of view. For example, multiple identifiers might be helpful in deanonymizing users, and non-obvious data leaks can surface as well.

So let’s discuss some of the possible issues.

The average distance from the device to the user's face (i.e. the object) could be used to differentiate and discriminate between users. While the severity of this issue is hard to establish, future developments (e.g. combining the readout with other external data) cannot be ruled out.

If proximity patterns were individually attributable, they would offer a way to enhance user profiling based on the analysis of device-use patterns. For example, the following could be obtained and analyzed:

  • Frequency of the user's patterns of device use (e.g. waving a hand in front of the proximity sensor)
  • Frequency of the user's zooming in and out (i.e. bringing the device close to the head)
  • Patterns of use: does the way the user holds the device vary during the day? How?
  • The mechanics of holding the device close to the user's head: can the distance vary?

Some users may use the mobile operating system’s zoom capability to increase the font or images, but others might casually prefer to hold the device closer to their eyes. Such behavioral differences can also be of note.

Proximity sensors can provide the following data: distance, the current proximity distance, and max, the maximum distance readout supported by the device. If the value of max differed between implementations (e.g. among browsers or devices), it could form an identifier. The two values combined (max, distance) could also be used as a short-lived identifier. Moreover, it is not so difficult to imagine situations where such identifiers are actually not so short-lived.
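
As a hedged illustration of this concern (my sketch, not code observed in the wild), a script could combine the two values as follows:

// Combine the static `max` value with the current reading into a
// quasi-identifier (DeviceProximityEvent exposes value, min and max).
window.addEventListener('deviceproximity', function(event) {
  const quasiId = event.max + ':' + event.value;  // e.g. "5:5"
  // A tracking script could attach quasiId to its requests and compare it
  // across visits for as long as the readout stays static.
  console.log('identifier component:', quasiId);
});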

Recommendations

First of all, is there a need to provide a verbose proximity readout at all? For example, is it necessary to provide readouts of the proximity distance up to 150 cm?

In general, a device (browser, a Web of Things device) should be capable of informing the user when a web site accesses proximity information. The user should also be able to inspect which web sites – and how frequently – accessed the API.

Finally, the sensors should provide a readout of the distance that is only as verbose as actually needed.

Proximity Sensors should also be subject to permissions.

Demonstration

At the moment, the implementations of proximity sensors are limited. But a demonstration is running on my SensorsPrivacy research project (tested against Firefox Mobile on Android). In my case (Nexus 5), Firefox offered data in centimeters, but the verbosity was limited to 5 centimeters on my device. Again, this is a matter of hardware and software. In principle, the accuracy could be much higher, and one can imagine a sensor providing more accurate readouts (even up to 1 meter).

The screen dump below shows how the demonstration uses proximity data (and lets you check whether it works in your browser). The site changes its background color if an object is placed near the proximity sensor of the device.

The readout from the demonstration shows the relative time between events (first column, in milliseconds) and the proximity distance (second column, in centimeters).

398  5  
1011 0  
404  5  
607  0  

The sensor on my device reports two values of distance: 0 (near) and 5 (far), in centimeters. In general, the granularity is subject to the standard, the implementation, and the hardware.

And this suggests something, of course. So let's complement the privacy analysis from the previous section with a timing analysis.

Even such a limited readout can help perform behavioural analysis, simply by profiling based on time series. As we can see, the sensor readout is quite sensitive with respect to time and can measure with sub-200 ms granularity. The demonstration proof of principle also shows the minimum relative time detected between events. Feel free to test the performance of your fingers and/or hardware!
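
A reconstruction of the measurement is just a few lines (the demonstration's actual code may differ):

// Log the time gap between consecutive proximity events together with the
// reported distance, reproducing the two-column readout shown above.
let last = performance.now();
window.addEventListener('deviceproximity', function(event) {
  const now = performance.now();
  console.log(Math.round(now - last), event.value);  // e.g. "398 5"
  last = now;
});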

Summary

When designing a standard, a project, or an implementation, paying attention to details is imperative. This includes the consideration of even potential risks. This is especially the case if the software or system will be used by millions or hundreds of millions of users, i.e. if you are aiming for success.

Finally, standards and specifications are ideal for issuing guidance and good practices.


This post originally appeared on Security, Privacy & Tech Inquiries, the blog of Lukasz Olejnik. An accompanying demonstration is available on SensorsPrivacy (a project studying the privacy of web sensors).

A Privacy Enhancing Architecture for Secure Wearable Devices

Why do we need security on wearable devices? The primary reason is that, being in direct contact with the user, wearable devices have access to very private and sensitive user information more often than traditional technologies. The huge and increasing diversity of wearable technologies puts almost any kind of information at risk, from medical records to personal habits and lifestyle. For that reason, when considering wearables, it is particularly important to introduce appropriate technologies to protect these data, and it is paramount that both the user and the engineer are aware of the exact amount of information collected, as well as the potential threats to the user's privacy. Moreover, it must also be considered that the privacy of the wearable's user is not the only one at risk. In fact, more and more devices are not limited to recording the user's activity, but can also gather information about people nearby.

This blog post presents a flexible privacy-enhancing system, from its architecture down to the prototyping level. The system takes advantage of anonymous credentials and is based on the protocols developed by M. Chase, S. Meiklejohn, and G. Zaverucha in Algebraic MACs and Keyed-Verification Anonymous Credentials. Three main entities are involved in this multi-purpose system: a main server, wearable devices, and localisation beacons. In this architecture, the server first issues anonymous credentials to the wearables. Then, each time a wearable reaches a particular physical location (gets close to a localisation beacon) where it desires to perform an action, it presents its credentials in order to ask the server to execute that particular action. Both the design of the wearable and the server remain generic and scalable in order to encourage further enhancements and easy integration into real-world applications; i.e., the central server can manage an arbitrary number of devices, each device can possess an arbitrary number of credentials, and the coverage area of the localisation system is arbitrarily extendable.

Architecture

The complete system architecture can be modelled as depicted in the figure below. Roughly speaking, a web interface is used to manage and display the device's functions, and each user and admin accesses the system from that interface.

[Figure: complete system architecture]

During the setup phase, the server issues the credentials to a selected device (according to the algorithms presented in Algebraic MACs and Keyed-Verification Anonymous Credentials), granting it a given privilege level. The credentials' issuance is a short-range process: the wearable needs to be physically close to the server, allowing the admins to verify, once and for all, the identity of the wearable's user. In order to improve security and battery life, the wearable only communicates using extremely low-power and short-range radio waves (dotted line in the figure). The server beacons can be seen as extensions of the main server and have essentially two roles: the first is to operate as an interface between the wearables and the server, and the second is to act as an RF localisation system. Each time a wearable granted with enough privileges reaches a particular physical location (gets close to a localisation beacon), it presents its credentials in order to prove to the server that it possesses credentials over some attributes (without revealing them), and that these credentials were previously issued by the server itself. Note that the system preserves anonymity only if many users are involved (for each privilege level), but this is a classic requirement of anonymous systems. Finally, once the credentials have been successfully verified, the server issues a signed request to an external entity (which can be, for instance, an automatic door, an alarm system or any compatible IoT entity) to perform the requested action.
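
To make the flow concrete, here is a toy sketch (Node.js) of the issue/present/verify round trip. It deliberately replaces the algebraic MAC with a plain HMAC and reveals the attributes at presentation time, so it provides none of the anonymity of the real scheme; it only shows the shape of the message flow.

// Toy message flow only: HMAC stands in for the algebraic MAC, and the
// presentation reveals the attributes the real scheme would hide.
const crypto = require('crypto');
const serverKey = crypto.randomBytes(32);  // keyed verification: server-only key

// Setup phase: the server issues a "credential" over the wearable's attributes.
function issue(attributes) {
  const mac = crypto.createHmac('sha256', serverKey)
                    .update(JSON.stringify(attributes)).digest();
  return { attributes, mac };
}

// Presentation at a beacon + verification: the server recomputes the MAC and
// checks the privilege level before signing the request for the IoT entity.
function verifyAndAuthorise(credential, requiredLevel) {
  const expected = crypto.createHmac('sha256', serverKey)
                         .update(JSON.stringify(credential.attributes)).digest();
  return crypto.timingSafeEqual(expected, credential.mac)
      && credential.attributes.level >= requiredLevel;
}

const cred = issue({ level: 2 });          // issued at short range during setup
console.log(verifyAndAuthorise(cred, 1));  // true: the action would be authorised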

Continue reading A Privacy Enhancing Architecture for Secure Wearable Devices

Yes, we have no receipts

Internet voting is a hard problem: there are many ways to fail, and the cost of failing is high. Furthermore, the security requirements appear to be self-contradictory at times. Verifiability requires that a voter, let's say Alice, must have sufficient knowledge about her ballot to detect if it was tampered with, while Privacy requires that Alice's vote remains secret, even if Alice herself is bribed or forced into revealing everything she knows.

Can we build a system that allows Alice to verify that her ballot remains unchanged, but also allows her to “forget” who she voted for?

In joint work with Véronique Cortier, Georg Fuchsbauer and David Galindo, we present a solution to this problem, in the form of BeleniosRF. We use rerandomizable encryption to change how ballots are encrypted without changing their contents. Digital signatures and zero-knowledge techniques ensure ballot integrity, and prevent any tampering during the rerandomization.
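
To see the core idea of rerandomizable encryption, here is a toy sketch of rerandomizable ElGamal over a tiny prime group. It is an illustration only: the parameters are insecure, and BeleniosRF itself works in pairing-friendly groups and binds signatures and zero-knowledge proofs to the ciphertexts, which this sketch omits.

// Toy rerandomizable ElGamal. The parameters are tiny and NOT secure.
const p = 467n, g = 2n;
const modpow = (b, e, m) => {
  let r = 1n; b %= m;
  for (; e > 0n; e >>= 1n, b = (b * b) % m) if (e & 1n) r = (r * b) % m;
  return r;
};

const sk = 127n, pk = modpow(g, sk, p);  // key pair

// encrypt(m; r) = (g^r, m * pk^r)
const encrypt = (m, r) => [modpow(g, r, p), (m * modpow(pk, r, p)) % p];

// Rerandomise by multiplying in a fresh encryption of 1: the ciphertext
// bits change, the plaintext does not.
const rerandomize = ([c1, c2], s) =>
  [(c1 * modpow(g, s, p)) % p, (c2 * modpow(pk, s, p)) % p];

// decrypt(c1, c2) = c2 * c1^(-sk), using Fermat for the inverse
const decrypt = ([c1, c2]) => (c2 * modpow(c1, p - 1n - sk, p)) % p;

const ballot = encrypt(42n, 33n);
const fresh = rerandomize(ballot, 99n);
console.log(decrypt(ballot), decrypt(fresh));  // both 42n, yet ciphertexts differ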

Verifiability and privacy

Verifiability means that there must be some way for voters to monitor the election such that they can detect foul play from other voters, or even officials. This is often split into two notions: individual verifiability, dealing with a voter monitoring the way her vote was processed (i.e. that it was not changed or simply thrown away), and universal verifiability, covering requirements for the entirety of the election (e.g. “every ballot must be valid”).

Privacy, in its simplest form, keeps the contents of a vote secret from malicious observers, or even authorities and other compromised voters. In plain terms, we don’t want hackers to decrypt votes, and we don’t allow authorities to decrypt votes when they’re not supposed to; current systems anonymize votes by shuffling or summing them before they are decrypted.

Unfortunately, the above discussion fails to cover one potential adversary: the voter herself. An adversarial voter might be fully corrupt, as in the case of a vote seller who does not value her vote apart from any potential financial gain, or simply be placed in a coercive environment where she is “encouraged” to vote the “right” way. A good voting system should aim to hinder vote sellers and protect coerced users. This might not be possible in all cases: if a voter is monitored 24/7 or simply prevented from voting, the problem may well be unsolvable.

Continue reading Yes, we have no receipts

QUUX: a QUIC un-multiplexing of the Tor relay transport

Latency is a key factor in the usability of web browsing. This has added relevance in the context of anonymity systems such as Tor, because the anonymity property is strengthened by having a larger user-base.

Decreasing the latency of typical web requests in Tor could encourage a wider user base, making it more viable for typical users who value their privacy and less conspicuous for the people who most need it. With this in mind for my MSc Information Security project at UCL, supervised by Dr Steven J. Murdoch, I looked at the transport subsystem used by the Tor network, hoping to improve its performance.

After a literature review of the area (several alternative transport designs have been proposed in the past), I started to doubt my initial mental model for an alternative design.

Data flow in Freedom (Murdoch, 2011)
Data flow in Tor (Murdoch, 2011)

These diagrams show an end-to-end design (Freedom) and hop-by-hop design (Tor) respectively. In the end-to-end design, encrypted IP packets are transported between relays using UDP, with endpoints ensuring reliable delivery of packets. In the hop-by-hop design, TCP data is transported between relays, with relays ensuring reliable delivery of data.

The end-to-end Freedom approach seems elegant, with relays becoming somewhat closer to packet routers; however, it also leads to longer TCP round-trip times (RTT) for web browser HTTP connections. Other things being equal, a longer TCP RTT will result in a slower transfer. Additional issues include the difficulty of ensuring fairness of utilisation (requiring an approach outlined by Viecco), and potentially greater vulnerability to latency-based attacks.

Therefore, I opted to follow the hop-by-hop transport approach that Tor currently takes. Tor multiplexes cells for different circuits over a single TCP connection between relay pairs; as a result, a lost packet for one circuit can hold up all circuits that share the same connection (head-of-line blocking). A long-lived TCP connection is beneficial for converging on an optimal congestion window size, but the approach suffers from head-of-line blocking and doesn't compete effectively with other TCP connections using the same link.

To remedy these issues, I made a branch of Tor which used a QUIC connection in place of the long-lived TCP connection. Because a QUIC connection carries multiple TCP-like streams, it doesn’t suffer from head-of-line blocking. The streams also compete for utilisation at the same level as TCP connections, allowing them to more effectively use either the link capacity or the relay-configured bandwidth limit.

Download time for a 320KiB file

Initial results from the experiments are promising, as shown above. There’s still a way to go before such a design could make it into the Tor network. This branch shows the viability of the approach for performance, but significant engineering work still lies ahead to create a robust and secure implementation that would be suitable for deployment. There will also likely be further research to more accurately quantify the performance benefits of QUIC for Tor. Further details can be found in my MSc thesis.

Battery Status readout as a privacy risk

Privacy risks and threats arise even in seemingly innocuous mechanisms; this happens fairly regularly.

Over a year ago, I was researching the risk of the W3C Battery Status API. The mechanism allows a web site to read the battery level of a device (smartphone, laptop, etc.). One of the positive use cases may be, for example, stopping the execution of intensive operations if the battery is running low.

Our privacy analysis of Battery Status API revealed interesting results.

Privacy analysis of Battery API

The battery status provides the following information:

  • the current level of battery (format: 0.0–1.0, for empty and full battery, respectively)
  • time to a full discharge of battery (in seconds)
  • time to a full charge of battery, if connected to a charger (in seconds)

These items are updated whenever a new value is supplied by the operating system.
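
For reference, reading these values takes only a few lines, via navigator.getBattery() as exposed by browsers implementing the specification:

// Read the battery status and subscribe to level changes.
navigator.getBattery().then(function(battery) {
  console.log('level:', battery.level);                     // 0.0 - 1.0
  console.log('dischargingTime:', battery.dischargingTime); // seconds
  console.log('chargingTime:', battery.chargingTime);       // seconds
  battery.addEventListener('levelchange', function() {
    console.log('new level:', battery.level);
  });
});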

It turns out that privacy risks may surface even in this kind of seemingly innocuous data and access mechanism.

Frequency of changes

The frequency of changes in the readouts reported by the Battery Status API potentially allows the monitoring of users' computer-use habits; for example, it potentially enables analysis of how frequently the user's device is under heavy use. This could lead to behavioral analysis.

Additionally, identical computer deployments in standard environments (e.g. at schools, work offices, etc.) are often behind a NAT. In simple terms, NAT allows a number of users to browse the Internet with a single, externally seen IP address. The ability to observe differences between otherwise identical computer installations potentially allows particular users to be identified (and targeted?).

Battery readouts as identifiers

The information provided by the Battery Status API is not always subject to rapid changes. In other words, this information may be static for a period of time, which in turn may give rise to a short-lived identifier. The situation gets especially interesting when we consider users who sometimes clear standard web identifiers (such as cookies). In such a case, a web script could analyse the identifiers provided by the Battery Status API, and this information could then possibly even lead to the re-creation of other identifiers. A simple sketch follows.
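
As a hedged illustration (my reconstruction, not necessarily the exact method from the full post), such a script might do the following:

// Combine the readouts into a quasi-identifier: while these values stay
// constant, two visits with cleared cookies could still be linked.
navigator.getBattery().then(function(battery) {
  const quasiId = [battery.level,
                   battery.chargingTime,
                   battery.dischargingTime].join('|');
  console.log('quasi-identifier:', quasiId);
});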

Continue reading Battery Status readout as a privacy risk

Steven Murdoch – Privacy and Financial Security

Probably not too many academic researchers can say this: some of Steven Murdoch’s research leads have arrived in unmarked envelopes. Murdoch, who has moved to UCL from the University of Cambridge, works primarily in the areas of privacy and financial security, including a rare specialty you might call “crypto for the masses”. It’s the financial security aspect that produces the plain, brown envelopes and also what may be his most satisfying work, “Trying to help individuals when they’re having trouble with huge organisations”.

Murdoch's work has a twist: “Usability is a security requirement,” he says. As a result, besides writing research papers and appearing as an expert witness, his past includes a successful start-up. Cronto, which developed a usable authentication device, was acquired by VASCO, a market leader in authentication, and its technology is now used by banks such as Commerzbank and Rabobank.

Developing the Cronto product was, he says, an iterative process that relied on real-world testing: “In research into privacy, if you build an unusable system two things will go wrong,” he says. “One, people won't use it, so there's a smaller crowd to hide in.” This issue affects anonymising technologies such as Mixmaster and Mixminion. “In theory they have better security than Tor but no one is using them.” And two, he says, “People make mistakes.” A non-expert user of PGP, for example, can't always accurately identify which parts of a message are signed and which aren't.

The start-up experience taught Murdoch how difficult it is to get an idea from research prototype to product, not least because what works in a small case study may not when deployed at scale. “Selling privacy remains difficult,” he says, noting that Cronto had an easier time than some of its forerunners since the business model called for sales to large institutions. The biggest challenge, he says, was not consumer acceptance but making a convincing case that the predicted threats would materialise and that a small company could deliver an acceptable solution.

Continue reading Steven Murdoch – Privacy and Financial Security

Moving towards security and privacy experiments for the real world

Jono and I recently presented our joint paper with Simon and Angela at the Learning from Authoritative Security Experiment Results (LASER) Workshop in San Jose, CA, USA. The workshop was co-located with the IEEE Security and Privacy Symposium. LASER has a different focus each year; in 2016, presented papers explored new approaches to computer security experiments that are repeatable and can be shared across communities.

Through our LASER paper, “Towards robust experimental design for user studies in security and privacy”, we wanted to advance the quest for better experiment design and execution. We proposed the following five principles for conducting robust experiments into usable security and privacy:

  1. Give participants a primary task
  2. Ensure participants experience realistic risk
  3. Avoid priming the participants
  4. Perform experiments double-blind whenever possible
  5. Define these elements precisely: threat model, security, privacy, and usability

Understanding users and their interaction with security is a blind spot for many security practitioners and designers. Learning from prior studies within and outside our research group, we have defined principles for conducting robust experiments into usable security and privacy. These principles are informed by efforts in other fields such as biology, qualitative research methods, and medicine, where four overarching experiment-design factors guided our principles:

Internal validity – The experiment is of “suitable scope to achieve the reported results” and is not “susceptible to systematic error”.

External validity – The result of the experiment “is not solely an artifact of the laboratory setting”.

Containment – There are no “confounds” in the results, and no experimental “effects are a threat to safety” of the participants, the environment, or society generally.

Transparency – “There are no explanatory gaps in the experimental mechanism” and the explanatory “diagram for the experimental mechanism is complete”, in that it covers all relevant entities and activities.

Continue reading Moving towards security and privacy experiments for the real world

Analyzing privacy aspects of the W3C Vibration API

When making web standards, multiple scenarios possibly affecting privacy are considered. This includes even extreme ones, and that is a good thing: it's best to predict the creative use and abuse of web features before they are exploited.

Vibration API

The mechanism allowing websites to utilize a device's vibration motor is called the Vibration API. The mechanism allows a device to be vibrated in particular patterns. The argument to the vibrate() function is a list called a pattern: entries at odd positions (first, third, and so on) are vibration durations in milliseconds, and the entries between them are still periods. For example, a web designer can make the device vibrate for a specific duration, say 50 ms, followed by a still period of 100 ms, using the following call:

navigator.vibrate([50, 100]);

In certain circumstances this can create several interesting potential privacy risks. Let’s look at the Vibration API from a privacy point of view. I will consider a number of scenarios on various technical levels.

Toy de-anonymisation scenario

One potential risk is the identification of a particular person in real life. Imagine several people in the same room placing their devices on a table. At some point, one person’s device vibrates in specific patterns. This individual might then become marked to a potential observer.
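
A minimal sketch of such a marking script (my illustration; the pattern values are arbitrary):

// Vibrate the targeted device in a distinctive pattern that an observer in
// the room could recognise. The pattern below is arbitrary.
if ('vibrate' in navigator) {
  navigator.vibrate([300, 100, 300, 100, 1000]);
}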

How could such a script be delivered? One possibility is through web advertising infrastructures. These offer the capability of targeting individuals with considerable accuracy (with respect to their location).

Continue reading Analyzing privacy aspects of the W3C Vibration API