“I am yet to meet a young person that has not experienced some form of abuse via tech”

Technology-facilitated abuse describes the misuse of digital systems such as smartphones or other Internet-connected devices to monitor, control and harm individuals. In recent years, increasing attention has been given to this phenomenon in school settings and the criminal justice system. Yet awareness in the healthcare sector is lacking. To address this gap, Dr Isabel Straw and Dr Leonie Tanczer from University College London (UCL) have been leading a new research project that examines technology-facilitated abuse in medical settings.

Technology-facilitated forms of abuse are on the rise, with perpetrators adapting digital technologies such as smartphones and drones, trackers such as AirTags, and spyware tools including parental control software, to cause harm. The impact of technology-facilitated abuse on patients may not always be immediately obvious to healthcare professionals. For instance, smart, Internet-connected devices have been shown to be misused in domestic abuse cases to inflict physical harm. Smart locks have been used to trap individuals inside their homes, smart thermostats have been used to subject victims to extremes of temperature, and remotely controlled lighting and sound systems have been manipulated to cause psychological distress. COVID-19 catalysed the proliferation of these technologies within our environment, with sales of smart devices rising 30% year-on-year. Yet, while these tools are marketed for their safety and convenience, they also provide new avenues for violence, harassment, and abuse.

The impact of technology-facilitated abuse is especially notable on young people. In recent years, pediatric safeguarding guidelines have been amended in response to increasing rates of knife crime, gang violence and drug trafficking in the UK. However, technology-facilitated abuse has evolved at a parallel rate and has not received the same level of attention. The impact of technology-facilitated abuse on children and teenagers may manifest as emotional distress, anxiety, and suicidal ideation. Koubel reports the exacerbation of mental health risks arising from websites that encourage self-harm, eating disorders, and suicide. Furthermore, technology-facilitated dating abuse and sextortion are increasing amongst adolescent populations. With 10% of children being affected by sexual solicitation online, the problem is widespread and under-investigated. As reported by Stonard et al. in “They’ll Always Find a Way to Get to You”, digital devices are playing an increasing role in relationship abuse amongst young people.

Vulnerable individuals frequently perceive medical settings as places of safety. Healthcare professionals thus have a role in providing both medical and psychosocial care to ensure these patients’ wellbeing. At present, clinical and patient management protocols are outdated and do not address the emerging threats of technology-facilitated abuse. If clinicians are to provide effective care to patients affected by the technological elements of abuse and violence, clinical safeguarding protocols need a radical update to assist professionals navigating these high-risk scenarios.

Continue reading “I am yet to meet a young person that has not experienced some form of abuse via tech”

JavaCard: The execution environment you didn’t know you were using

This is the story of the most popular execution environment, its shortcomings, and how open source and hacking saved the day.

According to recent revelations, the MINIX operating system is embedded in the Management Engine of all Intel CPUs released after 2015. A side-effect of this is that MINIX became known as the most widespread operating system in the world almost overnight. However, in the last decade another tiny OS has silently pushed itself into even more devices around the world while remaining unknown to most: JavaCard.

Your SIM card, credit cards, and loyalty cards are all most likely JavaCards.

With more than six billion JavaCards sold last year, and approximately 20 billion estimated to have been purchased in total, JavaCard is the winner no one knows about. The execution environment was designed in 1996 for devices with limited memory and processing capabilities and was the first smartcard platform to give developers the ability to execute the same applet on cards produced by different manufacturers. This was a breakthrough for the industry that established JavaCard as the default platform for applications in need of a secure, tamper-proof element.

*While JavaCard is technically not an OS in the standard sense (smart cards do have their own proprietary OSes), in practice it provides functionality very similar to modern embedded OSes. For instance, unless granted special privileges by the vendor, app developers are strictly confined within the JavaCard runtime environment; this distinguishes JavaCard from, e.g., the classic Java VM, where developers can also execute native applications in addition to those executed in the JVM.
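For readers new to the platform, the following is a minimal sketch of what a JavaCard applet looks like. It is illustrative only: the class name and the instruction byte (0x42) are arbitrary choices, and a real applet would additionally declare a package and be assigned an AID at deployment.

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

// A minimal JavaCard applet: the runtime instantiates it once at
// installation and then routes every incoming APDU to process().
public class HelloApplet extends Applet {

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new HelloApplet().register(); // register with the JavaCard runtime
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // the SELECT command itself needs no further handling
        }
        byte[] buffer = apdu.getBuffer();
        switch (buffer[ISO7816.OFFSET_INS]) {
            case (byte) 0x42:
                return; // illustrative instruction: reply with success (0x9000)
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}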

A malfunctioning operating system

An operating system is much more than the sum of its source code: it’s also the ecosystem built around it, including the specifications, support, and most importantly the application developers and the user community.

For JavaCard, the specification part is handled formally by Oracle and the Java Card Forum, which make periodic releases of the platform’s virtual machine (JCVM) specification, runtime library (JCRL) and application programming interface (JC API). All of these contribute towards homogeneity between cards from different manufacturers, aiming to ensure applet interoperability and a minimum level of support for basic cryptographic algorithms – at least in theory. In practice, our research has shown that no product on the market implements the JC API completely, and different cards support different sets of features (a defensive pattern for coping with this is sketched after the list below). This severely hinders the interoperability of applications and constrains developers to the limited subset of JavaCard features supported by most manufacturers. Developers who choose to use all the features provided by a specific card will, with high likelihood, forfeit the interoperability of their applets. Furthermore, some of those specifications inadvertently limit the scope of the platform. For instance, the API specification that lists all the calls potentially available to smart card applications ended up acting as an evolutionary bottleneck. This is due to:

  1. Approximately three-year-long API revision cycles that severely delay support for newer cryptographic algorithms (the last API revision was released in 2015).
  2. More complex cryptographic operations (especially asymmetric cryptography) requiring the design and production of dedicated hardware accelerators before newly added algorithms can actually be supported.
  3. A business model geared towards large corporations. Hence, newer cards are available only to those buying in large quantities, while smaller development houses and researchers are forced to work with cards that are five or more years old.
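In practice, applet developers cope with this fragmentation by probing which algorithms a given card actually implements, since a constant existing in the JC API is no guarantee that the card supports it. A minimal sketch of this defensive pattern follows (the choice of SHA-256 and the class name are illustrative):

import javacard.security.CryptoException;
import javacard.security.MessageDigest;

// Defensive probing: the JC API defines constants for many algorithms,
// but a given card may throw CryptoException.NO_SUCH_ALGORITHM for any
// of them, so support must be tested before use.
public class AlgorithmProbe {

    public static boolean supportsSha256() {
        try {
            MessageDigest.getInstance(MessageDigest.ALG_SHA_256, false);
            return true;
        } catch (CryptoException e) {
            // Typically CryptoException.NO_SUCH_ALGORITHM on older cards.
            return false;
        }
    }
}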

Continue reading JavaCard: The execution environment you didn’t know you were using

A witch-hunt for trojans in our chips

A Hardware Trojan (HT) is a malicious modification of the circuitry of an integrated circuit.


A malicious chip can make a device malfunction in several ways. It has been rumored that a hardware trojan implanted in a Syrian air-defense radar caused it to stop operating during an airstrike, instantly crippling the country’s situational awareness and threat response capabilities. In other settings, hardware trojans may leak encryption keys or other secrets, or even generate weak keys that can be easily recovered by the adversary.

This article introduces a new trojan-resilient architecture, discusses its motivation and outlines how it differs from existing solutions. The full paper (Vasilios Mavroudis, Andrea Cerulli, Petr Svenda, Dan Cvrcek, Dusan Klinec, George Danezis) has been presented in several academic and industrial venues, including DEF CON 25 and the 2017 ACM Conference on Computer and Communications Security.

The Challenge of Detecting HT

Judging from the abundance of governmental, industrial and academic projects concerned with the prevention and detection of hardware trojans, there is a consensus regarding the severity of the threat, and it is not taken lightly. DARPA has launched the “Integrity and Reliability of Integrated Circuits” program aiming to develop techniques for the detection of malicious circuitry. The Intelligence Advanced Research Projects Activity funded a project aiming to redesign the fabrication of integrated circuits, while various other initiatives are currently underway (e.g., the COST Action project on “Trustworthy Manufacturing and Utilization of Secure Devices” and the DoD Trusted Foundry program). In addition to these, numerous other industrial and academic projects propose new trojan detection techniques every year, only to be circumvented by follow-up work.

But do Hardware Trojans exist?

Ironically, there have so far been no cases where malicious circuitry has been detected in military-grade or even commercial chips. With nothing more than rumors to hint at the existence of hardware trojans (in places other than academic lab benches), one cannot but question their existence. In other words, is HT design and insertion too complex to be practical, or do our detection tools fail to detect the malicious circuitry embedded in the chips around us?

It could be that both are true: hardware trojans do not exist (yet), as malicious actors focus on other aspects of the hardware that are easier to compromise. In all cases where trojans were discovered, the erroneous behavior was traced to the chip’s firmware and not its circuitry. Interestingly, in the vast majority of those incidents the security flaws were attributed to honest fabrication mistakes (e.g., a manufacturer failing to disable a testing interface).

Intentional vs. Unintentional Errors

It is safe to always assume that an IC will fail in the worst possible way, at the worst possible time (see the Syrian air-defense incident above). This “crash n’ burn” approach is common in critical systems (e.g., airplanes, satellites, dams), where any divergence from normal operation will result in an irrecoverable failure of the whole system.

To mitigate the risk, critical system designers employ redundancy techniques to eliminate single points of failure and thus make their setups resilient to faults. A common example is the triple-redundant systems used in autopilots. These systems employ three identical chips sourced from disjoint supply chains and replicate all the navigation computations across them. This allows the system both to tolerate a misbehaving chip and to detect its presence.
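The voting logic at the heart of such a system can be as simple as the following sketch (a toy illustration, not drawn from any actual autopilot):

// Illustrative 2-out-of-3 majority vote over replicated computations.
// If at least two chips agree, their result wins and the dissenting
// chip is flagged; if all three disagree, the system fails safe.
public final class TripleVote {

    public static long vote(long a, long b, long c) {
        if (a == b || a == c) {
            return a; // 'a' agrees with at least one other chip
        }
        if (b == c) {
            return b; // the chip that produced 'a' is the outlier
        }
        throw new IllegalStateException("no majority: fail safe");
    }
}

Replicating a computation this way converts an arbitrary, even malicious, failure of a single chip into a detectable and tolerable event.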

It is noteworthy that those systems do not consider the cause of a chip malfunction, and simply assume that chips fail in the worst possible way. It follows that a malicious chip is not significantly different from a defective one. After all, any adversary sophisticated enough to design and insert a hardware trojan is capable of making it indistinguishable from honest manufacturing errors. Similarly, from an operational perspective it makes little sense to distinguish between trojans in the circuitry and trojans in the firmware, as the risk they pose for the system is identical.

Distributing Trust for Resilience

Given that it is impossible to achieve 100% detection rates of hardware trojans and errors, it is important that our devices maintain their security properties even in their presence. Our work introduces a new high-level device architecture that is resilient to both. At its core, it uses a redundancy-based architecture and secret-sharing protocols to distribute all secrets and computations among multiple chips. Hence, unless all chips are compromised by the same adversary, the security of the system remains intact. A key point is that those chips should originate from disjoint supply chains, to minimize the risk of the same adversary compromising more than one chip.

To evaluate its practicality in real-life applications, we built a Hardware Security Module (HSM) that performs standard cryptographic operations (e.g., key generation, decryption, signing) at a very high rate. HSMs are commonly used in operations where security is critical and an increased transaction throughput is needed (e.g., banking, certification authorities). A video demonstration and further details are available on our website. Finally, our work can easily be combined with all existing detection and prevention techniques to further decrease the likelihood of compromise.
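To make the distribution of secrets concrete, here is a toy n-out-of-n sketch in which a key is split into XOR shares so that no single chip ever holds it in full. This is only an illustration of the principle: the actual architecture runs multi-party protocols so that keys are never reassembled in one place, not even during generation or signing.

import java.security.SecureRandom;

// Toy n-out-of-n XOR secret sharing: each chip stores one share, and
// the key is only defined by the XOR of all shares. An adversary must
// compromise every chip (every supply chain) to learn the secret.
public final class XorSharing {

    public static byte[][] split(byte[] secret, int chips) {
        SecureRandom rng = new SecureRandom();
        byte[][] shares = new byte[chips][secret.length];
        for (int i = 0; i < chips - 1; i++) {
            rng.nextBytes(shares[i]);             // random shares
        }
        for (int j = 0; j < secret.length; j++) { // last share fixes the XOR
            byte acc = secret[j];
            for (int i = 0; i < chips - 1; i++) {
                acc ^= shares[i][j];
            }
            shares[chips - 1][j] = acc;
        }
        return shares;
    }

    public static byte[] combine(byte[][] shares) {
        byte[] secret = new byte[shares[0].length];
        for (byte[] share : shares) {
            for (int j = 0; j < secret.length; j++) {
                secret[j] ^= share[j];
            }
        }
        return secret;
    }
}

The property worth noting is that any strict subset of shares is statistically independent of the secret, which is exactly what makes sourcing chips from disjoint supply chains meaningful.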

Privacy analysis of the W3C Proximity Sensor specification

Mobile developers are familiar with proximity sensors. These provide information about the distance of an object from the device providing access to the sensor (e.g. a smartphone, tablet, laptop, or another Web of Things device). This object is usually the user’s head or hand.

Proximity sensors provide either binary values, such as far (from the object) or near (respectively), or a more verbose readout of the distance in centimeters.

There are many useful applications for proximity. For example, an application can turn off the screen if the user holds the device close to his/her head (face detection). It is also handy for avoiding the execution of undesired actions, possibly arising from scratching the head with the phone’s screen.

The Proximity Sensor API specification is being standardized by W3C. Every web site will be able to access this information. Let’s focus on the privacy engineering point of view.

Proximity is the distance between an object and the device. The latest version of the W3C Proximity Sensor API takes advantage of the soon-to-be-standard Generic Sensor API. The sensor provides the proximity distance in centimeters.

The use of proximity sensors is simple; an example displaying the current distance in centimeters using the modern syntax of the Generic Sensor API is as follows:

let sensor = new ProximitySensor();
sensor.start();
sensor.onchange = function(event) { console.log(event.reading.distance); };

This syntax might later allow requesting various proximity sensors (if the device has more than one), including those based on the sensor’s position (such as “rear” or “front”), but the current implementations generally still use the legacy version from the previous edition of the spec:

window.addEventListener('deviceproximity', function(event) {
  console.log(event.value);
});

From the perspective of privacy considerations, there is currently no significant difference between those two API versions.

Privacy analysis of Proximity Sensor API

The proximity sensor readout does not provide much data, so it may appear that there are no privacy implications, but there are good reasons for performing such an analysis. Let’s list two:

Firstly, designing new standards and systems with privacy in mind (privacy by design) is a required practice and a good idea. Secondly, in some circumstances even seemingly insignificant mechanisms can still have consequences from a privacy point of view. For example, multiple identifiers might be helpful in deanonymizing users, and non-obvious data leaks can surface as well.

So let’s discuss some of the possible issues.

The average distance from the device to the user’s face (i.e. the object) could be used to differentiate and discriminate between users. While the severity is hard to establish, future developments (e.g. combining the readout with other external data) cannot be ruled out.

If proximity patterns were individually attributable, they would offer a way to enhance user profiling based on the analysis of device use patterns. For example, the following could be obtained and analyzed:

  • Frequency of the user’s patterns of device use (i.e. waving a hand in front of the proximity sensor)
  • Frequency of the user’s zooming in and out (i.e. device close to the user’s head)
  • Patterns of use. Does the way the user holds the device vary during the day? How?
  • What are the mechanics of holding the device close to the user’s head? Can the distance vary?

Some users may use the mobile operating system’s zoom capability to increase the font or images, but others might casually prefer to hold the device closer to their eyes. Such behavioral differences can also be of note.

Proximity sensors can provide the following data: the current proximity distance, and max, the maximum distance readout supported by the device. If the value of max differed between implementations (e.g. among browsers or devices), it could form an identifier. And the two values (max, distance) combined could also be used as short-lived identifiers. Moreover, it is not so difficult to imagine a situation where those identifiers are actually not so short-lived.

Recommendations

First of all, is there a need to provide a verbose proximity readout at all? For example, is providing distance readouts of up to 150 cm necessary?

In general, a device (browser, a Web of Things device) should be capable of informing the user when a web site accesses proximity information. The user should also be able to inspect which web sites – and how frequently – accessed the API.

Finally, the sensors should provide a readout of the distance that is adequately coarse, i.e. no more verbose than applications genuinely need.

Proximity Sensors should also be subject to permissions.

Demonstration

At the moment, implementations of proximity sensors are limited. But a demonstration is running on my SensorsPrivacy research project (tested with Firefox Mobile on Android). In my case (Nexus 5), Firefox offered data in centimeters, but the verbosity was limited to 5 centimeters on my device. Again, this is a matter of hardware and software. In principle, the accuracy could be much higher, and one can imagine a sensor providing more accurate readouts (even up to 1 meter).

The screen dump below shows how the example demonstrates the use of proximity data (and whether it works in a given browser). The site changes its background color if an object is placed near the proximity sensor of a device.

The readout from the demonstration shows a relative time between events (first column), and the proximity distance (second column).

398  5  
1011 0  
404  5  
607  0  

The sensor on my device is reporting two values of distance: 0 (near) and 5 (far), in centimeters. In general, the granularity is subject to the following: standard, implementation and hardware.

And this suggests something, of course. So let’s complement the privacy analysis from the previous section to incorporate a timing analysis.

Even such a limited readout can help perform behavioural analysis, simply by profiling based on time series. As we can see, the sensor readout is quite sensitive with respect to time and can measure with sub-200 ms granularity. The demonstration proof of principle also shows the minimum relative time detected between events. Feel free to test the performance of your fingers and/or hardware!

Summary

When designing a standard or an implementation, paying attention to detail is imperative. This includes the consideration of even potential risks. This is especially the case if the software or system will be used by millions or hundreds of millions of users, i.e. if you are aiming for success.

Finally, standards and specifications are ideal places for issuing guidance and good practices.


This post originally appeared on Security, Privacy & Tech Inquiries, the blog of Lukasz Olejnik. An accompanying demonstration is available on SensorsPrivacy (a project studying the privacy of web sensors).

A Privacy Enhancing Architecture for Secure Wearable Devices

Why do we need security on wearable devices? The primary reason is that, being in direct contact with the user, wearable devices have access to private and sensitive user information more often than traditional technologies. The huge and increasing diversity of wearable technologies puts almost any kind of information at risk, from medical records to personal habits and lifestyle. For that reason, when considering wearables, it is particularly important to introduce appropriate technologies to protect these data, and it is paramount that both the user and the engineer are aware of the exact amount of information collected as well as the potential threats to the user’s privacy. Moreover, it also has to be considered that the privacy of the wearable’s user is not the only one at risk: more and more devices are not limited to recording the user’s activity, but can also gather information about people standing nearby.

This blog post presents a flexible privacy enhancing system, from its architecture down to the prototyping level. The system takes advantage of anonymous credentials and is based on the protocols developed by M. Chase, S. Meiklejohn, and G. Zaverucha in Algebraic MACs and Keyed-Verification Anonymous Credentials. Three main entities are involved in this multi-purpose system: a main server, wearable devices and localisation beacons. In this architecture, the server first issues anonymous credentials to the wearables. Then, each time a wearable reaches a particular physical location (gets close to a localisation beacon) where it desires to perform an action, it presents its credentials in order to ask the server for the execution of that particular action. Both the design of the wearable and the server remain generic and scalable in order to encourage further enhancements and easy integration into real-world applications; i.e., the central server can manage an arbitrary number of devices, each device can possess an arbitrary number of credentials, and the coverage area of the localisation system is arbitrarily extendable.

Architecture

The complete system architecture can be modelled as depicted in the figure below. Roughly speaking, a web interface is used to manage and display the device’s functions. Each user and admin accesses the system from that interface.

[Figure: the complete system architecture]

During the setup phase, the server issues the credentials to a selected device (according to the algorithms presented in Algebraic MACs and Keyed-Verification Anonymous Credentials), granting it a given privilege level. The credentials’ issuance is a short-range process: the wearable needs to be physically close to the server to allow the admins to verify, once and for all, the identity of the wearable’s user. In order to improve security and battery life, the wearable only communicates using extremely low-power and short-range radio waves (dotted line in the figure). The server beacons can be seen as extensions of the main server and have essentially two roles: the first is to operate as an interface between the wearables and the server, and the second is to act as an RF localisation system.

Each time a wearable granted sufficient privileges reaches a particular physical location (gets close to a localisation beacon), it presents its credentials in order to prove to the server that it possesses credentials over some attributes (without revealing them), and that these credentials were previously issued by the server itself. Note that the system preserves anonymity only if many users are involved (for each privilege level), but this is a classic requirement of anonymous systems. Finally, once the credentials have been successfully verified, the server issues a signed request to an external entity (which can be, for instance, an automatic door, an alarm system or any compatible IoT entity) to perform the requested action.
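To make the issue/present/verify/act sequence concrete, the following is a deliberately simplified, runnable toy of the flow. It replaces the anonymous-credential mathematics with a plain HMAC, so, unlike the real protocols, this presentation is neither anonymous nor unlinkable; every name and value in it is illustrative.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Toy stand-in for the credential flow. The real system uses the
// keyed-verification anonymous credentials of Chase, Meiklejohn and
// Zaverucha, where presentation reveals nothing about the credential;
// here a plain HMAC tag is used instead, purely to make the
// issue -> present -> verify -> act sequence concrete.
public final class WearableFlowToy {

    private static final byte[] SERVER_KEY =
        "demo-server-key".getBytes(StandardCharsets.UTF_8);

    static byte[] mac(String message) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(SERVER_KEY, "HmacSHA256"));
        return hmac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        // Setup (short range): the server binds a privilege level to the device.
        String attributes = "device=42;privilege=2";
        byte[] credential = mac(attributes);          // issued to the wearable

        // Later, at a beacon: the wearable presents its credential and the
        // requested action; the server recomputes and compares the tag.
        byte[] presented = credential;
        boolean valid = MessageDigest.isEqual(presented, mac(attributes));
        if (valid) {
            System.out.println("signed request: unlock-door-7"); // to the IoT entity
        }
    }
}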

Continue reading A Privacy Enhancing Architecture for Secure Wearable Devices