Science “of” or “for” security?

The choice of preposition – science of security versus science for security – marks an important difference in mental orientation. This post grew out of a conversation last year with Roy Maxion, Angela Sasse and David Pym. Clarifying this small preposition will help us set expectations, understand goals, and ultimately give appropriately targeted advice on how to do better security research.

These small words (for vs. of) unpack into some big differences. Science for security seems to mean taking any scientific discipline or its results and using them to make decisions about information security. Thus, “for” is agnostic as to whether any work within security itself looks like science. Like the trend towards evidence-based medicine, science for security advocates evidence-based security decisions. This view is championed by RISCS here in the UK and is probably consistent with approaches like the New School of Information Security.

Science for security does not say security is not science; more accurately, it does not seem to care. The view is agnostic: whether security itself is a science does not matter. The point is that adapting other sciences for use by security is difficult enough, and that applying the methods of other sciences to security-relevant problems is what matters. There are many examples of this approach, in different flavours. We can see at least three: porting concepts, re-situating approaches, and borrowing methods. We adapt the first two from Morgan (2014).

Porting concepts

Economics of infosec is its own discipline (WEIS). Anderson (2001) applies economics by taking established principles from economics and using them to shed light on long-standing difficulties in infosec.

Re-situating approaches

This is when another science understands something, and we generalise from that instance to make a concrete application to security. We might argue that program verification takes this approach, re-situating understanding from mathematics and logic. Studies on keystroke dynamics similarly re-situate understanding from human psychology and physical forensics.

Borrowing methods

We might study a security phenomenon according to the methods of an established discipline. Usable security, for example, largely applies psychology- and sociology-based methods. Of course, specific challenges might arise in studying a new area such as security (Krol et al., 2016), but the approach remains science for security because the challenges result only in minor tweaks to the methods of the home discipline.

Continue reading Science “of” or “for” security?

The Government published its draft domestic abuse bill, but risks ignoring the growing threat of tech abuse

Dr Leonie Tanczer, who leads UCL’s “Gender and IoT” research team, reflects on the release of the draft Domestic Abuse Bill and points out that, in its current form, it lacks emphasis on emerging forms of technology-facilitated abuse.

On the 21st of January, the UK Government published its long-awaited Domestic Abuse Bill. The 196-page document covers a wide range of issues, from providing the first statutory definition of domestic abuse to recognising economic abuse as well as controlling and coercive non-physical behaviour. In recent years, abuse facilitated through information and communication technologies (ICT) has been growing. Efforts to mitigate these forms of abuse (e.g. social media abuse or cyberstalking) are already underway, but we expect new forms of “technology-facilitated abuse” (“tech abuse”) to become more commonplace amongst perpetrators.

We are currently seeing an explosion in the number of Internet-connected devices on the market, from gadgets like Amazon’s Alexa and Google’s Home hub, to “smart” home heating, lighting, and security systems as well as wearable devices such as smartwatches. What these products have in common is their networked capability, and many also include features such as remote, video, and voice control as well as GPS location tracking. While these capabilities are intended to make modern life easier, they also create new means to facilitate psychological, physical, sexual, economic, and emotional abuse as well as controlling and manipulating behaviour.

Although so-called “Internet of Things” (IoT) usage is not yet widespread (there were 7.5 billion total connections worldwide in 2017), GSMA expects there to be 25 billion devices globally by 2025. Sadly, we have already started to see examples of these technologies being misused. An investigation last year by the New York Times showed how perpetrators of domestic abuse could use apps on their smartphones to remotely control household appliances like air conditioning or digital locks in order to monitor and frighten their victims. In 2018, we saw a husband convicted of stalking after spying on his estranged wife by hacking into their wall-mounted iPad.

The risk of being a victim of tech abuse falls predominantly on women, and especially migrant women. This is a result of men still being primarily in charge of the purchase and maintenance of technical systems, as well as of women and girls being disproportionately affected by domestic abuse.

The absence of ‘tech abuse’ in the draft bill

While the four objectives of the draft Bill (promote awareness, protect and support, transform the justice process, improve performance) are to be welcomed, the absence of sufficient reference to the growing rise of tech abuse is a significant omission and missed opportunity.

Continue reading The Government published its draft domestic abuse bill, but risks ignoring the growing threat of tech abuse

TESSERACT’s evaluation framework and its use of MaMaDroid

In this blog post, we will describe and comment on TESSERACT, a system introduced in a paper to appear at USENIX Security 2019, and previously published as a pre-print. TESSERACT is a publicly available framework for the evaluation and comparison of systems based on statistical classifiers, with a particular focus on Android malware classification. The authors used DREBIN and our MaMaDroid paper as examples in this evaluation. They chose these because they are two important state-of-the-art papers that tackle the challenge from different angles, using different models and different machine learning algorithms. Moreover, DREBIN has already been reproduced by researchers, even though its code is no longer available, while MaMaDroid’s code is publicly available (the parsed data and the list of samples are available upon request). I am one of MaMaDroid’s authors, and I am particularly interested in projects like TESSERACT. Therefore, I will go through this interesting framework and attempt to clarify a few misinterpretations the authors made about MaMaDroid.

The need for evaluation frameworks

The information security community, and in particular its systems part, often feels that papers are rejected based on questionable decisions or, on the other hand, that papers should be more rigorous and respect certain important characteristics. Researchers from Dutch universities published a survey of papers that appeared at top venues in 2010 and 2015, evaluating whether these works committed “crimes” against the completeness, relevancy, soundness, and reproducibility of the work. They showed that the newer publications present more flaws. Even though the authors included their own works among those analysed, and did not word the paper as a wall of shame pointing the finger at specific articles, it has been seen as an attack on the community rather than an encouragement to produce more complete papers. To the best of my knowledge, unfortunately, the paper has not yet been accepted for publication. TESSERACT is another example of researchers’ efforts to make the community’s work more rigorous: most systems papers report accuracies close to 100% in all the tests performed; however, when some of them have been tested on different datasets, their accuracy was worse than a coin toss.
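A core idea behind such evaluation frameworks is that a malware classifier should only be tested on samples newer than everything it was trained on: a random train/test split lets “future” samples leak into training and inflates accuracy. Below is a minimal sketch of this kind of time-aware split on synthetic data – an illustration of the general principle only, not TESSERACT’s actual API or experimental setup, and the features, labels and timestamps are made up.

```python
# A minimal sketch of a time-aware train/test split on synthetic data.
# Illustrates the general principle only; NOT TESSERACT's actual API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 20))                    # hypothetical app features
y = rng.integers(0, 2, size=n)                  # 1 = malware, 0 = goodware
first_seen = rng.integers(2015, 2019, size=n)   # hypothetical first-seen year

def temporal_split(X, y, t, train_frac=0.7):
    """Split so that every training sample predates every test sample."""
    order = np.argsort(t, kind="stable")
    cut = int(len(order) * train_frac)
    tr, te = order[:cut], order[cut:]
    return X[tr], X[te], y[tr], y[te]

X_tr, X_te, y_tr, y_te = temporal_split(X, y, first_seen)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("time-aware F1:", f1_score(y_te, clf.predict(X_te)))
```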

These two works are part of a trend that I personally find important for our community: allowing works that chronologically follow others to be evaluated more fairly. Let me explain with a personal example: I recall my supervisor telling me that, at the beginning, he was not optimistic about MaMaDroid being accepted at the first attempt (NDSS 2017), because most of the previous literature reported results always over 98% accuracy, and a gap of a few percentage points can be enough for some reviewers to reject. When we asked a colleague for an opinion on the paper, before submitting it for peer review, this was his comment on the ML part: “I actually think the ML part is super solid, and I’ve never seen a paper with so many experiments on this topic.” We can see completely different reactions to the same specific part of the work.

TESSERACT

The goal of this post is to show TESSERACT’s potential while pointing out the small misinterpretations of MaMaDroid present in the current version of the paper. The authors contacted us to let us read the paper and check whether there were any misinterpretations, and I had a constructive meeting with them where we also had the opportunity to exchange opinions on the work. After the description of TESSERACT, there is a section on the misinterpretations of MaMaDroid in the paper. The authors told me that future versions will be updated according to what we discussed.

Continue reading TESSERACT’s evaluation framework and its use of MaMaDroid

Introducing Sonic: A Practical zk-SNARK with a Nearly Trustless Setup

In this post, we discuss a new zk-SNARK, Sonic, developed by Mary Maller, Sean Bowe, Markulf Kohlweiss and Sarah Meiklejohn. Unlike other SNARKs, Sonic does not require a trusted setup for each circuit, but only a single setup for all circuits. Further, the setup for Sonic never has to end, so it can be continuously secured by accumulating more contributions. This property makes it ideal for any system where there is not a trusted party, and there is a need to validate data without leaking confidential information. For example, a company might wish to show solvency to an auditor without revealing which products they have invested in. The construction is highly practical.

More about zk-SNARKs

Like all other zero-knowledge proofs, zk-SNARKs are a tool used to build applications where users must prove the validity of their data, such as in verifiable computation or anonymous credentials. Additionally, zk-SNARKs have the smallest proof sizes and fastest verifier time of all known techniques for building zero-knowledge proofs. However, they typically require a trusted setup process, introducing the possibility of fraudulent data being inserted by the actors who implemented the system. For example, Zcash uses zk-SNARKs to send private cryptocurrency transactions, and if its setup were compromised, a small number of users could generate an unlimited supply of currency without detection.

Characteristics of zk-SNARKs
🙂 Can be used to build many cryptographic protocols
🙂 Very small proof sizes
🙂 Very fast verifier time
😐 Average prover time
☹️ Requires a trusted setup
☹️ Security assumes non-standard cryptographic assumptions

In 2018, Groth et al. introduced a zk-SNARK that could be built from an updatable and universal setup. We describe these properties below and argue that they help mitigate the security concerns around trusted setup. However, unlike Sonic, Groth et al.’s setup outputs a large set of global parameters (on the order of terabytes), which would be unwieldy to store, update and verify.

Updatability

Updatability means that any user, at any time, can update the parameters, including after the system goes live. After a single honest user has participated, no party can prove fraudulent data. This property means that a distrustful user could update the parameters themselves and have personal confidence in the parameters from that point forward. The update proofs are short and quick to verify.
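To give a feel for why this works, here is a toy sketch – emphatically not Sonic’s actual construction – in which the “parameters” are a single group element g^s, and each contributor folds a fresh secret into the hidden exponent. The combined trapdoor is the product of all contributors’ secrets, so it stays unknown as long as any one contributor discards theirs; real updatable setups additionally publish the short proofs, mentioned above, that each update was computed correctly.

```python
# Toy illustration of an updatable parameter (NOT Sonic's construction):
# the public parameter is g^s; each update raises it to a fresh secret,
# so the hidden exponent becomes the product of all contributed secrets.
import secrets

P = 2**255 - 19   # a prime modulus; toy group, not a secure pairing group
G = 5             # toy generator

def initial_setup():
    s = secrets.randbelow(P - 2) + 1
    return pow(G, s, P)           # publish g^s and discard s

def update(param):
    s_i = secrets.randbelow(P - 2) + 1
    return pow(param, s_i, P)     # new parameter g^(s * s_i); discard s_i

crs = initial_setup()
for _ in range(3):                # three independent contributors update
    crs = update(crs)
# As long as ANY one contributor really discarded their secret, no party
# knows the final exponent, so no one can forge proofs under these params.
```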

Universality

Universality means that the same parameters can be used for any application using this zk-SNARK. Thus one can imagine including the global parameters in an open-source implementation, or one could use the same parameters for all smart contracts in Ethereum.

Why Use Sonic?

Sonic is universal, updatable, and has a small set of global parameters (on the order of megabytes). Proof sizes are small (256 bytes) and verifier time is competitive with the fastest zk-SNARKs in the literature. It is especially well suited to systems where the same zk-SNARK is run by many different provers and verified by many different parties, which is exactly the situation in many blockchain systems.

Continue reading Introducing Sonic: A Practical zk-SNARK with a Nearly Trustless Setup

Protecting human rights by avoiding regulatory capture within surveillance oversight

Regulation is in the news again as a result of the Home Office blocking surveillance expert Eric Kind from taking up his role as Head of Investigation at the Investigatory Powers Commissioner’s Office (IPCO) – the newly created agency responsible for regulating organisations that manage surveillance, including the Home Office. Ordinarily, it would be unheard of for a regulated organisation to be able to veto the appointment of staff to its regulator, particularly one established through statute as being independent. However, the Home Office was able to do so here by refusing to issue the security clearance required for Kind to do his job. The Investigatory Powers Commissioner therefore can’t override this decision; the Home Office doesn’t have to explain its reasoning, and there is no appeal process.

Behaviour like this can lead to regulatory capture – where the influence of the regulated organisation redirects regulation away from the public interest and toward the interests of the organisations being regulated. The mechanism of blocking security clearances is specific to activities relating to the military and intelligence, but the phenomenon of regulatory capture is much more widespread. Consequently, it has been well studied, and there’s a body of work describing tried and tested ways to resist it. If the organisations responsible for surveillance regulation were to apply these recommendations, it would improve both the privacy of the public and trust in the agencies carrying out surveillance. When we combine these techniques with advanced cryptography, we can do better still.

Regulatory capture is also a problem in finance – likely contributing to high-profile scandals like Libor manipulation and payment protection insurance mis-selling. In previous articles, we’ve discussed how regulators’ sluggish response to new fraud techniques has led to victims unfairly footing the bill. Such behaviour by regulators is rarely the result of clear corruption – regulatory capture is often more subtle. For example, the skills needed by the regulator may only be available by hiring staff from the regulated organisations, who bring their culture and mindset with them. Regulators’ staff often find career opportunities within the regulator limited, and so are reluctant to take a hard line against the regulated organisation and close off the option of getting a job there later – likely at a much higher salary. Regulatory capture resulting from this sharing of staff, and of their corresponding culture, is, I think, a key reason why surveillance oversight bodies have insufficient regard for the public interest.

Continue reading Protecting human rights by avoiding regulatory capture within surveillance oversight

Memes are taking the alt-right’s message of hate mainstream

Unless you live under the proverbial rock, you have surely come across Internet memes a few times. Memes are basically viral images, videos, slogans, etc., which might morph and evolve but eventually enter popular culture. When thinking about memes, most people associate them with ironic or irreverent images, from Bad Luck Brian to classics like Grumpy Cat.

Bad Luck Brian (left) and Grumpy Cat (right) memes.

Unfortunately, not all memes are funny. Some might even look as innocuous as a frog but are in fact well-known symbols of hate. Ever since the 2016 US Presidential Election, memes have been increasingly associated with politics.

Pepe The Frog meme used in a Brexit-related context (left), Trump as Perseus beheading Hillary as Medusa (center), meme posted by Trump Jr. on Instagram (right).

But how exactly do memes originate, spread, and gain influence on mainstream media? To answer this question, our recent paper (“On the Origins of Memes by Means of Fringe Web Communities”) presents the largest scientific study of memes to date, using a dataset of 160 million images from various social networks. We show how “fringe” Web communities like 4chan’s “politically incorrect” board (/pol/) and certain subreddits like The_Donald are successful in generating and pushing a wide variety of racist, hateful, and politically charged memes.

Continue reading Memes are taking the alt-right’s message of hate mainstream

Exploring the multiple dimensions of Internet liveness through holographic visualisation

Earlier this year, Shehar Bano summarised our work on scanning the Internet and categorising IP addresses based on how “alive” they appear to be when probed through different protocols. Today it was announced that the resulting paper won the Applied Networking Research Prize, awarded by the Internet Research Task Force “to recognize the best new ideas in networking and bring them to the IETF and IRTF”. This occasion seems like a good opportunity to recall what more can be learned from the dataset we collected but couldn’t include in the paper itself. Specifically, I will look at the multi-dimensional aspects of “liveness” and how these can be represented through holographic visualisation.

One of the most interesting uses of these experimental results was the study of correlations between responses to different combinations of network protocols. This application was only possible because the paper was the first to simultaneously scan multiple protocols and so give us confidence that the characteristics measured are properties of the hosts and the networks they are on, and not artefacts resulting from network disruption or changes in IP address allocation over time. These correlations are important because the combination of protocols responded to gives us richer information about the host itself when compared to the result of a scan of any one protocol. The results also let us infer what would likely be the result of a scan of one protocol, given the result of a scan of different ones.

In these experiments, 8 protocols were studied: ICMP, HTTP, SSH, HTTPS, CWMP, Telnet, DNS and NTP. The results can be represented as 2⁸ = 256 values placed in an 8-dimensional space, with each dimension indicating whether a host did or did not respond to a probe of that protocol. Each value is the number of IP addresses that respond to that particular combination of network protocols. Abstractly, this makes perfect sense, but representing an 8-d space on a 2-d screen creates problems. The paper dealt with this issue through dimensional reduction, projecting the 8-d space onto a 2-d chart showing the likelihood of a positive response to a probe, given a positive response to a probe on another single protocol. This chart is useful and easy to read, but hides useful information present in the dataset.
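To give a flavour of how such a chart is computed, here is a short sketch, with a randomly generated 0/1 host-by-protocol response matrix standing in for the real measurements, that derives, for each pair of protocols, the likelihood of a response on one given a response on the other:

```python
# Sketch of the conditional-response computation, using random data in
# place of the real scan results: cond_prob[i, j] estimates
# P(host responds on protocol i | host responds on protocol j).
import numpy as np

protocols = ["ICMP", "HTTP", "SSH", "HTTPS", "CWMP", "Telnet", "DNS", "NTP"]
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(100_000, 8))  # hypothetical 0/1 matrix

counts = responses.T @ responses                # co-response counts per pair
per_protocol = np.diag(counts).astype(float)    # responders per protocol
cond_prob = counts / per_protocol[None, :]      # divide column j by count_j

for name, row in zip(protocols, cond_prob):
    print(f"{name:6s}", np.round(row, 2))
```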

Continue reading Exploring the multiple dimensions of Internet liveness through holographic visualisation

New threat models in the face of British intelligence and the Five Eyes’ new end-to-end encryption interception strategy

As more and more services and messaging applications implement end-to-end encryption, law enforcement organisations and intelligence agencies have become increasingly concerned about the prospect of “going dark”. This is when law enforcement has the legal right to access a communication (e.g. through a warrant) but doesn’t have the technical capability to do so, because the communication may be end-to-end encrypted.

Earlier proposals from politicians took the approach of outright banning end-to-end encryption, which was met with fierce criticism by experts and the tech industry. The intelligence community has been slightly more nuanced, promoting protocols that allow for key escrow, where messages would also be encrypted under an additional key (e.g. one controlled by the government). Such protocols have been promoted by intelligence agencies as recently as 2016 and as early as the 1990s, but were also met with fierce criticism.
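To make the escrow idea concrete, here is a minimal sketch of the general pattern – an illustration only, not any agency’s actual proposal: in a hybrid encryption scheme, the symmetric message key is wrapped once for the intended recipient and once more under an escrow public key, so the holder of the escrow private key can also recover the plaintext.

```python
# Minimal key-escrow sketch (illustrative pattern, not a real proposal):
# the AES message key is wrapped for the recipient AND for an escrow key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)

msg_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(msg_key).encrypt(nonce, b"hello", None)

wrapped_for_recipient = recipient.public_key().encrypt(msg_key, OAEP)
wrapped_for_escrow = escrow.public_key().encrypt(msg_key, OAEP)  # extra copy

# The escrow key holder can recover the message without the recipient:
recovered = escrow.decrypt(wrapped_for_escrow, OAEP)
assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"hello"
```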

More recently, a new set of legislation in the UK, statements from the Five Eyes, and proposals from intelligence officials have suggested a “different” way of defeating end-to-end encryption – one akin to key escrow, but enabled on a “per-warrant” basis rather than by default. Let’s look at how this may affect threat models in applications that use end-to-end encryption in the future.

Legislation

On the 31st of August 2018, the governments of the United States, the United Kingdom, Canada, Australia and New Zealand (collectively known as the “Five Eyes”) released a “Statement of Principles on Access to Evidence and Encryption”, where they outlined their position on encryption.

The statement says:

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards.

The statement goes on to set out that technology companies have a mutual responsibility with government authorities to enable this process. At the end of the statement, it describes how technology companies should provide government authorities access to private information:

The Governments of the Five Eyes encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services that they create or operate in our countries. Governments should not favor a particular technology; instead, providers may create customized solutions, tailored to their individual system architectures that are capable of meeting lawful access requirements. Such solutions can be a constructive approach to current challenges.

Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.

Their position effectively boils down to requiring technology companies to provide a technical means of fulfilling court warrants that require them to hand over the private data of certain individuals, with the implementation left open to the technology company.

Continue reading New threat models in the face of British intelligence and the Five Eyes’ new end-to-end encryption interception strategy

UCL runs a digital security training event aimed at domestic abuse support services

In late November, UCL’s “Gender and IoT” (G-IoT) research team ran a “CryptoParty” (digital security training event) followed by a panel discussion which brought together frontline workers, support organisations, as well as policy and tech representatives to discuss the risk of emerging technologies for domestic violence and abuse. The event coincided with the International Day for the Elimination of Violence against Women, taking place annually on the 25th of November.

Technologies such as smartphones or platforms such as social media websites and apps are increasingly used as tools for harassment and stalking. Adding to the existing challenges and complexities are evolving “smart”, Internet-connected devices that are progressively populating public and private spaces. These systems, due to their functionalities, create further opportunities to monitor, control, and coerce individuals. The G-IoT project is studying the implications of IoT-facilitated “tech abuse” for victims and survivors of domestic violence and abuse.

CryptoParty

The evening represented an opportunity for frontline workers and support organisations to upskill in digital security. Attendees had the chance to learn about various topics, including phone, communication, Internet browser, and data security. They were trained by a group of so-called “crypto angels” – volunteers who provide technical guidance and support. Many of the trainers are affiliated with the global “CryptoParty” movement, and CryptoParty London specifically, as well as with Privacy International and the National Cyber Security Centre.

G-IoT’s lead researcher, Dr Leonie Tanczer, highlighted the importance of this event in light of the socio-technical research the team has pursued so far: “Since January 2018, we worked closely with the statutory and voluntary support sector. We identified various shortcomings in the delivery of tech abuse provisions, including practice-oriented, policy, and technical limitations. We set up the CryptoParty to bring together different communities to holistically tackle tech abuse and increase the technical security awareness of the support sector.”

Continue reading UCL runs a digital security training event aimed at domestic abuse support services

Justice for victims of bank fraud – learning from the Post Office trial

In London this week, a trial is being held over a dispute between the Justice for Subpostmasters Alliance (JFSA) and the Post Office, but the result will have far-reaching repercussions for anyone disputing computer evidence. The trial currently focuses on whether the legal agreements and processes set up by the Post Office are a fair basis for managing its relationship with the subpostmasters who operate branches on its behalf. Later, the court will assess whether the fact that the Post Office computer system – Horizon – indicates that a subpostmaster is in debt to the Post Office is sufficient evidence that the subpostmaster is indeed liable to repay the debt, even when the subpostmaster claims the accounts are incorrect due to computer error or fraud.

Disputes over Horizon have led to subpostmasters being bankrupted, losing their homes, or even being jailed, but these cases also echo the broader issues at the heart of the many phantom-withdrawal disputes I see between banks and their customers. Customers claim that money was taken from their accounts without their permission. The bank claims that its computer system shows that either the customer authorised the withdrawal or was grossly negligent, and so the customer is liable. The customer may also claim that the bank’s handling of the dispute was poor and that the contract with the bank protects the bank’s interests more than those of the customer, making it an unfair basis for managing disputes.

There are several lessons the Post Office trial will have for the victims of phantom withdrawals, particularly for cases of push payment fraud, but in this post, I’m going to explore why these issues are being dealt with first in a trial initiated by subpostmasters and not by the (far more numerous) bank customers. In later posts, I’ll look more into the specific details that are being disclosed as a result of this trial.

Continue reading Justice for victims of bank fraud – learning from the Post Office trial