Thoughts on the Libra blockchain: too centralised, not private, and won’t help the unbanked

Facebook recently announced a new project, Libra, whose mission is to be “a simple global currency and financial infrastructure that empowers billions of people”. The announcement has predictably been met with scepticism by organisations like Privacy International, regulators in the U.S. and Europe, and the media at large. This is wholly justified given the look of the project’s website, which features claims of poverty reduction, job creation, and more generally empowering billions of people, wrapped in a dubious marketing package.

To start off, there is the (at least for now) permissioned aspect of the system. One appealing aspect of cryptocurrencies is their potential for decentralisation and censorship resistance. It wasn’t uncommon to see the story of PayPal freezing WikiLeaks’ account in the first few slides of a cryptocurrency talk, as motivation for the field. Now, PayPal and other well-known providers of payment services are the ones operating nodes in Libra.

There is some valid criticism to be made of the permissioned aspect of a system that describes itself as a public good when other cryptocurrencies are permissionless. In practice, however, those permissionless systems are essentially centralised, with inefficient, energy-wasting mechanisms like Proof-of-Work requiring large investments from any party wishing to contribute.

There is a roadmap towards decentralisation, but it is vague. Achieving decentralisation, whether at the network or governance level, hasn’t been accomplished even in a priori decentralised cryptocurrencies. In this sense, Libra hasn’t really done worse so far. It already involves more members than there are important Bitcoin or Ethereum miners, for example, and they are also more diverse. However, this is more of a fault in existing cryptocurrencies than a quality of Libra.

Continue reading Thoughts on the Libra blockchain: too centralised, not private, and won’t help the unbanked

Digital Exclusion and Fraud – the Dark Side of Payments Authentication

Today, the Which? consumer rights organisation released the results from its study of how people are excluded from financial services as a result of banks changing their rules to mandate that customers use new technology. The research particularly focuses on banks now requiring that customers register a mobile phone number and be able to receive security codes in SMS messages while doing online banking or shopping. Not only does this change result in digital exclusion – customers without mobile phones or good network coverage will struggle to make payments – but as I discuss in this post, it’s also bad for security.

SMS-based security codes are being introduced to help banks meet their September 2019 deadline to comply with the Strong Customer Authentication requirements of the EU Payment Services Directive 2. These rules state that before making a payment from a customer’s account, the bank must independently verify that the customer really intended to make this payment. UK banks almost universally have decided to meet their obligation by sending a security code in an SMS message to the customer’s mobile phone and asking the customer to type this code into their web browser.
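
As an illustration of the mechanism, here is a minimal sketch of how a server might issue and check such a code. It assumes a hypothetical send_sms() helper, and it omits the rate limiting and the binding of the code to a specific transaction (“dynamic linking”) that PSD2 also requires.

```python
# A minimal sketch of server-side handling of an SMS security code.
# send_sms() is a hypothetical helper; a real deployment would also
# rate-limit attempts and bind the code to the specific payment being
# authorised ("dynamic linking" under PSD2).
import secrets
import time

CODE_TTL_SECONDS = 300
pending = {}  # transaction_id -> (code, expiry timestamp)

def issue_code(transaction_id, phone_number):
    code = f"{secrets.randbelow(10**6):06d}"  # six random digits
    pending[transaction_id] = (code, time.time() + CODE_TTL_SECONDS)
    send_sms(phone_number, f"Your payment code is {code}")  # hypothetical

def verify_code(transaction_id, submitted):
    code, expiry = pending.pop(transaction_id, (None, 0.0))
    return (code is not None
            and time.time() < expiry
            and secrets.compare_digest(code, submitted))
```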

The problem that Which? identified is that some customers don’t have mobile phones, some that do have mobile phones don’t trust their bank with the number, and even those who are willing to share their mobile phone number with the bank might not have network coverage when they need to make a payment. A survey of Which? members found that nearly 1 in 5 said they would struggle to receive the security code they need to perform online banking transactions or online card payments. Remote locations have poorer network coverage than average and it is these areas that are likely to be disproportionately affected by the ongoing bank branch closure programmes.

Outsourcing security

The aspect of this scenario that I’m particularly interested in is why banks chose SMS messages as a security technology in the first place, rather than, say, sending out dedicated authentication devices to their customers or making a smartphone app. SMS has the advantage that customers don’t need to install an app or carry around an extra authentication device. The bank also saves the cost of setting up new infrastructure, beyond hooking up their payment systems to the phone network. However, SMS has disadvantages – not only does it exclude customers in areas of poor network coverage, but it also effectively outsources security from the bank to the phone networks.

Continue reading Digital Exclusion and Fraud – the Dark Side of Payments Authentication

Efficient Cryptographic Arguments and Proofs – Or How I Became a Fractional Monetary Unit

In 2008, unfortunate investors found their life savings in Bernie Madoff’s hedge fund swindled away in a $65 billion Ponzi scheme. Imagine yourself back in time, offered an opportunity to invest in his fund, which had delivered stable returns for years, and pondering Madoff’s assurance that the fund was solvent and doing well. Unfortunately, neither Madoff nor any other hedge fund manager would take kindly to your suggestion of opening their books to demonstrate the veracity of the claim. And even if you somehow got access to all the internal data, it might take an inordinate effort to go through the documents.

Modern day computers share your predicament. When a computer receives the result of a computation from another machine, it can be critical to know whether that result is correct. If the computer had feelings, it would wish for the data to come with evidence of correctness attached. But the sender may not wish to reveal confidential or private information used in the computation. And even if the sender is willing to share everything, the cost of recomputation can be prohibitive.

In 1985, Goldwasser, Micali and Rackoff proposed zero-knowledge proofs as a means to give privacy-preserving evidence. Zero-knowledge proofs are convincing only if the statement they prove is true, e.g. that a computation is correct, yet they reveal no information except the veracity of the statement. Their seminal work shows verification is possible without having to sacrifice privacy.

In the following three decades, cryptographers have worked tirelessly at reducing the cost of zero-knowledge proofs. Six years ago, we began the ERC-funded project Efficient Cryptographic Arguments and Proofs, aimed at improving the efficiency of zero-knowledge proofs. In September 2018 the project came to its conclusion and, throwing usual academic modesty aside, we have made remarkable progress: several of our proof systems are provably optimal (up to a constant multiplicative factor).

As described in an earlier post, we improved the efficiency of generalised Sigma-protocols, reducing both the number of rounds in which the prover and verifier interact and the communication, with a proof size around 7 kB even for large and complex statements. Our proof techniques have been optimised and implemented in the Bulletproofs system, which is now seeing widespread adoption.
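
To give a flavour of what a Sigma-protocol is, here is a toy Schnorr proof of knowledge of a discrete logarithm, the classic three-move (commit, challenge, response) example. It is illustrative only: the parameters are tiny and the group is a plain integer group, rather than the elliptic-curve groups and far more elaborate inner-product arguments used in Bulletproofs.

```python
# Toy Schnorr Sigma-protocol: a three-move proof of knowledge of a
# discrete logarithm. Illustrative only: tiny parameters and a plain
# integer group, not the elliptic-curve groups used in practice.
import secrets

p = 2039   # small safe prime, p = 2q + 1
q = 1019   # prime order of the subgroup we work in
g = 4      # generator of the order-q subgroup (4 = 2^2 mod p)

x = secrets.randbelow(q)   # prover's secret witness
h = pow(g, x, p)           # public statement: h = g^x

# 1. Commit: prover picks a random nonce r and sends a = g^r
r = secrets.randbelow(q)
a = pow(g, r, p)

# 2. Challenge: verifier sends a random c
c = secrets.randbelow(q)

# 3. Response: prover sends z = r + c*x mod q
z = (r + c * x) % q

# Verify: g^z must equal a * h^c
assert pow(g, z, p) == (a * pow(h, c, p)) % p
print("proof accepted")
```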

We also developed highly efficient pairing-based non-interactive zero-knowledge proofs (aka zk-SNARKs). Here the communication cost is even lower in practice, enabling proofs to be just a few hundred bytes regardless of the size of the statement being proved. Their compactness and ease of verification make them useful in privacy-preserving cryptocurrencies and blockchain compression.

Continue reading Efficient Cryptographic Arguments and Proofs – Or How I Became a Fractional Monetary Unit

How Accidental Data Breaches can be Facilitated by Windows 10 and macOS Mojave

Inadequate user interface designs in Windows 10 and macOS Mojave can cause accidental data breaches through inconsistent language, insecure default options, and unclear or incomprehensible information. Users could accidentally leak sensitive personal data. Data controllers in companies might be unknowingly non-compliant with the GDPR’s legal obligations for data erasure.

At the upcoming Annual Privacy Forum 2019 in Rome, I will be presenting the results of a recent study conducted with my colleague Mark Warner, exploring the inadequate design of user interfaces (UI) as a contributing factor in accidental data breaches from USB memory sticks. The paper titled “Fight to be Forgotten: Exploring the Efficacy of Data Erasure in Popular Operating Systems” will be published in the conference proceedings at a later date but the accepted version is available now.

Privacy and security risks from decommissioned memory chips

The process of decommissioning memory chips (e.g. USB sticks, hard drives, and memory cards) can create risks for data protection. Researchers have repeatedly found sensitive data on devices they acquired from second-hand markets. Sometimes this data was from the previous owners, other times from third persons. In some cases, highly sensitive data from vulnerable people were found, e.g. Jones et al. found videos of children at a high school in the UK on a second-hand USB stick.

Data found this way had frequently been deleted but not erased, creating the risk that any tech-savvy future owner could access it using legally available, free-to-download software (e.g., FTK Imager Lite 3.4.3.3). Findings from these studies also indicate that previous owners intended to erase these files and prevent future access by unauthorised individuals, but failed to do so sufficiently. Moreover, these risks likely extend from the second-hand market to recycled memory chips – a practice encouraged under Directive 2012/19/EU on ‘waste electrical and electronic equipment’.
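
The distinction matters: deleting a file normally removes only the directory entry, leaving the underlying data recoverable. A best-effort alternative is to overwrite the file’s contents before unlinking it, as sketched below. Note that on flash media such as USB sticks, wear levelling means even this offers no guarantee, which is why full-device sanitisation or encryption is preferred.

```python
# Best-effort overwrite-then-delete of a single file. A sketch only:
# on flash media (USB sticks, SSDs), wear levelling means the overwrite
# may land on different physical cells, so this does NOT guarantee
# erasure; full-device sanitisation or encryption is more reliable.
import os

def overwrite_and_delete(path, passes=1):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to the device
    os.remove(path)                    # only then unlink the directory entry

# Example: overwrite_and_delete("/media/usb/holiday_photos.zip")
```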

The implications for data security and data protection are substantial. End-users and companies alike could accidentally cause breaches of sensitive personal data of themselves or their customers. The protection of personal data is enshrined in Article 8 of the Charter of Fundamental Rights of the European Union, and the General Data Protection Regulation (GDPR) lays down rules and regulations for the protection of this fundamental right. For example, data controllers could find themselves inadvertently in violation of Article 17 GDPR Right to Erasure (‘right to be forgotten’) despite their best intentions if they failed to erase a customer’s personal data – independent of whether that data was breached or not.

Seemingly minor design choices, the potential for major implications

The indication that people might fail to properly erase files from storage, despite their apparent intention to do so, is a strong sign of system failure. We have known for more than twenty years that unintentional failure of users at a task is often caused by the way in which these mechanisms are implemented, and by users’ lack of knowledge. In our case, these mechanisms are – for most users – the UI of Windows and macOS. When investigating these mechanisms, we found seemingly minor design choices that might facilitate unintentional data breaches. A few examples are shown below and are expanded upon in the full publication of our work.

Continue reading How Accidental Data Breaches can be Facilitated by Windows 10 and macOS Mojave

Science “of” or “for” security?

The choice of preposition – science of security versus science for security – marks an important difference in mental orientation. This post grew out of a conversation last year with Roy Maxion, Angela Sasse and David Pym. Clarifying this small preposition will help us set expectations, understand goals, and ultimately give appropriately targeted advice on how to do better security research.

These small words (for vs. of) unpack into some big differences. Science for security seems to mean taking any scientific discipline or results and using that to make decisions about information security. Thus, “for” is agnostic as to whether there is any work within security that looks like science. Like the trend for evidence-based medicine, science for security would advocate for evidence-based security decisions. This view is advocated by RISCS here in the UK and is probably consistent with approaches like the New School of Information Security.

Science for security does not say security is not science. More accurately, it seems not to care: the view is agnostic as to whether security is itself a science. The point seems to be that there is enough difficulty in adapting other sciences for use by security, and that applying the methods of other sciences to security-relevant problems is what matters. There are many examples of this approach, in different flavours. We can see at least three: porting concepts, re-situating approaches, and borrowing methods. We’re adapting the first two from Morgan (2014).

Porting concepts

Economics of infosec is its own discipline (WEIS). The way Anderson (2001) applies economics is to take established principles from economics and use them to shed light on long-standing difficulties in infosec.

Re-situating approaches

This is when some other science understands something, and we generalise from that instance and try to make a concrete application to security. We might argue that program verification takes this approach, re-situating understanding from mathematics and logic. Studies on keystroke dynamics also re-situate the understanding of human psychology and physical forensics.

Borrowing methods

We might study a security phenomenon according to the methods of an established discipline. Usable security largely applies psychology- and sociology-based methods, for example. Of course, there are specific challenges that might arise in studying a new area such as security (Krol et al., 2016), but the approach is science for security because the challenges result in minor tweaks to the method of the home discipline.

Continue reading Science “of” or “for” security?

The Government published its draft domestic abuse bill, but risks ignoring the growing threat of tech abuse

Dr Leonie Tanczer, who leads UCL’s “Gender and IoT” research team, reflects on the release of the draft Domestic Abuse Bill and points out that in its current form, it misses emphasis on emerging forms of technology-facilitated abuse.

On the 21st of January, the UK Government published its long-awaited draft Domestic Abuse Bill. The 196-page document covers a wide range of issues, from providing a first statutory definition of domestic abuse to the recognition of economic abuse as well as controlling and coercive non-physical behaviour. In recent years, abuse facilitated through information and communication technologies (ICT) has been growing. Efforts to mitigate these forms of abuse (e.g. social media abuse or cyberstalking) are already underway, but we expect new forms of “technology-facilitated abuse” (“tech abuse”) to become more commonplace amongst perpetrators.

We are currently seeing an explosion in the number of Internet-connected devices on the market, from gadgets like Amazon’s Alexa and Google’s Home hub, to “smart” home heating, lighting, and security systems as well as wearable devices such as smartwatches. What these products have in common is their networked capability, and many also include features such as remote, video, and voice control as well as GPS location tracking. While these capabilities are intended to make modern life easier, they also create new means to facilitate psychological, physical, sexual, economic, and emotional abuse as well as controlling and manipulating behaviour.

Although so-called “Internet of Things” (IoT) usage is not yet widespread (there were 7.5 billion total connections worldwide in 2017), GSMA expects there to be 25 billion devices globally by 2025. Sadly, we have already started to see examples of these technologies being misused. An investigation last year by the New York Times showed how perpetrators of domestic abuse could use apps on their smartphones to remotely control household appliances like air conditioning or digital locks in order to monitor and frighten their victims. In 2018, we saw a husband convicted of stalking after spying on his estranged wife by hacking into their wall-mounted iPad.

The risk of being a victim of tech abuse falls predominantly on women, and especially migrant women. This is a result of men still being primarily in charge of the purchase and maintenance of technical systems, as well as women and girls being disproportionately affected by domestic abuse.

The absence of ‘tech abuse’ in the draft bill

While the four objectives of the draft Bill (promote awareness, protect and support, transform the justice process, improve performance) are to be welcomed, the absence of sufficient reference to the growing rise of tech abuse is a significant omission and missed opportunity.

Continue reading The Government published its draft domestic abuse bill, but risks ignoring the growing threat of tech abuse

TESSERACT’s evaluation framework and its use of MaMaDroid

In this blog post, we will describe and comment on TESSERACT, a system introduced in a paper to appear at USENIX Security 2019, and previously published as a pre-print. TESSERACT is a publicly available framework for the evaluation and comparison of systems based on statistical classifiers, with a particular focus on Android malware classification. The authors used DREBIN and our MaMaDroid paper as examples in this evaluation, because these are two of the most important state-of-the-art papers, tackling the challenge from different angles, using different models and different machine learning algorithms. Moreover, DREBIN has already been reproduced by other researchers even though its code is no longer available, while MaMaDroid’s code is publicly available (the parsed data and the list of samples are available upon request). I am one of MaMaDroid’s authors, and I am particularly interested in projects like TESSERACT. Therefore, I will go through this interesting framework and attempt to clarify a few misinterpretations made by the authors about MaMaDroid.

The need for evaluation frameworks

The information security community, and in particular the systems part of it, often feels that papers are rejected based on questionable decisions, or, conversely, that papers should be more rigorous and respect certain important characteristics. Researchers from Dutch universities published a survey of papers published at top venues in 2010 and 2015, in which they evaluated whether these works committed “crimes” affecting the completeness, relevancy, soundness, and reproducibility of the work. They showed that the more recent publications exhibit more flaws. Even though the authors included their own works among those analysed, and did not word the paper as a wall of shame pointing the finger at specific articles, the paper has been seen as an attack on the community rather than an encouragement to produce more complete papers. To the best of my knowledge, unfortunately, the paper has not yet been accepted for publication. TESSERACT is another example of researchers’ efforts to make the community’s work more rigorous: most systems papers report accuracies close to 100% across all their tests; however, when some of them have been tested on different datasets, their accuracy was worse than a coin toss.
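
One of the pitfalls TESSERACT highlights is, to my understanding, experimental bias from the way datasets are split: testing a malware classifier on samples drawn from the same time period as its training data inflates the reported accuracy. Below is a minimal sketch of a temporally consistent evaluation; TESSERACT’s actual methodology is considerably richer (it also constrains class balance), so treat this purely as an illustration of the idea.

```python
# Minimal sketch of a temporally consistent evaluation of a malware
# classifier: train only on apps first seen before a cut-off date and
# test on later apps, rather than shuffling samples from all years
# together. TESSERACT's methodology is considerably richer than this.
from datetime import datetime
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def temporal_split(samples, cutoff):
    """samples: iterable of (feature_vector, label, first_seen_datetime)."""
    train = [(x, y) for x, y, t in samples if t < cutoff]
    test = [(x, y) for x, y, t in samples if t >= cutoff]
    return train, test

def evaluate(samples, cutoff=datetime(2015, 1, 1)):
    train, test = temporal_split(samples, cutoff)
    X_tr, y_tr = map(list, zip(*train))
    X_te, y_te = map(list, zip(*test))
    clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te))
```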

These two works are part of a trend that I personally find important for our community: allowing works that chronologically follow others to be evaluated more fairly. Let me explain with a personal example: I recall my supervisor telling me that, at the beginning, he was not optimistic about MaMaDroid being accepted at the first attempt (NDSS 2017), because most of the previous literature reports results of over 98% accuracy, and a gap of a few percentage points can be enough for some reviewers to reject a paper. When we asked a colleague for an opinion on the paper, before submitting it for peer review, this was his comment on the ML part: “I actually think the ML part is super solid, and I’ve never seen a paper with so many experiments on this topic.” We can see completely different reactions to the same specific part of the work.

TESSERACT

The goal of this post is to show TESSERACT’s potential while pointing out the small misinterpretations of MaMaDroid present in the current version of the paper. The authors contacted us to let us read the paper and check whether there had been any misinterpretation, and I had a constructive meeting with them where we also had the opportunity to exchange opinions on the work. After the description of TESSERACT, there is a section on the misinterpretations of MaMaDroid in the paper. The authors told me that future versions would be updated according to what we discussed.

Continue reading TESSERACT’s evaluation framework and its use of MaMaDroid

Introducing Sonic: A Practical zk-SNARK with a Nearly Trustless Setup

In this post, we discuss a new zk-SNARK, Sonic, developed by Mary Maller, Sean Bowe, Markulf Kohlweiss and Sarah Meiklejohn. Unlike other SNARKs, Sonic does not require a trusted setup for each circuit, but only a single setup for all circuits. Further, the setup for Sonic never has to end, so it can be continuously secured by accumulating more contributions. This property makes it ideal for any system where there is no trusted party and there is a need to validate data without leaking confidential information. For example, a company might wish to show solvency to an auditor without revealing which products they have invested in. The construction is highly practical.

More about zk-SNARKs

Like all other zero-knowledge proofs, zk-SNARKs are a tool used to build applications where users must prove the validity of their data, such as in verifiable computation or anonymous credentials. Additionally, zk-SNARKs have the smallest proof sizes and verifier time of all known techniques for building zero-knowledge proofs. However, they typically require a trusted setup process, introducing the possibility of fraudulent data being input by the actors that implemented the system. For example, Zcash uses zk-SNARKs to send private cryptocurrency transactions, and if their setup was compromised then a small number of users could generate an unlimited supply of currency without detection.

Characteristics of zk-SNARKs
🙂 Can be used to build many cryptographic protocols
🙂 Very small proof sizes
🙂 Very fast verifier time
😐 Average prover time
☹️ Requires a trusted setup
☹️ Security assumes non-standard cryptographic assumptions

In 2018, Groth et al. introduced a zk-SNARK that could be built from an updatable and universal setup. We describe these properties below and claim that they help mitigate the security concerns around trusted setup. However, unlike Sonic, Groth et al.’s setup outputs a large set of global parameters (on the order of terabytes), which would be unwieldy to store, update and verify.

Updatability

Updatability means that any user, at any time, can update the parameters, including after the system goes live. After a single honest user has participated, no party can prove fraudulent data. This property means that a distrustful user could update the parameters themselves and have personal confidence in the parameters from that point forward. The update proofs are short and quick to verify.
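
To make the idea concrete, here is a toy sketch of how an updatable structured reference string can be re-randomised by successive contributors, so that the underlying trapdoor stays secret as long as at least one contributor discards their share. It is only an illustration: real schemes such as Sonic work over pairing-friendly elliptic curves and publish short proofs that each update was performed correctly, none of which is modelled here.

```python
# Toy illustration of an updatable structured reference string (SRS):
# each contributor re-randomises the parameters with a fresh secret, and
# as long as one contributor destroys their secret, nobody knows the
# combined trapdoor. Real schemes use pairing-friendly elliptic curves
# and publish short proofs that each update was applied correctly.
import secrets

p = 2039    # tiny safe prime, p = 2q + 1 (illustration only)
q = 1019
g = 4       # generator of the order-q subgroup
DEGREE = 8  # the SRS supports polynomials up to this degree

def initial_srs():
    s = secrets.randbelow(q - 1) + 1        # initial trapdoor
    return [pow(g, pow(s, i, q), p) for i in range(DEGREE + 1)]

def update_srs(srs):
    """A new participant mixes in their own secret r, then discards it."""
    r = secrets.randbelow(q - 1) + 1
    return [pow(elem, pow(r, i, q), p) for i, elem in enumerate(srs)]

srs = initial_srs()
for _ in range(3):      # three independent contributors update in turn
    srs = update_srs(srs)
```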

Universality

Universality means that the same parameters can be used for any application using this zk-SNARK. Thus one can imagine including the global parameters in an open-source implementation, or one could use the same parameters for all smart contracts in Ethereum.

Why Use Sonic?

Sonic is universal, updatable, and has a small set of global parameters (on the order of megabytes). Proof sizes are small (256 bytes) and verifier time is competitive with the fastest zk-SNARKs in the literature. It is especially well suited to systems where the same zk-SNARK is run by many different provers and verified by many different parties. This is exactly the situation for many blockchain systems.

Continue reading Introducing Sonic: A Practical zk-SNARK with a Nearly Trustless Setup

Protecting human rights by avoiding regulatory capture within surveillance oversight

Regulation is in the news again as a result of the Home Office blocking surveillance expert Eric Kind from taking up his role as Head of Investigation at the Investigatory Powers Commissioner’s Office (IPCO) – the newly created agency responsible for regulating organisations managing surveillance, including the Home Office. Ordinarily, it would be unheard of for a regulated organisation to be able to veto the appointment of staff to its regulator, particularly one established through statute as being independent. However, the Home Office was able to do so here by refusing to issue the security clearance required for Kind to do his job. The Investigatory Powers Commissioner therefore cannot override this decision, the Home Office does not have to explain its reasoning, and there is no appeal process.

Behaviour like this can lead to regulatory capture – where the influence of the regulated organisation changes the effect of regulation to direct away from the public interest and toward the interests of the organisations being regulated. The mechanism of blocking security clearances is specific to activities relating to the military and intelligence, but the phenomenon of regulatory capture is more widespread. Consequently, regulatory capture has been well studied, and there’s a body of work describing tried and tested ways to resist it. If the organisations responsible for surveillance regulation were to apply these recommendations, it would improve both the privacy of the public and the trust in agencies carrying out surveillance. When we combine these techniques with advanced cryptography, we can do better still.

Regulatory capture is also a problem in finance – likely contributing to high-profile scandals like Libor manipulation and payment-protection-insurance mis-selling. In previous articles, we’ve discussed how regulators’ sluggish response to new fraud techniques has led to their victims unfairly footing the bill. Such behaviour by regulators is rarely the result of clear corruption – regulatory capture is often more subtle. For example, the skills needed by the regulator may only be available by hiring staff from the regulated organisations, bringing their culture and mindset along with them. Regulators’ staff often find career opportunities within the regulator limited, and so are reluctant to take a hard line against the regulated organisation, which would close off the option of getting a job there later – likely at a much higher salary. Regulatory capture resulting from the sharing of staff and their corresponding culture is, I think, a key reason for surveillance oversight bodies having insufficient regard for the public interest.

Continue reading Protecting human rights by avoiding regulatory capture within surveillance oversight

Memes are taking the alt-right’s message of hate mainstream

Unless you live under the proverbial rock, you have surely come across Internet memes a few times. Memes are basically viral images, videos, slogans, etc., which might morph and evolve but eventually enter popular culture. When thinking about memes, most people associate them with ironic or irreverent images, from Bad Luck Brian to classics like Grumpy Cat.

Bad Luck Brian (left) and Grumpy Cat (right) memes.

Unfortunately, not all memes are funny. Some might even look as innocuous as a frog but are in fact well-known symbols of hate. Ever since the 2016 US Presidential Election, memes have been increasingly associated with politics.

Pepe The Frog meme used in a Brexit-related context (left), Trump as Perseus beheading Hillary as Medusa (center), meme posted by Trump Jr. on Instagram (right).

But how exactly do memes originate, spread, and gain influence on mainstream media? To answer this question, our recent paper (“On the Origins of Memes by Means of Fringe Web Communities”) presents the largest scientific study of memes to date, using a dataset of 160 million images from various social networks. We show how “fringe” Web communities like 4chan’s “politically incorrect board” (/pol/) and certain “subreddits” like The_Donald are successful in generating and pushing a wide variety of racist, hateful, and politically charged memes.
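
Measuring this at the scale of 160 million images requires an automated way of deciding when two images are variants of the same meme. One standard technique for this is perceptual hashing, where visually similar images map to hashes that are close in Hamming distance. The sketch below, using the Pillow and imagehash libraries, illustrates that general technique rather than the paper’s actual pipeline; the threshold is an arbitrary choice for illustration.

```python
# Minimal sketch of grouping visually similar meme images with perceptual
# hashing: near-duplicate images map to hashes with a small Hamming
# distance. Requires the Pillow and imagehash libraries; the threshold is
# an arbitrary illustrative choice, not the value used in the study.
from PIL import Image
import imagehash

def phash(path):
    return imagehash.phash(Image.open(path))

def looks_like_same_meme(path_a, path_b, threshold=8):
    """Subtracting two ImageHash objects gives their Hamming distance."""
    return phash(path_a) - phash(path_b) <= threshold

# Example: looks_like_same_meme("pepe_variant_1.png", "pepe_variant_2.png")
```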

Continue reading Memes are taking the alt-right’s message of hate mainstream