Efficient Cryptographic Arguments and Proofs – Or How I Became a Fractional Monetary Unit

In 2008, unfortunate investors found their life savings in Bernie Madoff's hedge fund swindled away in a $65 billion Ponzi scheme. Imagine yourself back in time, offered an opportunity to invest in his fund, which had delivered stable returns for years, and pondering Madoff's assurance that the fund was solvent and doing well. Unfortunately, neither Madoff nor any other hedge fund manager would take kindly to your suggestion of opening their books to demonstrate the veracity of the claim. And even if you somehow got access to all the internal data, it might take an inordinate effort to go through the documents.

Modern-day computers share your predicament. When a computer receives the result of a computation from another machine, it can be critical that the result is correct. If the computer had feelings, it would wish for the result to come with evidence of correctness attached. But the sender may not wish to reveal confidential or private information used in the computation. And even if the sender is willing to share everything, the cost of recomputation can be prohibitive.

In 1985, Goldwasser, Micali and Rackoff proposed zero-knowledge proofs as a means of giving privacy-preserving evidence. Zero-knowledge proofs are convincing only if the statement they prove is true, e.g., that a computation is correct, yet they reveal no information except for the veracity of the statement. Their seminal work shows that verification is possible without having to sacrifice privacy.

In the following three decades, cryptographers have worked tirelessly at reducing the cost of zero-knowledge proofs. Six years ago, we began the ERC-funded project Efficient Cryptographic Arguments and Proofs, aimed at improving the efficiency of zero-knowledge proofs. In September 2018 the project came to its conclusion, and, throwing usual academic modesty aside, we have made remarkable progress: several of our proof systems are provably optimal (up to a constant multiplicative factor).

As described in an earlier post, we improved the efficiency of generalised Sigma-protocols, reducing both the number of rounds in which the prover and verifier interact and the amount of communication, with a proof size of around 7 kB even for large and complex statements. Our proof techniques have been optimised and implemented in the Bulletproof system, which is now seeing widespread adoption.
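
To give a flavour of what a Sigma-protocol looks like, below is a minimal sketch of the classic Schnorr protocol for proving knowledge of a discrete logarithm – the textbook example of a Sigma-protocol, not our generalised constructions or Bulletproofs themselves – over a toy group far too small to be secure:

    import secrets

    # Toy parameters: g generates a subgroup of prime order q in Z_p^*.
    # Real deployments use groups of at least 256-bit order; these values
    # are for illustration only.
    p, q, g = 23, 11, 2          # 2 has multiplicative order 11 modulo 23

    x = secrets.randbelow(q)     # prover's secret witness
    h = pow(g, x, p)             # public statement: "I know x such that g^x = h"

    # Move 1 (commit): prover sends a = g^r for fresh random r.
    r = secrets.randbelow(q)
    a = pow(g, r, p)

    # Move 2 (challenge): verifier sends a random challenge e.
    e = secrets.randbelow(q)

    # Move 3 (response): prover sends z = r + e*x mod q.
    z = (r + e * x) % q

    # Verification: g^z == a * h^e (mod p).
    assert pow(g, z, p) == (a * pow(h, e, p)) % p
    print("proof accepted")

This three-move commit–challenge–response shape is what defines a Sigma-protocol; the generalised protocols developed in the project prove far richer statements while keeping the communication compact.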

We also developed highly efficient pairing-based non-interactive zero-knowledge proofs (aka zk-SNARKs). Here the communication cost is even lower in practice, enabling proofs to be just a few hundred bytes regardless of the size of the statement being proved. Their compactness and ease of verification make them useful in privacy-preserving cryptocurrencies and blockchain compression.

Continue reading Efficient Cryptographic Arguments and Proofs – Or How I Became a Fractional Monetary Unit

Can Games Fix What’s Wrong with Computer Security Education?

We had the pleasure of Zachary Peterson visiting UCL on a Cyber Security Fulbright Scholarship. The title of this post is taken from his presentation at our annual ACE-CSR event in November 2016.

Zachary Peterson is an associate professor of computer science at Cal Poly, San Luis Obispo. The key problem he is trying to solve is that the educational system is producing far fewer computer security professionals than are needed; an article he'd seen just two days before the ACE meeting noted a 73% rise in job vacancies in the last year despite a salary premium of 9% over other IT jobs. This information is backed up by the 2014 Taulbee survey, which found that the number of computer security PhDs has declined to 4% of the US total. Lack of diversity, which sees security dominated by white and Asian males, is a key contributing factor. Peterson believes that diversity is not only important as a matter of fairness, but essential because white males are increasingly a demographic minority in the US and because monocultures create perceptual blindness. New perspectives are especially needed in computer security as present approaches are not solving the problem on their own.

Peterson believes that the numbers are so bad because security is under-represented in both the computer science curriculum and in curriculum standards. The ACM 2013 curriculum guidelines recommend only three contact hours (also known as credit hours) in computer security in an entire undergraduate computer science degree. These are typically relegated to an upper-level elective class, and subject to a long chain of prerequisites, so they are only ever seen by a self-selected group who have survived years of attrition – which disproportionately affects women. The result is to create a limited number of specialists, unnecessarily constrain the student body, and limit the time students have to practice before joining the workforce. In addition, the self-selected group who do study security late in their academic careers have already developed set habits and a fixed mindset before encountering security as an engineering task. Changing security into a core competency and teaching it as early as secondary school is essential but has challenges: security can be hard, and pushing it to the forefront may worsen existing problems seen in computer science more broadly, such as the discipline's solitary, anti-social, creativity-deficient image.

Peterson believes games can help improve this situation. CTFTime, which tracks capture-the-flag events, reports a recent explosion in cyber security games, to over 56 events per year since 2013. These games, if done correctly, can teach core security skills in an entertaining – and social – way, with an element of competition. Strategic thinking, understanding an adversary's motivation, rule interpretation, and rule-breaking are essential for both game-playing and security engineering.

Continue reading Can Games Fix What’s Wrong with Computer Security Education?

Double-Fetch Situations Turn into Double-Fetch Vulnerabilities: A Study of Double Fetches in the Linux Kernel

We held our annual ACE-CSR event on 2 November 2016. This post and its title are drawn from the second talk, by Dr Jens Krinke, a senior lecturer in the Centre for Research on Evolution, Search and Testing, and the Software Systems Engineering group at UCL, who outlined work he conducted with Pengfei Wang that found flaws in the Linux kernel.

The work Krinke presented, completed over the last year, revolves around double-fetch problems, which got their name in 2008 from a study that found exploitable vulnerabilities in the Windows kernel. In an updated study in 2013, Mateusz "j00ru" Jurczyk and Gynvael Coldwind examined the Windows kernel and found 89 potential issues, 36 of which were highly exploitable. Most of their paper focused on how to exploit these bugs; today, if you find such a vulnerability, you can download an exploit from GitHub.

Many of these vulnerabilities have now been fixed, but Jurczyk and Coldwind's analysis was extremely slow, because they essentially simulated Windows over and over again, with the result that it took them 15 hours just to get to the boot screen. Because the analysis was dynamic, they were only able to cover a subset of the drivers present in the system, and they said outright that they knew there were other vulnerabilities present. Krinke was sure there were vulnerabilities of this type in Linux, but no one had audited the Linux kernel in detail.

Krinke and Wang wanted to audit Linux, looking at the source code directly and including all the drivers – which was essential, because, as it turned out, roughly 80% of the problems they found were in the drivers.

Double-fetch problems are memory access conflicts that arise in the interface between the operating system kernel and an application when data is changed between the time it is checked and the time it is actually used. It works this way: every process a user starts has its own virtual memory space that is isolated from other user memory spaces; only the kernel can see all memory spaces. However, there must be interaction, and this happens through system calls. So, a user selects some data and asks the system to operate on it. The data remains in the user space, but the system copies it into the kernel and checks that it is valid before using it. The upshot is that the kernel may fetch the data twice: once for checking and once for use. A malicious application that intervenes between those two fetches, changing the data after it has been checked but before it is used, can cause system crashes.
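
As a rough illustration (a Python simulation of the idea, not real kernel code – actual double fetches involve copy_from_user on shared memory pages, with a second thread racing the kernel), consider a "kernel" routine that reads a user-controlled length field twice; the callback stands in for the attacker winning the race between the two fetches:

    KERNEL_LIMIT = 64

    def syscall_handler(user_buf: bytearray, race=None) -> bytes:
        size = int.from_bytes(user_buf[:4], 'little')    # first fetch: check
        if size > KERNEL_LIMIT:
            raise ValueError("rejected: size exceeds limit")
        if race:
            race(user_buf)                               # attack window between the fetches
        size = int.from_bytes(user_buf[:4], 'little')    # second fetch: use
        return bytes(user_buf[4:4 + size])               # may copy more than was checked

    # Benign call: the 4-byte length field says 8, well under the limit.
    benign = bytearray((8).to_bytes(4, 'little') + b'A' * 200)
    assert len(syscall_handler(benign)) == 8

    # Malicious call: flip the length field after the check, before the use.
    def flip_size(buf: bytearray) -> None:
        buf[:4] = (200).to_bytes(4, 'little')

    evil = bytearray((8).to_bytes(4, 'little') + b'A' * 200)
    print(len(syscall_handler(evil, race=flip_size)))    # 200: the size check was bypassed

The standard fix is to fetch the data once into kernel memory and perform both the check and the use on that single copy.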

If a race like this arises in ordinary application code, it causes software errors that affect only that application; when it happens in the operating system, an attacker can take over the complete system.

Analysing this process is extremely hard, especially because exactly what's occurring is unknown beforehand. Krinke and Wang instead created a two-step analysis. First, a pattern-based double-fetch analysis built on Coccinelle examined the source code to find occurrences; these were then manually analysed to determine how the data was used and what the consequences were. The results were divided into three categories: size checking, type selection, and shallow copy. In the 40,000 source code files comprising Linux, they found 90 such situations, of which 57 were in drivers; five of them were true bugs. All five have been confirmed, and three are exploitable vulnerabilities. In checking back, Krinke and Wang found that some of these bugs are at least ten years old.
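
Coccinelle matches semantic patches against C source; as a much cruder stand-in for that first, pattern-based step, one can scan for functions that fetch from the same user-space pointer twice (a hypothetical Python sketch, far less precise than the real analysis):

    import re
    import sys

    # Flag any file that calls copy_from_user() twice on the same source pointer.
    # Matches are only *situations*; manual analysis must confirm actual bugs.
    FETCH = re.compile(r'copy_from_user\s*\([^,]+,\s*(\w+)')

    def scan(c_source: str) -> dict:
        offsets = {}
        for match in FETCH.finditer(c_source):
            offsets.setdefault(match.group(1), []).append(match.start())
        return {ptr: offs for ptr, offs in offsets.items() if len(offs) > 1}

    if __name__ == '__main__':
        for ptr, offs in scan(open(sys.argv[1]).read()).items():
            print(f"possible double fetch of '{ptr}' at byte offsets {offs}")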

Krinke and Wang also applied their analysis to Android (a variant of Linux) and FreeBSD. The latter, which is more security-oriented in design, had no problems. Android had only three bugs, but they also found a problem that had been fixed in Linux but not in Android; as that bug was in the file system, Android apps with native binary code that interacts with the phone at the file system level can exploit it. After studying these three operating systems with Coccinelle, they can say that, once these bugs have been fixed, all three are likely to be free of double-fetch vulnerabilities.

Inaugural Lecture: Zero-Knowledge Proofs

We held our annual ACE-CSR event in November 2016. The last talk was my inaugural lecture as full professor. I did not write the summary below myself; hence the use of the third person, not because I now consider myself royalty! 🙂

In introducing Jens Groth, professor of cryptology, George Danezis, head of the information security group, commented that Groth's work provided the only viable solutions to many of the hard privacy problems he himself was tackling. To most qualified engineers, he said, the concept of zero-knowledge proofs seems impossible: the idea is to show the properties of a secret without revealing them. A zero-knowledge proof could, for example, verify the result of a computation on some data without revealing the data itself. Most engineers believe that you must choose between integrity and confidentiality; Groth has proved this is not true. In addition, Danezis praised Groth's work as highly creative, characterised by great mathematical depth and subtlety, and admired Groth's willingness to speak his mind fearlessly, even to government funders. Angela Sasse, head of the department, called Groth's work "security tools we're going to need for future generations", and noted that simultaneously with these other accomplishments Groth helped put in place the foundation for the group as it is today.

Groth jokingly opted to structure his talk around papers he's had rejected, to illustrate how hard it can be to publish innovative research. The concept of zero-knowledge proofs originated with a 1985 paper by Shafi Goldwasser, Silvio Micali, and Charles Rackoff. Zero-knowledge proofs have three characteristics: completeness (the prover can convince the verifier that the statement is true); soundness (a cheating prover cannot convince the verifier when the statement is false); and secrecy (no information other than the truth of the statement is leaked, even when the prover is interacting with a verifier who cheats). Groth illustrated the last idea with a simple card trick: he asked an audience member to choose a card and say only whether or not it was a heart. If the respondent then shows all the cards remaining in the deck that are not hearts, counting them proves that the selected card must be a heart without revealing which heart it is.
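
The card trick translates directly into a few lines of code (a toy simulation, assuming a standard 52-card deck):

    import random

    SUITS = ['hearts', 'spades', 'diamonds', 'clubs']
    deck = [(rank, suit) for suit in SUITS for rank in range(1, 14)]

    # The prover's secret: the chosen card, which happens to be a heart.
    chosen = random.choice([card for card in deck if card[1] == 'hearts'])

    # The proof: reveal every card left in the deck that is not a heart.
    revealed = [card for card in deck if card != chosen and card[1] != 'hearts']

    # The verifier's check: a full deck contains exactly 39 non-hearts. If all 39
    # are shown and none of them is the chosen card, the chosen card must be a
    # heart; which heart it is has not leaked.
    assert len(revealed) == 39
    print('verified: the hidden card is a heart')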

Zero-knowledge proofs extend to more complicated statements. Groth listed some examples:

  • A logical formula has an assignment to its variables that makes it true
  • A graph is Hamiltonian – that is, there is a path that touches each vertex exactly once
  • A set of inputs to a Boolean circuit produces an output of 1 (made concrete in the sketch after this list)
  • Any statement of the general form: U belongs to some NP-language L
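
To make the circuit example concrete, here is the underlying NP relation in code (the circuit itself is a hypothetical example): the prover claims that some assignment makes the circuit output 1, i.e. True, and a zero-knowledge proof would convince the verifier of this without revealing the assignment.

    # A hypothetical three-input circuit: (x1 AND x2) OR (NOT x3).
    def circuit(x1: bool, x2: bool, x3: bool) -> bool:
        return (x1 and x2) or (not x3)

    witness = (True, True, False)    # the prover's secret satisfying assignment
    assert circuit(*witness)         # the relation a zero-knowledge proof attests to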

Groth could see many possible applications for these proofs: signatures, encryption, electronic cash, electronic auctions, internet voting, multiparty computation, and verifiable cloud computing. His career has focused on building versatile and efficient proofs, with the goal of reducing their cost from expensive and slow to a small fraction of the cost of the task being executed, so that people stop thinking about the cost and simply include them as a standard part of any transaction.

Continue reading Inaugural Lecture: Zero-Knowledge Proofs

ACE-CSR opening event 2015/16: talks on malware, location privacy and wiretap law

The opening event for the UCL Academic Centre of Excellence for Cyber Security Research in the 2015–2016 academic term featured three speakers: Earl Barr, whose work on approximating program equivalence has won several ACM distinguished paper awards; Mirco Musolesi from the Department of Geography, whose background includes a degree in computer science and an interest in analysing myriad types of data while protecting privacy; and Susan Landau, a professor at Worcester Polytechnic Institute, a visiting professor at UCL, and an expert on cyber security policy whose books include Privacy on the Line: The Politics of Wiretapping and Encryption (with Whitfield Diffie) and Surveillance or Security? The Risks Posed by New Wiretapping Technologies.

Detecting malware and IP theft through program similarity

Earl Barr is a member of the software systems engineering group and the Centre for Research on Evolution, Search, and Testing. His talk outlined his work using program similarity to determine whether two arbitrary programs have the same behaviour in two areas relevant to cyber security: malware and intellectual property theft in binaries (that is, code reused in violation of its licence).

Barr began by outlining his work on detecting malware, comparing the problem to that facing airport security personnel trying to find a terrorist among millions of passengers. The work begins with profiling: collect two zoos – one of benign programs, one of malware – and then ask whether the program under consideration is more likely to belong to the benign zoo or the malware zoo.

Rather than study the structure of the binary, Barr works by viewing the program as a string of 0s and 1s, which may not coincide with the program's instructions, and using information theory to create a measure of dissimilarity: the normalised compression distance (NCD). The NCD approximates Kolmogorov complexity – a mathematical measure of the length of the shortest description of an object – by substituting a real compression algorithm, which conveniently ignores the details of the instruction set architecture for which the binary is written.
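
Concretely, the NCD of two byte strings x and y is (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C denotes compressed length. Here is a minimal sketch using zlib (the actual work may use a different compressor and preprocessing):

    import os
    import zlib

    def compressed_len(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalised compression distance: near 0 for similar inputs, near 1 for unrelated ones."""
        cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    original = b'\x55\x48\x89\xe5' * 400           # a repetitive stand-in for a binary
    variant = original[:-4] + b'\x90\x90\x90\x90'  # a lightly modified copy
    unrelated = os.urandom(1600)

    print(round(ncd(original, variant), 3))        # close to 0: very similar
    print(round(ncd(original, unrelated), 3))      # close to 1: dissimilar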

Using these techniques to analyse a malware zoo collected from sources such as Virus Watch, Barr was able to achieve a 95.7% accuracy rate. He believes that although this technique isn’t suitable for contemporary desktop anti-virus software, it opens a new front in the malware detection arms race. Still, Barr is aware that malware writers will rapidly develop countermeasures and his group is already investigating counter-countermeasures.

Malware writers have three avenues for blocking detection: injecting new content that looks benign; encryption; and obfuscation. Adding new content threatens the malware’s viability: raising the NCD by 50% requires doubling the size of the malware. Encryption can be used against the malware writer: applying a language model across the program reveals a distinctive saw-toothed pattern of regions with low surprise and low entropy alternating with regions of high surprise and high entropy (that is, regions with ciphertext). Obfuscation is still under study: the group is using three obfuscation engines available for Java and applying them repeatedly to Java malware. Measuring the NCD after each application shows that after 100 iterations the NCD approaches 1 (that is, the two items being compared are dissimilar), but that two of the three engines make errors after 200 applications. Unfortunately for malware writers, this technique also causes the program to grow in size. The cost of obfuscation to malware writers may therefore be greater than that imposed upon white hats.
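
The saw-tooth is easy to reproduce: slide a window across the binary and compute the Shannon entropy of each window. (The talk mentions a language model; plain byte entropy is a simpler stand-in for the same idea.)

    import math
    from collections import Counter

    def shannon_entropy(window: bytes) -> float:
        """Bits per byte: close to 8 for ciphertext or compressed data, lower for ordinary code."""
        n = len(window)
        return -sum((c / n) * math.log2(c / n) for c in Counter(window).values())

    def entropy_profile(data: bytes, window: int = 256, step: int = 64) -> list:
        return [shannon_entropy(data[i:i + window])
                for i in range(0, len(data) - window + 1, step)]

    # Windows whose entropy jumps towards 8 bits/byte are candidate encrypted regions.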

Continue reading ACE-CSR opening event 2015/16: talks on malware, location privacy and wiretap law

Program obfuscation

I had the pleasure of visiting the Simons Institute for the Theory of Computing at UC Berkeley over the summer. One of the main themes of the programme was obfuscation.

Recently there has been a lot of exciting research on developing cryptographic techniques for program obfuscation. Obfuscation is of course not a new thing; you may already be familiar with the Obfuscated C Contest. But the hope underlying this research effort is to replace ad hoc obfuscation, which may or may not be possible to reverse engineer, with general techniques that can be applied to obfuscate any program and that satisfy a rigorous definition of what it means for a program to be obfuscated.

Even defining what it means for a program to be obfuscated is not trivial. Ideally, we would like an obfuscator to be a compiler that turns any computer program into a virtual black-box. By a virtual black-box we mean that the obfuscated program should preserve functionality while not leaking any other information about the original code, i.e., it should have the same input-output behaviour as the original program and you should not be able to learn anything beyond what you could learn by executing the original program. Unfortunately it turns out that this is too ambitious a goal: there are special programs that are impossible to virtual black-box obfuscate.

Instead cryptographers have been working towards developing something called indistinguishability obfuscation. Here the goal is that given two functionally equivalent programs P1 and P2 of the same size and an obfuscation of one of them O(Pi), it should not be possible to tell which one has been obfuscated. Interestingly, even though this is a weaker notion than virtual black-box obfuscation it has already found many applications in the construction of cryptographic schemes. Furthermore, it has been proved that indistinguishability obfuscation is the best possible obfuscation in the sense that any information leaked by an obfuscated program is also leaked by any other program computing the same functionality.
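
In standard notation, the requirement is that for every efficient distinguisher A and every pair of functionally equivalent, equal-size programs P1 and P2,

    \left| \Pr[A(O(P_1)) = 1] - \Pr[A(O(P_2)) = 1] \right| \leq \mathrm{negl}(\lambda),

where λ is the security parameter and negl denotes a negligible function.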

So, when will you see obfuscation algorithms that you can use to obfuscate your code? Well, current obfuscation algorithms have horrible efficiency and are far from practical applicability, so there is still a lot of research to be done on improving them. Moreover, current obfuscation proposals are based on something called graded encoding schemes. At the moment, there is a tug of war going on between cryptographers proposing graded encoding schemes and cryptanalysts breaking them. Breaking graded encoding schemes does not necessarily break the obfuscation algorithms building on them, but it is fair to say that right now the situation is a mess – which, from a researcher's perspective, makes the field very exciting, since there is still a lot to discover!

If you want to learn more about obfuscation, I recommend watching some of the excellent talks from the programme.

One-out-of-Many Proofs: Or How to Leak a Secret and Spend a Coin

I’m going to EUROCRYPT 2015 to present a new zero-knowledge proof that I’ve developed together with Markulf Kohlweiss from Microsoft Research. Zero-knowledge proofs enable you to demonstrate that a particular statement is true without revealing anything other than the fact that it is true. In our case the statements are one-out-of-many statements – intuitively, that out of a number of items, one of them has a special property – and we greatly reduce the size of the proofs compared to previous work in the area. Two applications where one-out-of-many proofs come in handy are ring signatures and Zerocoin.

Ring signatures can be used to sign a message anonymously as a member of a group of people, i.e., all a ring signature says is that somebody from the group signed the message, but not who it was. Consider for instance a whistleblower who wants to leak that her company is dumping dangerous chemicals in the ocean, yet wants to remain anonymous due to the risk of being fired. By using a ring signature she can demonstrate that she works for the company, which makes the claim more convincing, without revealing which employee she is. Our one-out-of-many proofs can be used to construct very efficient ring signatures by giving a one-out-of-many proof that the signer holds a secret key corresponding to a public key of one of the people in the ring.

Zerocoin is a new virtual currency proposal where coins gain value once they’ve been accepted on a public bulletin board. Each coin contains a commitment to a secret random serial number that only the owner knows. To anonymously spend a coin the owner publishes the serial number and gives a one-out-of-many proof that the serial number corresponds to one of the public coins. The serial number prevents double spending of a coin; nobody will accept a transaction with a previously used serial number. The zero-knowledge property of the one-out-of-many proof provides anonymity; it is not disclosed which coin the serial number corresponds to. Zerocoin has been suggested as a privacy enhancing add-on to Bitcoin.
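
To see the shape of the statement being proved, here is a sketch of the spend relation using a toy Pedersen commitment over an insecurely small group (all parameters are illustrative; the actual contribution is the zero-knowledge proof that this relation holds for some unrevealed index i):

    import secrets

    # Toy Pedersen commitment Com(m; r) = g^m * h^r mod p, in a subgroup of prime
    # order q. Illustrative only: in practice h must be chosen so that nobody
    # knows its discrete logarithm base g, and the group must be large.
    p, q, g, h = 23, 11, 2, 3

    def commit(m: int, r: int) -> int:
        return (pow(g, m % q, p) * pow(h, r % q, p)) % p

    # The public bulletin board: each coin commits to a secret serial number.
    coins, owner_secrets = [], {}
    for i in range(8):
        serial, rand = secrets.randbelow(q), secrets.randbelow(q)
        coins.append(commit(serial, rand))
        owner_secrets[i] = (serial, rand)

    # Spending coin i: the owner publishes its serial number and proves in
    # zero-knowledge that *some* coin on the board commits to it, without
    # revealing which one. Below is only the relation such a proof attests to.
    i = 5
    serial, rand = owner_secrets[i]
    assert coins[i] == commit(serial, rand)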

The full research paper is available on the Cryptology ePrint Archive.