What the CIA hack and leak teaches us about the bankruptcy of current “Cyber” doctrines

Wikileaks just published a trove of documents resulting from a hack of the CIA Engineering Development Group, the part of the spying agency that is in charge of developing hacking tools. The documents seem genuine and catalog, among other things, a number of exploits against widely deployed commodity devices and systems, including Android, iPhone, OS X and Windows, as well as smart TVs. This hack, with appropriate background, teaches us a lesson or two about the direction of public policy related to “cyber” in the US and the UK.

Routine proliferation of weaponry and tactics

The CIA hack is in many ways extraordinary, in that it allowed the attackers to gain access to the source code of the hacking tools of the agency – a remarkable act of proliferation of attack technologies. In other ways, it is mundane: it is neither the first nor, probably, the last catastrophic hack or leak to hit a US/UK government department in charge of offensive cyber operations.

The growing list of leaks of government attack technologies illustrates that, when it comes to cyber-weaponry, the risk of proliferation is not merely theoretical but very real. In fact, it seems to be happening all the time.

I find it particularly amusing – and those in charge of these agencies should probably find it embarrassing – that the NSA and GCHQ go around presenting themselves as national technical authorities in assurance: they provide advice to others on how not to get hacked; they keep asserting that they can be trusted to operate extremely dangerous spying infrastructures and to handle, in secret, extremely dangerous zero-day exploits. Yet they seem to be routinely hacked and to have their secret documents leaked. Instead of chasing whistleblowers and journalists, policy makers should take note: no available level of assurance suffices to secure cyber-weaponry, and it is certainly not to be found within those agencies.

In fact, the risk of proliferation is at the very heart of cyber attack, and integral to it, even without hacking or leaking from inside government. Many of us quietly laughed at the bureaucratic nightmare described in the recent CIA leak: the difficulty of keeping cyber attack techniques classified while at the same time deploying them on target systems. As the press release summarizes:

To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet. If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet. Consequently the CIA has secretly made most of its cyber spying/war code unclassified.

This illustrates very clearly a key dynamic in hacking: once a hacker uses an exploit against an adversary’s system, there is a very real risk that the exploit is captured by the monitoring and intrusion-detection systems of the target, and then weaponized to hack other computers at low cost. This is well established and well researched: “honeypot” infrastructures have been used in the academic and commercial communities for some time to detect and study potentially new attacks. Nor is this the preserve of sophisticated defenders – the explanation of how honeypots work is on Wikipedia! The Flame malware, and Stuxnet before it, were in fact found in the wild.
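
To see how cheap this capture is, consider a minimal sketch of a low-interaction honeypot – illustrative code only, not a production tool; the port and file names are invented for the example. It simply records every payload thrown at it, ready for later analysis or reuse.

```python
# A minimal sketch of a low-interaction honeypot: a TCP listener that
# records whatever payloads attackers send, so any exploit used against
# it is captured for later study. Port and paths are illustrative.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222   # illustrative: a port posing as SSH
LOG_FILE = "honeypot.log"      # captured payloads end up here

def run_honeypot():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(5.0)
                try:
                    payload = conn.recv(65536)  # capture the first blob sent
                except socket.timeout:
                    payload = b""
            # Log the raw bytes: this is the "captured exploit" that a
            # defender can replay, analyse, or indeed re-weaponize.
            with open(LOG_FILE, "a") as log:
                log.write("%s %s:%d %r\n" % (
                    datetime.datetime.utcnow().isoformat(),
                    addr[0], addr[1], payload))

if __name__ == "__main__":
    run_honeypot()
```

Anything an attacker sends to such a listener – including a precious zero-day exploit – is immediately in the defender’s hands.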

In that respect cyber-war is not like war at all. The weapons you use will be turned against you immediately, and your effective use of weapons relies on your very own infrastructures being utterly vulnerable to them.

What “Cyber” doctrine?

The constant leaks and hacks, leading to the proliferation of exploits and hacking tools from the heart of government, as well as through operations, should deeply inform policy makers when making choices about “cyber” doctrines. First, it is probably time to ditch the awkward term “Cyber”.


Smart contracts beyond the age of innocence

Why have Bitcoin, with its distributed consistent ledger, and now Ethereum, with its support for fully fledged “smart contracts,” captured the imagination of so many people, both within and beyond the tech industry? The promise is to replace obscure stores of information and arcane contract rules – with their inefficient, ambiguous, and primitive human interpretations – with publicly visible decentralized ledgers; this reflects the growing technological zeitgeist, in its guarantee that all participants would know, and be able to foresee, the consequences of both their own actions and the actions of all others. The precise specification of contracts as code, with clauses automatically executed depending on certain sets of events and permissible user actions, represents for some a true state of utopia.

Regardless of one’s views on the potential for distributed ledgers, one of the most notable innovations that smart contracts have enabled thus far is the idea of a DAO (Decentralized Autonomous Organization), which is a specific type of investment contract, by which members individually contribute value that then gets collectively invested under some governance model.  In truly transparent fashion, the details of this governance model, including who can vote and how many votes are required for a successful proposal, are all encoded in a smart contract that is published (and thus globally visible) on the distributed ledger.
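
As an illustration – in Python rather than the Solidity of real contracts, with all names invented for this sketch – the essence of such a governance rule fits in a few lines:

```python
# Illustrative sketch of the kind of governance rule a DAO encodes:
# members hold voting weight proportional to their contribution, and a
# proposal passes only once a quorum of weighted votes approves it.
class ToyDAO:
    def __init__(self, quorum_fraction=0.5):
        self.balances = {}          # member -> contributed value
        self.votes = {}             # proposal id -> set of members in favour
        self.quorum_fraction = quorum_fraction

    def contribute(self, member, amount):
        self.balances[member] = self.balances.get(member, 0) + amount

    def vote(self, member, proposal):
        if member in self.balances:     # only contributors may vote
            self.votes.setdefault(proposal, set()).add(member)

    def approved(self, proposal):
        total = sum(self.balances.values())
        in_favour = sum(self.balances[m]
                        for m in self.votes.get(proposal, set()))
        return total > 0 and in_favour / total >= self.quorum_fraction

dao = ToyDAO()
dao.contribute("alice", 60)
dao.contribute("bob", 40)
dao.vote("alice", "proposal-1")
print(dao.approved("proposal-1"))   # True: 60% of the weight is in favour
```

The point is not the toy code itself, but that every rule of the organization is explicit and mechanically enforced, visible to all participants on the ledger.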

Today, this vision met a serious stumbling block: a “bug” in the contract of the first majorly successful DAO (which broke records by raising 11 million ether, the equivalent of 150 million USD, in its first two weeks of operation) allowed third parties to start draining its funds, and eventually to make off with 4% of all ether. The immediate response of the Ethereum and DAO community was to suspend activity – seemingly anathema for a ledger designed to provide high resiliency and availability – and to propose two potential solutions: a “soft-fork” that would impose additional rules on miners in order to exclude all future transactions that try to use the stolen ether, or, more drastically (and running directly contrary to the immutability of the ledger), a “hard-fork” that would roll back the transactions in which the attack took place, along with the many legitimate transactions that took place concurrently. Interestingly, a variant of the bug that enabled the hack was known to, and dismissed by, the creators of the DAO (and the wider Ethereum community).
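
The flaw was of the re-entrancy family: the contract sent funds out before updating its internal ledger, so a malicious recipient whose own code runs during the transfer could call back into the withdrawal routine. The following Python sketch mimics the pattern – it is an illustration of the bug class, not the actual Solidity code, and all names are invented:

```python
# A Python mimicry of a re-entrancy flaw. The bug: funds are sent
# *before* the internal balance is updated, so a malicious recipient
# whose code runs during the send can call withdraw() again and again.
class VulnerableVault:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0:
            who.receive(amount, self)   # external call FIRST...
            self.balances[who] = 0      # ...state update only AFTER

class Attacker:
    def __init__(self):
        self.loot, self.depth = 0, 0

    def receive(self, amount, vault):
        self.loot += amount
        if self.depth < 3:              # re-enter a few more times
            self.depth += 1
            vault.withdraw(self)        # balance not yet zeroed!

vault = VulnerableVault()
attacker = Attacker()
vault.deposit(attacker, 100)
vault.withdraw(attacker)
print(attacker.loot)  # 400: far more than the 100 deposited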

While some may be surprised by this series of events, programmers should not be: Maurice Wilkes, designer of the EDSAC, one of the first computers, reflected that “[…] the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.” The fact that a program is precisely defined does not make it easy to foresee what it will do once executed, on its own, under the control of users. In fact, Rice’s theorem states that it is not possible, in general, to show that the result of programs, and thus smart contracts, will have any specific non-trivial property.

It is within this limitation that modern verification techniques operate: they define subsets of programs for which it is possible to prove some properties (e.g., through typing), or attempt to prove properties in a post-hoc way (e.g., through verification), with the understanding that they may fail in general. There is thus no scientific basis on which to assert, in general, that smart contracts can easily provide clarity into, and foresight of, their consequences.
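
A small example makes the trade-off concrete. The toy analyser below – every name and the contract representation are invented for illustration, and it is in no way a real tool – proves a bound on payouts for a restricted subset of contracts (straight-line arithmetic with non-negative constants), and answers an honest “unknown” for anything richer, which is exactly the behaviour Rice’s theorem forces:

```python
# Sketch of post-hoc verification under Rice's theorem: the analyser
# answers "safe", "unsafe", or honestly "unknown". It proves a property
# only for a restricted subset of programs, and gives up on the rest.
def max_payout(contract):
    """Bound the payout of a tiny contract AST (non-negative constants).

    Supported forms: ("const", n), ("add", a, b), ("mul", a, b).
    Returns an upper bound, or None when the form is not analysable.
    """
    tag = contract[0]
    if tag == "const":
        return contract[1]
    if tag in ("add", "mul"):
        a, b = max_payout(contract[1]), max_payout(contract[2])
        if a is None or b is None:
            return None
        return a + b if tag == "add" else a * b
    return None   # loops, calls, user input: outside the provable subset

def verdict(contract, deposit):
    bound = max_payout(contract)
    if bound is None:
        return "unknown"   # Rice's theorem: we must sometimes give up
    return "safe" if bound <= deposit else "unsafe"

print(verdict(("add", ("const", 3), ("const", 4)), deposit=10))  # safe
print(verdict(("mul", ("const", 5), ("const", 5)), deposit=10))  # unsafe
print(verdict(("recursive_call",), deposit=10))                  # unknown
```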

The unfolding story of the DAO and its consequences for the Ethereum community offers two interesting insights. First, as a sign that the field is maturing, there is an explicit call for understanding the computational space of safe contracts, and contracts with foreseeable consequences. Second, it suggests the need for smart contracts protecting significant assets to include external, possibly social, mechanisms in order to unlock significant value transfers. The willingness of exchanges to suspend trading and of the Ethereum developers to suggest a hard-fork is a last-resort example of such a social mechanism. Thus, politics – the discipline of collective management – reasserts itself as having primacy over human affairs.

Our contributions to the UK Distributed Ledger Technology report

The UK Government Office for Science has published its report on “Distributed ledger technology: beyond block chain”, to which UCL’s Sarah Meiklejohn, Angela Sasse and myself (George Danezis) contributed parts of the security and privacy material. The review looks largely at the economic, innovation and social aspects of these technologies. Our part discusses potential threats to ledgers, as well as opportunities to build robust security systems using ledgers (Certificate Transparency & CONIKS) and to overcome privacy challenges, including a mention of the z.cash technology.
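
For readers unfamiliar with systems like Certificate Transparency, the key idea is an append-only log in which every head commits to the entire history, so a log operator that rewrites the past is caught by anyone holding an earlier head. The hash chain below is a deliberately simplified sketch of that idea (real designs use Merkle trees with efficient inclusion and consistency proofs; the entries here are invented):

```python
# A minimal sketch of the append-only log at the heart of transparency
# systems: each head commits to everything before it, so rewriting
# history changes the heads and is detectable.
import hashlib

def chain(entries):
    """Return the list of heads: head[i] commits to entries[0..i]."""
    heads, prev = [], b"\x00" * 32
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        heads.append(prev)
    return heads

log = ["cert: example.com", "cert: ucl.ac.uk"]
old_head = chain(log)[-1]

log.append("cert: new.example.org")          # honest append
assert chain(log)[1] == old_head             # old head still reachable

log[0] = "cert: evil.example.com"            # attempted rewrite of history
assert chain(log)[1] != old_head             # detected: heads diverge
```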

You can listen to the podcast interview Sarah gave on the report’s use cases and recommendations, as well as, more broadly, future research directions for distributed ledgers, such as better privacy protection.

In terms of recommendations, I personally welcome the call for the Government Digital Service, and other innovation bodies, to build capacity around distributed ledger technologies. The call for more research on efficient and secure ledgers (and the specific mention of cryptography research) is also a good idea, and an obvious need. When it comes to the specific security and privacy recommendation, the report simply calls for standards to be established and followed. Sadly this is rather vague: a standards-based approach to designing secure and privacy-friendly systems has not led to major successes. Instead, openness in the design, a clear focus on key end-to-end security properties, and the involvement of a wide community of experts might be more productive (and less susceptible to subversion).

The report is well timed: our paper on “Centrally Banked Crypto-Currencies” – largely inspired by the research agenda published by the Bank of England – will be presented by Sarah Meiklejohn in February at a leading security conference, NDSS 2016. It provides some answers to the problems of scalability and eco-friendliness that current proof-of-work-based ledger designs face.

Teaching Privacy Enhancing Technologies at UCL

Last term I had the opportunity and pleasure to prepare and teach the first course on Privacy Enhancing Technologies (PETs) at University College London, as part of the MSc in Information Security.

The course covers principally, and in some detail, the engineering aspects of PETs, and caters for an audience of CS/engineering students who already understand the basics of information security and cryptography (although these are not hard prerequisites). Students were also provided with a working understanding of the legal and compliance aspects of data protection regimes by guest lecturer Prof. Eleni Kosta (Tilburg), as well as a world-class introduction to the human aspects of computing and privacy by Prof. Angela Sasse (UCL). This security and cryptographic engineering focus sets this course apart from related courses.

The taught part of the course runs for 20 hours over 10 weeks, split into 10 topics.


On-line lecture: DP5 Private Presence @ 31C3

During the break I attended the 31st Chaos Communication Congress (31C3) in Hamburg, Germany. There I had the pleasure of giving a presentation on “DP5: PIR for Privacy-preserving Presence”, along with my colleague from Waterloo, Ian Goldberg. The Audio/Video Chaos Angels did a nice job of capturing the event and making it available for all to view (I come in at 26:23).
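
For those who did not catch the talk, the heart of DP5 is Private Information Retrieval (PIR). The toy two-server XOR scheme below is a sketch for intuition only – DP5 layers presence registration and lookup on top of a real PIR scheme, and all names here are invented – but it shows the core trick: a client can fetch a database row without either server learning which one.

```python
# Toy two-server information-theoretic PIR. The client sends each server
# the XOR-selection of a random-looking subset of rows; the two query
# vectors differ only at the wanted index, so neither server alone
# learns anything about which row the client wanted.
import secrets

def query(db_size, index):
    """Return the two bit-vectors to send, one to each server."""
    q1 = [secrets.randbelow(2) for _ in range(db_size)]
    q2 = list(q1)
    q2[index] ^= 1                  # the vectors differ only at `index`
    return q1, q2

def answer(db, q):
    """Server side: XOR of the rows selected by the query vector."""
    result = 0
    for row, bit in zip(db, q):
        if bit:
            result ^= row
    return result

db = [0x17, 0x42, 0x99, 0x3c]       # toy presence database, one int per row
q1, q2 = query(len(db), index=2)
recovered = answer(db, q1) ^ answer(db, q2)
assert recovered == db[2]           # XOR of the answers is exactly row 2
```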

Other resources around DP5 include:

  • Technical Report (pdf)
  • Talk Slides (pdf)
  • Event Page (html)
  • Git code repository (git)

Introducing the expanded UCL Information Security Group

It takes quite a bit of institutional commitment and vision to build a strong computer security group. For this reason I am delighted to share here that UCL Computer Science has in 2014 hired three amazing new faculty members into the Information Security group, bringing the total to nine. Here is the line-up of the UCL Information Security group, which also teaches the MSc in Information Security:

  • Prof. M. Angela Sasse is the head of the Information Security Group and a world expert on usable security and privacy. Her research touches upon the intersection of security mechanisms and policies with humans: the mental models they have, the mistakes they make, and the accurate or false perceptions that lead to security systems working or failing.
  • Dr Jens Groth is a cryptographer renowned for his work on novel zero-knowledge proof systems (affectionately known as Groth-Sahai), robust mix systems for anonymous communications and electronic voting, and succinct proofs of knowledge. These are crucial building blocks of modern privacy-friendly authentication and private computation protocols.
  • Dr Nicolas Courtois is a symmetric-key cryptographer known for his pioneering work on algebraic cryptanalysis, and an extraordinary hacker of real-world cryptographic embedded systems; he has recently developed a keen interest in digital distributed currencies such as Bitcoin.
  • Prof. David Pym is both an expert on logic and verification, and also applies methods from economics to understand complex security systems and the decision making in organizations that deploy them. He uses stochastic processes, modeling and utility theory to understand the macro-economics of information security.
  • Dr Emiliano de Cristofaro researches privacy and applied cryptography. He has worked on very fast secure set-intersection protocols, a key ingredient of privacy technologies, and is one of the leading experts on protocols for privacy-friendly genomics.
  • Dr George Danezis (me) researches privacy technologies, anonymous communications, traffic analysis, peer-to-peer security and smart metering security. I have lately developed an interest in applying machine learning techniques to problems in security such as anomaly detection and malware analysis.
  • Dr Steven Murdoch (new!) is a world expert on anonymous communications, through his association with the Tor project, an authority on banking security, and a designer of fielded banking authentication mechanisms. He is a media darling when it comes to explaining the problems of real-world deployed cryptographic systems in banking.
  • Dr Gianluca Stringhini (new!) is a rising star in network security, with a focus on the technical aspects of cyber-crime and cyber-criminal operations. He studies honest and malicious uses of major online services, such as social networks, email services and blogs, and develops techniques to detect and suppress malicious behavior.
  • Dr Sarah Meiklejohn (new!) has an amazing dual expertise in theoretical cryptography on the one hand, and digital currencies and security measurements on the other. She has developed techniques to trace stolen bitcoins, built cryptographic compilers, and contributed to fundamental advances in cryptography such as malleable proof systems.

One key difficulty when building a security group is balancing cohesion, to achieve critical mass, with diversity to cover a broad range of areas and ensuring wide expertise to benefit our students and research. I updated an interactive graph illustrating the structure of collaborations amongst the members of the Information Security Group, as well as their joint collaborators and publication venues. It is clear that all nine faculty members both share enough interest, and are complementary enough, to support each other.

Besides the nine full-time faculty members with a core focus on security, a number of other excellent colleagues at UCL have a track record of contributions in security, supporting teaching and research. Here is just a handful:

  • Prof. Brad Karp is an expert in networking and systems and has made seminal contributions to automatic worm detection and containment.
  • Dr David Clark specializes in software engineering with a core interest in information flow techniques for confidentiality, software security and lately malware.
  • Dr Earl Barr researches software engineering, and has researched security bugs, and malware as well as ideas for simple key management.
  • Prof. Ingemar Cox (part-time at UCL) is a world expert in multimedia security, watermarking and information hiding.
  • Prof. Yvo Desmedt (part-time at UCL) is a renowned cryptographer with key contributions in group key exchange, zero-knowledge and all fields of symmetric and asymmetric cryptography.

The full list of other colleagues working in security, including visiting researchers, post-doctoral researchers and research students, names many more people – making UCL one of the largest research groups in Information Security in Europe.


This post originally appeared on Conspicuous Chatter, the blog of George Danezis.