[Image: Outside the front gate of GCHQ.]

New threat models in the face of British intelligence and the Five Eyes’ new end-to-end encryption interception strategy

As more and more services and messaging applications implement end-to-end encryption, law enforcement organisations and intelligence agencies have become increasingly concerned about the prospect of “going dark”: the situation where law enforcement has the legal right to access a communication (e.g. through a warrant) but lacks the technical capability to do so, because the communication is end-to-end encrypted.

Earlier proposals from politicians took the approach of outright banning end-to-end encryption, which was met with fierce criticism from experts and the tech industry. The intelligence community has been slightly more nuanced, promoting protocols that allow for key escrow, where messages would also be encrypted under an additional key (e.g. one controlled by the government). Such protocols have been promoted by intelligence agencies as recently as 2016 and as early as the 1990s, but were also met with fierce criticism.

More recently, a new set of legislation in the UK, statements from the Five Eyes and proposals from intelligence officials have put forward a “different” way of defeating end-to-end encryption: one that is akin to key escrow but is enabled on a “per-warrant” basis rather than by default. Let’s look at how this may affect threat models in applications that use end-to-end encryption in the future.

Legislation

On the 31st of August 2018, the governments of the United States, the United Kingdom, Canada, Australia and New Zealand (collectively known as the “Five Eyes”) released a “Statement of Principles on Access to Evidence and Encryption”, where they outlined their position on encryption.

The statement says:

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards.

The statement goes on to set out that technology companies have a mutual responsibility with government authorities to enable this process. At the end of the statement, it describes how technology companies should provide government authorities access to private information:

The Governments of the Five Eyes encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services that they create or operate in our countries. Governments should not favor a particular technology; instead, providers may create customized solutions, tailored to their individual system architectures that are capable of meeting lawful access requirements. Such solutions can be a constructive approach to current challenges.

Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.

Their position effectively boils down to requiring technology companies to provide a technical means of fulfilling court warrants that require them to hand over the private data of certain individuals, while leaving the implementation up to each technology company.

In the UK, we already have such legislation: the Investigatory Powers Act provides for a special type of order called a “technical capability notice”, which can legally require a technology company to do just that.

A technical capability notice can be served on a technology company to require it to modify its technology or software to provide the technical capability to bypass end-to-end encryption when a warrant is served against one of its users. In 2018, the UK set out these obligations in the Investigatory Powers (Technical Capability) Regulations 2018:

As part of maintaining a technical capability, the Act specifies that a telecommunications operator may be required to maintain the capability to remove encryption from communications that it has applied or that has been applied on its behalf.

Today, the Australian government also passed its own bill modelled on the Investigatory Powers Act, the Assistance and Access Bill 2018, which allows for similar types of “technical capability notice” with wording similar to the UK’s. There appears to be a coordinated multi-national effort to implement these principles in legislation.

Possible implementations

So how are technology companies expected to implement these technical capability notices? Two weeks before the Five Eyes released their “Statement of Principles on Access to Evidence and Encryption”, Ian Levy, the technical director of the National Cyber Security Centre (a part of GCHQ), participated in a panel on this topic at the Encryption and Surveillance workshop in Santa Barbara.

In an essay that followed from the event, co-written with Crispin Robinson, technical director for cryptanalysis at GCHQ, Levy wrote:

It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.

They are effectively talking about modifying the software of end-to-end encrypted messaging applications so that the application additionally encrypts messages to another party (i.e. the government) without the user’s knowledge. This is still key escrow, even though phrasing it as “silently adding a law enforcement participant to a group chat” doesn’t make it sound like it. While suppressing a notification on the target’s device may sound different from adding explicit code that silently adds an additional party to a conversation, in practice the net result is the same.

Levy argues that it’s still end-to-end encrypted, but this is only true in the most technical sense: your messages are indeed sent end-to-end encrypted, just with a government authority as one of the “ends”. That defeats the purpose of end-to-end encryption in the first place, namely that no party can read your messages except you and the recipient.
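
To make the “extra end” concrete, here is a minimal sketch of what such a client-side change amounts to, written in Python with the PyNaCl library. Every name in it (ESCROW_KEY, send_group_message) is hypothetical and not any real messenger’s code: a group messenger that encrypts a message once per participant only needs one extra entry in its recipient list.

```python
# A minimal sketch of "ghost participant" key escrow using PyNaCl
# sealed boxes. All names (ESCROW_KEY, send_group_message) are
# hypothetical illustrations, not any real messenger's code.
from nacl.public import PrivateKey, SealedBox

# The escrow key an order might compel the vendor to bake in.
ESCROW_KEY = PrivateKey.generate().public_key  # government-controlled

def send_group_message(plaintext: bytes, member_keys: list) -> list:
    """Encrypt the message once per recipient, as a simple group
    messenger might. The only change a "lawful access" build needs
    is one extra entry in the recipient list."""
    recipients = member_keys + [ESCROW_KEY]  # the silent extra "end"
    return [SealedBox(pk).encrypt(plaintext) for pk in recipients]

# Every ciphertext is still end-to-end encrypted; the escrow copy is
# simply never shown in the UI, matching Levy's description.
alice, bob = PrivateKey.generate(), PrivateKey.generate()
ciphertexts = send_group_message(b"hello", [alice.public_key, bob.public_key])
assert len(ciphertexts) == 3  # two visible members, one ghost
```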

In May 2018, Ian Levy gave a speech in Cambridge in which he made the following suggestion:

So what if the WhatsApp “I’ve lost my phone” feature is actually usable by law enforcement? It causes a set of things to happen that are completely transparent to the user that re-encrypt all the messages that have been sent by people who are still online back to the new key. That’s not breaking end-to-end crypto, everything’s still encrypted.

The reality is most of these services control the identity meta-system that you’re working with and so control your view of the world. So if they tell you I’ve got two phones, I’ve got two phones. If they tell you I’ve got three phones, I’ve got three. If one of those is sat in a police force somewhere, how the hell do you know?

Generally, users of end-to-end encrypted messaging applications trust that the server they are connected to is providing the real public keys of the people they are communicating with, and not fake public keys controlled by an adversary conducting a man-in-the-middle attack. Users of WhatsApp and Signal can verify this by comparing their public key fingerprints (“security codes” in WhatsApp, “safety numbers” in Signal), so if the server misbehaves or there is a man-in-the-middle attack, it can be detected, provided the users perform this manual out-of-band process. Apple’s iMessage, however, doesn’t support key verification.
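
Out-of-band verification can be as simple as both parties deriving a short code from the public keys each believes is in use and comparing the result over another channel. The sketch below is purely illustrative and is not the actual fingerprint construction used by Signal or WhatsApp:

```python
# Illustrative fingerprint comparison (not Signal's or WhatsApp's
# actual construction): both users derive a short code from the pair
# of public keys they believe are in use, then compare it in person
# or over another trusted channel.
import hashlib

def safety_code(my_pubkey: bytes, their_pubkey: bytes) -> str:
    # Sort so both sides derive the same code regardless of order.
    material = b"".join(sorted([my_pubkey, their_pubkey]))
    digest = hashlib.sha256(material).hexdigest()
    # Group into short chunks that are easier to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

# If the server substituted a man-in-the-middle key, the two users
# would derive different codes, revealing the attack, but only if
# they actually perform the comparison.
```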

Note that even though WhatsApp supports key verification, if a recipient goes offline and returns online with a different key, any undelivered messages to that recipient will be re-encrypted to the new key before the sender has verified it. This is a client-side design tradeoff that WhatsApp has made.
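
A rough sketch of that tradeoff, with a hypothetical policy flag and callbacks: a client that blocks on key changes is safer but delays queued messages, while a client that silently re-encrypts (as WhatsApp does for undelivered messages) exposes them to whoever controls the new key.

```python
# A sketch of the client-side tradeoff: what should happen to queued,
# undelivered messages when the recipient's key changes? The policy
# flag and callbacks here are hypothetical.
BLOCK_UNTIL_VERIFIED = False  # WhatsApp-style default: don't block

def on_recipient_key_change(queued_messages, new_key, notify, reencrypt):
    if BLOCK_UNTIL_VERIFIED:
        # Safer: hold messages until the sender verifies the new key.
        notify("Recipient's key changed; verify it before re-sending.")
    else:
        # Seamless, but hands queued messages to whoever controls the
        # new key, which is exactly the tradeoff described above.
        for message in queued_messages:
            reencrypt(message, new_key)
```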

Additionally, to support multiple devices in iMessage, Apple sends your recipients a different public key for each of your devices, which is exactly what would allow Apple to add a “ghost” device, just as Levy described. This wouldn’t be possible in WhatsApp or Signal: even though they support multiple devices, there is one public key per user rather than per device, and each new device must effectively be authorised by the device that controls the user’s public key, typically by scanning a QR code.
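
The sketch below illustrates why per-device public keys make ghost devices so easy to inject. The names and the server structure are hypothetical, not Apple’s actual protocol: the point is simply that the sender encrypts to whatever device list the identity server returns.

```python
# Why per-device keys make "ghost devices" easy to inject: the sender
# encrypts to whatever device list the identity server returns. The
# names and server structure are hypothetical, not Apple's protocol.
from nacl.public import PrivateKey, SealedBox

class IdentityServer:
    def __init__(self):
        self.device_keys = {}  # user id -> list of device public keys

def send_to_user(server, user_id, plaintext: bytes):
    # The sender's client simply trusts the advertised device list.
    return [SealedBox(pk).encrypt(plaintext)
            for pk in server.device_keys[user_id]]

ghost = PrivateKey.generate()      # held by law enforcement
bob_phone = PrivateKey.generate()
server = IdentityServer()
server.device_keys["bob"] = [bob_phone.public_key, ghost.public_key]

# The sender gets no user-visible signal that the list has grown.
ciphertexts = send_to_user(server, "bob", b"hi bob")
assert len(ciphertexts) == 2  # one copy per listed "device"
```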

Based on these statements, it would appear that there are two broad categories of implementation for technical capability notices that bypass end-to-end encryption:

Option 1: corrupting the server

The first option would be to modify how the server behaves so that, for example, it provides government-controlled public keys to users instead of the real ones.

Examples:

  • The server simply sends the user a government-controlled public key for the recipients they communicate with.
  • The server adds “ghost devices” controlled by the government to a user’s identity.
  • The server exploits client-side design tradeoffs (e.g. WhatsApp’s) whereby, if a recipient’s key is updated, all undelivered messages to that recipient are re-encrypted to the new key before the sender can verify it. The scope of such an attack would be limited to undelivered messages, so its usefulness to law enforcement is questionable.

Mitigations

This is already mitigated to a degree by applications such as WhatsApp and Signal (but not iMessage), which provide a means for users to manually verify that they have the correct public keys for the people they are communicating with. Even though the vast majority of users don’t do this, it is arguable that even a small chance of ongoing interception being detected is too great a risk for law enforcement.

However, there are initiatives to use verifiable logs (sometimes loosely described as a “blockchain”), such as Google’s Key Transparency project, to make all the keys that the server claims belong to users fully transparent to the public. If the server misbehaves, the user the server is trying to impersonate would detect it immediately (without needing an out-of-band process with the recipient), because they would see that a key they didn’t authorise has been added to their identity. This would also make exploiting WhatsApp’s design tradeoff detectable.
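
As a toy illustration of the idea (far simpler than Google’s actual Key Transparency design), consider an append-only Merkle log of user-to-key bindings, where the server publishes the root and each user audits the entries for their own identity:

```python
# A toy verifiable log of user-to-key bindings, much simpler than
# Google's actual Key Transparency design. Standard library only.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    nodes = [h(b"\x00" + leaf) for leaf in leaves]  # domain-separated
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])  # duplicate last node on odd levels
        nodes = [h(b"\x01" + nodes[i] + nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

# The server publishes the root; each user (or an auditor acting for
# them) watches the log for unauthorised additions to their identity.
log = [b"alice:pk_alice", b"bob:pk_bob"]
published_root = merkle_root(log)

# A server that sneaks in a key Alice never authorised cannot keep a
# consistent view: any honest copy of the log yields a different root.
assert merkle_root(log + [b"alice:pk_ghost"]) != published_root
```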

Option 2: corrupting the client

If corrupting just the server isn’t an option, then a technical capability notice could require a change to the end-user software so that end-to-end encryption is bypassed.

Examples:

  • Additionally encrypting messages to an extra government-controlled escrow key.
  • Silently adding the government as a participant to a group chat, while suppressing the notification.

Mitigations

Once an adversary can modify the software that you are using, they can do basically anything. This breaks the threat model of end-to-end encryption, which assumes you can trust the code running on your machine. There’s not much you can practically do to eliminate that assumption, short of having every user audit every line of the code themselves to make sure there are no spooky skeletons, and compile it themselves.

However, with any client-side mechanism that allows a government authority to bypass encryption, it would always be theoretically possible to detect that the mechanism exists. For example, suppose that a government authority compelled a messaging application developer to add a special authenticated message, received from the server, that causes the client to suppress notifications when a certain party is added to a group chat, as Levy suggests. This is not exactly a “backdoor” but more like a “frontdoor”, because it is an authenticated mechanism rather than a security bug.
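
As a purely hypothetical sketch of what such a frontdoor might look like in code (no real messenger is known to contain this), note how explicit and auditable the code path is. Whoever holds the MAC key can trigger it, so it is not a bug anyone can exploit, but an auditor reading the source can find it simply by searching for the branch:

```python
# Hypothetical sketch of the "frontdoor" Levy describes: an
# authenticated control message that tells the client to hide a
# participant. This is not a real protocol; the point is that the
# code path is explicit and auditable.
import hashlib
import hmac
import json

SERVER_MAC_KEY = b"provisioned-at-install-time"  # hypothetical

def handle_control_message(payload: bytes, tag: bytes, chat):
    # Authenticated: only the holder of the key can trigger it, so
    # it is a "frontdoor" rather than a bug anyone can exploit...
    expected = hmac.new(SERVER_MAC_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad control message")
    message = json.loads(payload)
    if message.get("op") == "add_hidden_member":
        # ...but grep for this branch and the frontdoor is exposed.
        chat.members.append(message["pubkey"])
        chat.suppress_member_notifications = True
```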

If the software is open source, such a “frontdoor” would be trivial to spot, unlike an obscure security vulnerability. Note that the legislative and political approach of the Five Eyes described above is to require companies to provide technical solutions for “lawful access”, rather than carte blanche backdoors or deliberately introduced bugs that anyone can exploit. If the code is closed source, detection would still be theoretically possible, though more difficult, through binary or network analysis. Either way, if a company widely distributes a version of the software with such a frontdoor, law enforcement authorities run the risk of it being caught. Additionally, if the mechanism is known, it would be possible to create tools that alert you when such a frontdoor has been enabled on your device.

Note, however, that under the Investigatory Powers (Technical Capability) Regulations 2018, the recipient of a technical capability notice is legally required to minimise the risk that the target of a warrant finds out about it:

12.—(1) To comply with the other obligations imposed by a technical capability notice in such a manner that the risk of any unauthorised persons becoming aware of any matter within section 57(4) of the Act is minimised, in particular by ensuring that apparatus, systems or other facilities or services, as well as procedures and policies, are developed and maintained in accordance with security standards or guidance specified in the notice.

To reduce that risk, law enforcement authorities may target the distribution mechanism of the software, so that only their intercept targets are served a bad version of the software rather than everyone, reducing the chance of being caught. Under the Investigatory Powers Act, technical capability orders can only be served on the company providing the communication service, so this may not work with, for example, WhatsApp and Google Play, but it could work with iMessage and the App Store, because Apple controls both.

However, what you can do is make it so that if an adversary ever tried to serve you a different version of the software from everyone else, for example a version with a front/backdoor, they would be caught. This is what “binary transparency” attempts to achieve: by logging the hash of every published software binary in a publicly auditable verifiable log based on a Merkle tree, it becomes possible to audit all the binaries an update server has ever served, including front/backdoored ones. Users would only run binaries that have been logged, so if they were served a special version of a binary, everyone else would be able to see it in the log.
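
A toy sketch of the proactive check: before executing an update, the client verifies a Merkle inclusion proof that the binary’s hash appears under the log’s published root, and refuses to run anything unlogged. The proof format here is simplified for illustration:

```python
# A toy sketch of proactive binary transparency: verify a Merkle
# inclusion proof against the log's published root before executing
# an update. The proof format is simplified for illustration.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(binary: bytes, proof, published_root: bytes) -> bool:
    """proof is a list of (sibling_hash, side) audit-path steps,
    where side is "L" or "R" for the sibling's position."""
    node = h(b"\x00" + h(binary))  # leaf commits to the binary's hash
    for sibling, side in proof:
        pair = sibling + node if side == "L" else node + sibling
        node = h(b"\x01" + pair)
    return node == published_root

def run_update(binary: bytes, proof, published_root: bytes):
    if not verify_inclusion(binary, proof, published_root):
        raise RuntimeError("binary not publicly logged; refusing to run")
    # execute(binary)  # only logged binaries ever run, so a targeted
    # frontdoored build would be visible to everyone in the log
```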

This would also help in cases like the FBI-Apple encryption dispute, where the FBI wanted Apple to sign a backdoored version of its operating system. With binary transparency, the FBI wouldn’t be able to do this secretly, and the hash of the backdoored binary would be known to all.

This is not to be confused with reproducible builds, which solve the different problem of ensuring that a binary matches specific source code. Reproducible builds would still be important here to make sure that the source code behind a binary is available if the software is open source, but without binary transparency it would still be possible to serve different users different versions of the code.
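
The check that reproducible builds enable is then very simple: anyone can rebuild the binary from the published source and compare digests with what the update server shipped. The file paths below are placeholders:

```python
# The check that reproducible builds enable: rebuild from the
# published source and compare digests with the shipped binary.
# The file paths are placeholders.
import hashlib

def sha256_file(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# "shipped.bin" came from the update server; "rebuilt.bin" was
# compiled locally from the corresponding source release.
if sha256_file("shipped.bin") != sha256_file("rebuilt.bin"):
    print("shipped binary does not match the published source")
```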

Binary transparency is an inherently different problem from, for example, key or certificate transparency, because privileged malicious binaries can disable the client-side “gossiping” mechanisms that make misbehaviour detectable. Instead, “proactive” transparency is needed, meaning the client must be sure that a binary is publicly logged before running it. Two proposals for proactive binary transparency mechanisms are ours at UCL and EPFL’s. There is also Mozilla’s proposal, but it is based on Certificate Transparency and thus doesn’t provide proactive transparency. Bryan Ford has a good blog post explaining the challenges of binary transparency.

There is also the possibility of using a warrant canary to indirectly let users know whether the company has ever been served with a technical capability notice. That alone may be enough to discourage law enforcement authorities from pursuing such orders, because they would run the risk of their interception targets switching to a different platform. However, companies themselves may be too reluctant to admit that they have received such orders, because it would cause their users to lose confidence. Binary transparency, on the other hand, would be implemented by those who control the software distribution mechanism (e.g. Google Play or Apple’s App Store), rather than necessarily by the companies providing the end-to-end encrypted applications. This would force law enforcement authorities to accept the risk that their operation is detected even if the company being targeted would prefer to keep it secret.
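
A warrant canary check could look something like the following sketch, which assumes a hypothetical canary format: a dated statement signed with the company’s long-term key, refreshed regularly. A canary that stops being refreshed, or that fails verification, is itself the signal:

```python
# Sketch of checking a hypothetical warrant canary: a dated statement
# signed with the company's long-term Ed25519 key (via PyNaCl). The
# statement format is an assumption made for illustration.
from datetime import date, timedelta
from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

def canary_ok(signed_canary: bytes, verify_key: VerifyKey) -> bool:
    try:
        # Expected payload, e.g. b"2018-12-06: no technical capability
        # notices received to date".
        payload = verify_key.verify(signed_canary)
    except BadSignatureError:
        return False
    stated = date.fromisoformat(payload.decode().split(":", 1)[0])
    # A canary that stops being refreshed is itself the signal.
    return date.today() - stated < timedelta(days=30)
```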

Conclusion

We may need to adjust the threat model of software that provides end-to-end encryption to provide greater assurances in two scenarios: where there is a global active adversary willing to manipulate traffic server-side (most applications already address this by allowing key verification, but key transparency will become more important), and where there is a legal authority that can compel companies to modify their software. In the latter scenario, technologies such as reproducible builds and binary transparency, as well as making source code open and auditable for explicit “frontdoors”, will help give users assurance that if a company is indeed compelled by a legal authority, it will at least be known to the world. This may discourage such authorities from pursuing these tactics, as they would run the risk of interception targets losing confidence in the platform they are using to communicate.

 

Thanks to Patrick Gray, John Cowan, Lauri Love, Richard Tynan and Alexander Hicks for feedback and comments on this post.

Comments

  1. I don’t quite understand why the discussion involves only one government, as if we had a world government that set the rules for everyone and was accepted by everyone. The discussion becomes interesting when you consider that many governments may claim jurisdiction over the same manufacturer at the same time.
    Lawful interception exhibits all the key points of multilateral security at its best.

    And: isn’t this subversion of the product’s security exactly what the US claims Huawei has to do for the Chinese government, and which Huawei is going to great expense to refute?

  2. The thing that screams for comment is this quoted sentence:

    “It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards.”

    This sounds like a superficially reasonable summary of the rule of law applied to surveillance. But it is subtly ambiguous and in fact amounts to a remarkable claim. It does NOT imply case-by-case authorisation by an independent authority. It can be read to say that it is fine and accepted for the authorities to interfere with privacy in any case they wish, provided this is *the sort of thing* they have been generally authorised to do.
