Incentives in Security Protocols

The 2018 edition of the International Security Protocols Workshop took place last week. The theme this year was “fail-safe and fail-deadly concepts in protocol design”.

One common theme at this year’s workshop was that of threat models and incentives, covered by the majority of accepted papers. One of these was our (Sarah Azouvi, Alexander Hicks and Steven Murdoch) submission – Incentives in Security Protocols. The aim of the paper is to discuss how incentives can be considered and incorporated into the security of systems. In line with the given theme, the focus is on fail-safe and fail-deadly cases, which we examine for the EMV protocol, consensus in cryptocurrencies, and non-economic systems such as Tor. This post summarises the main ideas laid out in the paper.

Fail safe, fail deadly and people

Systems can fail, and system designers must give some thought to accounting for these failures. From this setting comes the idea of fail-safe protocols, designed so that even if the protocol fails, the failure can be dealt with or the protocol can be aborted to limit damage. The idea of a fail-deadly setting is an extension of this, where failure is defended against through deterrence, as in the case of nuclear deterrence (sometimes a realistic case).

Human input often plays a role in the use of a system, particularly when decisions are required, as in fail-safe and fail-deadly instances. These decisions are made according to incentives, which can be aligned to make the system robust to failure. For a fail-deadly alignment, this means that a person in a position to prevent system failure will be harmed by the failure. In the fail-safe case, innocent parties should be protected from the consequences of system failure. The two concepts are really two sides of the same coin: assigning liability.

It is often said that people are the weakest link in security, but that is an easy excuse for broken protocols. If security incentives are aligned properly, then humans are the strongest link.

The EMV protocol, adding incentives after the fact

As a first example, we consider the EMV protocol, which is used for the majority of smart card payments worldwide, as well as smartphone and card-based contactless payments. Over the years, many vulnerabilities have been identified and removed. Fraud still exists, however, due not to unexpected protocol vulnerabilities but to decisions made by banks (e.g., omitting the ability for cards to produce digital signatures), merchants (e.g., omitting PIN verification) and payment networks (e.g., not sending transaction details back to banks). These are intentional choices, aiming to save costs and cut transaction times, but they make fraud harder to detect.

The payment industry has tried to deal with this by introducing economic incentives to use more secure methods, reducing fees for transactions that use them and assigning the liability for fraud to the party that causes the security level to be reduced. This provides a fail-safe overlay on top of a protocol optimised for compatibility rather than security. Over time, parties will be encouraged to either adopt more secure methods or mitigate fraud risks through other means, such as machine-learning based risk analysis. Nonetheless, it is clear that the EMV protocol was not designed with the understanding that incentives would later be added and play a central role in the security of the system.

This omission becomes clear during disputes where there is disagreement as to who should be liable. This is because communication between participants is designed to establish whether the transaction should proceed, rather than which party makes which decision. Policies on how participants should act are not part of the EMV specifications, and even assuming that participants are honest, it can be hard to reverse engineer decisions from the limited details available.

The issue stems from the fact that the incentives put in place are entirely separate from the protocol. Given that the incentives are based on encouraging the use of more secure methods and assigning liability, a protocol taking them into account should produce evidence of the final system state, and how it was arrived at. Of course, the evidence-producing mechanism should be resilient to dishonest participants.
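To illustrate what such evidence production could look like, here is a minimal Python sketch, assuming a toy hash-chained log with an HMAC standing in for real digital signatures; the parties, keys, field names and decision vocabulary are hypothetical and not part of the EMV specifications:

    import hashlib
    import hmac
    import json

    # Toy sketch: each party appends a signed record of its decision, so that
    # a dispute can later reconstruct who decided what. Keys, field names and
    # the decision vocabulary are illustrative, not taken from EMV.

    def sign(key: bytes, message: bytes) -> str:
        """Stand-in for a real digital signature (HMAC used for brevity)."""
        return hmac.new(key, message, hashlib.sha256).hexdigest()

    def record_decision(log: list, party: str, decision: str, key: bytes) -> None:
        """Append a decision bound to the hash of the log so far (tamper evidence)."""
        prev = hashlib.sha256(json.dumps(log).encode()).hexdigest()
        entry = {"party": party, "decision": decision, "prev": prev}
        entry["sig"] = sign(key, json.dumps(entry, sort_keys=True).encode())
        log.append(entry)

    # A merchant skipping PIN verification leaves an explicit, attributable
    # record that a dispute process could later point to.
    log = []
    record_decision(log, "card", "offer_pin_verification", b"card-key")
    record_decision(log, "merchant", "skip_pin_verification", b"merchant-key")
    record_decision(log, "issuer", "approve_transaction", b"issuer-key")
    print(json.dumps(log, indent=2))

Hash-chaining each entry to the ones before it means a party cannot later drop or reorder records without the discrepancy becoming visible, which is one way such a mechanism could be made resilient to dishonest participants.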

Only a small proportion of the protocol exchange currently has end-to-end security but, as payment communication flows are only between participants with a written contract (for mostly historical reasons), the deficiency is somewhat mitigated. Still, could the system benefit from a formalisation that would indicate whether the evidence produced by the system is sufficient to properly allocate incentives?

Consensus in cryptocurrencies, lacking an integral view of incentives

Unlike many systems, cryptocurrencies were designed with some specific incentives in mind as part of the system’s security. Miners are incentivised to be honest by the mining rewards and transaction fees defined by the protocol.

This has had some success, but mining pools, selfish mining and other attacks have since been described and observed. This clearly suggests that the Nakamoto consensus protocol (and its variants) may not be incentive-compatible, as it fails to capture all possible behaviours. Sadly, it is still rare to see papers focusing on incentives, and those that do tend to consider them separately from the security protocols they describe.
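To give a flavour of why incentive compatibility fails here, the following is a minimal Monte Carlo sketch of (a simplification of) the selfish mining strategy described by Eyal and Sirer. The state handling is deliberately simplified; `alpha` (the attacker’s share of hash power) and `gamma` (the fraction of honest miners that build on the attacker’s block during a tie) follow their notation, and the parameter values are illustrative:

    import random

    def selfish_revenue_share(alpha: float, gamma: float,
                              rounds: int = 1_000_000, seed: int = 0) -> float:
        """Fraction of main-chain blocks won by a selfish miner (simplified model)."""
        rng = random.Random(seed)
        lead, tie = 0, False      # private-chain lead; whether a fork race is ongoing
        pool, honest = 0, 0       # main-chain blocks credited to each side
        for _ in range(rounds):
            if rng.random() < alpha:            # the selfish pool finds a block
                if tie:
                    pool += 2                   # extends its fork and wins the race
                    tie = False
                else:
                    lead += 1                   # withholds the new block
            else:                               # the honest network finds a block
                if tie:
                    if rng.random() < gamma:    # honest miners built on the pool's block
                        pool += 1
                        honest += 1
                    else:                       # honest miners built on their own block
                        honest += 2
                    tie = False
                elif lead == 0:
                    honest += 1                 # nothing withheld; honest chain grows
                elif lead == 1:
                    lead, tie = 0, True         # pool publishes; equal-length fork race
                elif lead == 2:
                    pool += 2                   # pool publishes both blocks, wins outright
                    lead = 0
                else:
                    pool += 1                   # pool releases one block, stays ahead
                    lead -= 1
        return pool / (pool + honest)

    for alpha in (0.2, 0.25, 0.3, 0.35):
        print(alpha, round(selfish_revenue_share(alpha, gamma=0.5), 3))

Honest mining would earn a revenue share of exactly `alpha`; in this simulation the selfish strategy starts to beat it once `alpha` exceeds roughly 0.25 (the published threshold for `gamma` = 0.5), which is precisely the kind of profitable deviation the protocol’s incentive analysis failed to capture.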

Another important aspect of blockchains is the forking mechanism. Again, incentives play a role in making these fail-safe and fail-deadly instances of the protocol. Soft forks, which are backwards compatible and allow users to choose which version of the software to run without splitting the network, are fail-safe: even if there is disagreement amongst the network, a split does not have to happen. On the other hand, some changes may not be backwards compatible and result in hard forks, where the network splits. This was the case when, following the DAO hack, part of the network had clear incentives to reverse the hack, leading to a split into Ethereum and Ethereum Classic. The two are now valued very differently (see ETH and ETC), showing that miners risk losing valuable rewards if their fork is not supported, which makes hard forks a fail-deadly instance.

Protocols can also be added on top of existing cryptocurrencies, as in the case of the Lightning off-chain payment channel system. The goal here is to allow parties to transact off-chain, with only an initial deposit and final balance appearing on-chain. The security of the scheme relies on cryptography, but it is partially based on incentives: parties are discouraged from cheating because the honest party can submit evidence to the network to receive the deposit of the cheating party. Evidence then appears as the mechanism that connects incentives to the protocol, as suggested in the EMV section.
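The incentive structure can be captured in a few lines. Below is a toy Python model of the penalty idea, not the actual Lightning protocol: channel states are versioned, old states are revoked, and broadcasting a revoked state forfeits everything to the counterparty. All names and the settlement rule are illustrative:

    from dataclasses import dataclass

    # Toy model of the penalty idea behind Lightning-style channels, not the
    # actual protocol messages: states are versioned, old states are revoked,
    # and broadcasting a revoked state forfeits everything to the counterparty.

    @dataclass
    class ChannelState:
        version: int
        balance_a: int
        balance_b: int

    def settle(broadcast: ChannelState, revoked: set, broadcaster: str) -> dict:
        """On-chain settlement: a revoked state forfeits the broadcaster's funds."""
        total = broadcast.balance_a + broadcast.balance_b
        if broadcast.version in revoked:
            # The counterparty submits revocation evidence and takes the whole
            # deposit: this is the fail-deadly incentive not to cheat.
            return {"a": 0, "b": total} if broadcaster == "a" else {"a": total, "b": 0}
        return {"a": broadcast.balance_a, "b": broadcast.balance_b}

    # Party A tries to close with an old state in which it had more money:
    old = ChannelState(version=3, balance_a=8, balance_b=2)
    print(settle(old, revoked={1, 2, 3, 4, 5, 6}, broadcaster="a"))  # {'a': 0, 'b': 10}

Cheating is thus strictly worse than cooperating whenever the counterparty is watching the chain, which is the fail-deadly deterrent at work.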

Incentives for non-economic systems like Tor

The above examples illustrate systems involving transactions with clear financial values, and economic incentives. What about systems which do not involve transactions or assets?

The anonymity system Tor is a good example of a network whose security could be increased with suitable incentives. There may be incentives for many (perhaps not all) users to connect to the network, but there is much less incentive to run a Tor node. Despite this, there are still a few million users and a few thousand servers.

The lack of incentives (excluding transacting on onion websites) clearly does not prevent the existence of Tor, but motivating participation would strengthen the network. Although the economics of anonymity have been studied and proposals have been made to reward hosting servers, these have not been implemented. This suggests that it is not just an issue of economics.

In fact, adding incentives to a system may just produce unexpected results.

The Israeli nursery study, which looked at the effect of introducing fines for late parents, found that the fines only reinforced the behaviour they were meant to punish. The change also did not revert once the fines were removed, so adding incentives without first evaluating their effects in practice may not be advisable. However, it may not always be possible to properly simulate these effects, leading to uncertainty about their potential impact.

How to better capture incentives as part of security properties

Incentives and how we handle them can be split into three parts: incentive types, enabling mechanisms and models to reason about incentives.

Incentive types can be divided into economic and non-economic, external and internal, explicit and implicit, and rewards and punishments. Clearly, economic incentives are easier to define, but non-economic incentives may also sometimes be required. In order to evaluate an incentive, it should also be explicit: implicit incentives are more likely to be exploited, as in attacks on mining. Internal incentives may also be easier to abuse, but they do not have to rely on being externally enforced. Rewards and punishments can also be considered, to incentivise honest behaviour or disincentivise dishonest behaviour, depending on the context.

For incentives to work, they must be reliable, so that users can trust the pay-offs. Evidence is a recurring mechanism that helps ensure this in the examples we considered. However, evidence must be unambiguous, tamper resistant and interpretable. Consensus protocols (i.e., the network) and trusted third parties (e.g., the justice system) are other examples of enabling mechanisms. It would be interesting to know whether other mechanisms could be used.

Then there are models, without which we cannot reason about incentives properly. The main challenge is obtaining a framework where standard notions of security can be discussed alongside incentives, on an equal footing. One problem is the mismatch between security and Game Theory, the standard tool for evaluating behaviour in the face of incentives. Distributed Systems, for example, tend to consider good and arbitrarily bad participants, in the context of properties like Byzantine fault tolerance. Game Theory, on the other hand, is concerned with rational players and does not account for machines failing for arbitrary (or external) reasons, or for players making unintentional mistakes or acting irrationally.

Nash equilibria are a good example of game-theoretic concepts that often appear in security papers even though they may not be well suited. They exist for finite-action games involving a finite set of players, whereas blockchain consensus protocols involve a theoretically unlimited set of actions (for example, a miner may decide to publish a block they mined at any moment in time). A Nash equilibrium also only considers a single participant deviating, which is clearly unrealistic.
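To make the mismatch concrete, here is a minimal sketch that enumerates pure-strategy Nash equilibria of a finite two-player game; the payoff matrices are an illustrative prisoner’s dilemma, not a model of mining. Note that the check only ever varies one player’s action at a time, which is exactly the limitation described above:

    import itertools

    def pure_nash(payoffs_a, payoffs_b):
        """Enumerate pure-strategy Nash equilibria of a two-player bimatrix game."""
        rows, cols = len(payoffs_a), len(payoffs_a[0])
        equilibria = []
        for i, j in itertools.product(range(rows), range(cols)):
            # A Nash equilibrium only rules out *unilateral* deviations: each
            # check below varies one player's action while fixing the other's.
            a_best = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in range(rows))
            b_best = all(payoffs_b[i][j] >= payoffs_b[i][k] for k in range(cols))
            if a_best and b_best:
                equilibria.append((i, j))
        return equilibria

    # Illustrative prisoner's dilemma (actions: 0 = cooperate, 1 = defect).
    A = [[3, 0], [5, 1]]    # row player's payoffs
    B = [[3, 5], [0, 1]]    # column player's payoffs
    print(pure_nash(A, B))  # -> [(1, 1)]: mutual defection

Nothing in this notion says anything about coalitions of deviating players, or about what happens to honest players when others deviate, which is what the extensions below try to address.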

There are already existing extensions that come closer to modelling what we want. For example, (k,t)-robust equilibria, which tolerate k participants deviating and ensure that players who do not deviate are not worse off for up to t participants deviating, come closer to modelling fail-safe cases. For fail-deadly cases, there are (k,t)-punishment strategies, where the threat of t participants enforcing a punishment stops a coalition of k participants from deviating. Other work in the direction of bridging Distributed Systems and Game Theory can be found in the BAR model, whilst Rational Cryptography looks at combining Game Theory with Cryptography.
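For reference, the flavour of these definitions, following Abraham et al. in one common variant with notation simplified here (a strategy profile $\sigma$ over players $N$, with $u_i$ denoting player $i$'s utility), can be written as:

    % k-resilience: no coalition C of up to k players can deviate in a way
    % that strictly benefits every member of the coalition.
    \forall C \subseteq N,\ |C| \le k,\ \forall \sigma'_C :\quad
        \exists i \in C \ \text{with}\ u_i(\sigma'_C, \sigma_{-C}) \le u_i(\sigma)

    % t-immunity: players outside a deviating set T of up to t players are
    % no worse off, however T deviates.
    \forall T \subseteq N,\ |T| \le t,\ \forall \sigma'_T,\ \forall i \notin T :\quad
        u_i(\sigma'_T, \sigma_{-T}) \ge u_i(\sigma)

    % (k,t)-robustness: both properties hold simultaneously.

Read this way, t-immunity is the fail-safe half (honest players survive others’ failures) and the punishment strategies are the fail-deadly half (deviation is deterred by credible retaliation).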

The above ideas were presented at the workshop before being debated with the other attendees. Proceedings, including transcripts of the discussion, will be published at a later date.

2 thoughts on “Incentives in Security Protocols”

  1. Thanks for this. Very useful.
    Didn’t know about BAR models, and was always wondering about how selfish mining fit into the BFT model.

Surprised that there are no blockchain systems that adopt BAR. Or are there?

    1. Hi Anh,

Thanks for your comment. As far as we are aware there are only a couple of blockchain papers that consider the BAR model (e.g., “SmartCast: An incentive compatible consensus protocol using smart contracts” and “Solidus: An Incentive-compatible Cryptocurrency Based on Permissionless Byzantine Consensus”).
