(Information) security can, quite defensibly, be defined as the process by which it is ensured that just the right agents have just the right access to just the right (information) resources at just the right time. Of course, one can refine this rather pithy definition somewhat, and apply tailored versions of it to one's favourite applications and scenarios.
A convenient taxonomy for information security is determined by the concepts of confidentiality, integrity, and availability, or CIA; informally:
- confidentiality: the property that just the right agents have access to specified information or systems;
- integrity: the property that specified information or systems are as they should be;
- availability: the property that specified information or systems can be accessed or used when required.
Alternatives to confidentiality, integrity, and availability are sensitivity and criticality, in which sensitivity amounts to confidentiality together with some aspects of integrity and criticality amounts to availability together with some aspects of integrity.
But the key point about these categories of phenomena is that they are declarative; that is, they provide a statement of what is required. For example, that all documents marked ‘company private’ be accessible only to the company’s employees (confidentiality), or that all passengers on the aircraft be free of weapons (integrity), or that the company’s servers be up and running 99.99% of the time (availability).
It’s all very well stating, declaratively, one’s security objectives, but how are they to be achieved? Declarative concepts should not be confused with operational concepts; that is, ones that describe how something is done. For example, passwords and encryption are used to ensure that documents remain confidential, or security searches ensure that passengers do not carry weapons onto an aircraft, or RAID servers are employed to ensure adequate system availability. So, along with each declarative aim there is a collection of operational tools that can be used to achieve it.
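The split between declarative aims and operational tools can be made concrete in code. The following sketch is purely illustrative (the employee list, passwords, and function names are all invented for this example): the declarative property is a predicate over the system's history, while the operational mechanism is a procedure whose job is to keep that predicate true.

```python
# Illustrative sketch: declarative property vs. operational mechanism.
# All names and data here are hypothetical.

EMPLOYEES = {"alice", "bob"}
PASSWORDS = {"alice": "hunter2", "bob": "swordfish"}  # toy secrets

def confidentiality_holds(access_log):
    """Declarative: every access to a 'company-private' document
    was made by an employee."""
    return all(user in EMPLOYEES
               for (user, doc) in access_log
               if doc.startswith("company-private"))

def read_document(user, password, doc, access_log):
    """Operational: password authentication is the tool used to
    enforce the declarative property above."""
    if PASSWORDS.get(user) == password:
        access_log.append((user, doc))
        return f"contents of {doc}"
    return None  # authentication failed; no access recorded

log = []
read_document("alice", "hunter2", "company-private/plan.txt", log)
print(confidentiality_holds(log))  # True: only an employee accessed it
```

Note that the declarative predicate says nothing about passwords: one could swap the password check for ID cards without changing the property being enforced.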
There are many books and papers in the security literature that argue that CIA is an inadequate set of concepts. For one example, they might argue that one also needs ‘authentication’. But this, of course, is a category error – it assigns to something a quality or action that can only properly be assigned to things of another category. Authentication is an operational tool that, essentially, is used to help deliver confidentiality. For another example, sometimes it is argued that non-repudiation should also be added. That’s a bit more subtle, but, again, it’s a category error. First, what is really meant, I think, is ‘non-repudiability’, which is an example of an integrity property: ensuring that, for example, credentials remain valid once they have been issued until such time as they are properly withdrawn: your access card should remain valid until such time as it expires or you report it lost or stolen. Then ‘non-repudiation’ is an operational (e.g., policy) tool to ensure that expiration dates are enforced and losses are reported, and so on; that is, that policies, or contracts, are enforced.
Of course, one might argue that the busy security professional shouldn’t need to be concerned with these possibly arcane philosophical distinctions. Maybe so, but many security professionals would agree that, as things stand, security management remains too much of a craft skill and not yet enough of an engineering science. How can we nudge things in the right direction?
Here at UCL we do a lot of work in developing mathematical tools for systems and security modelling. To borrow from Mark Watney, we ‘science the s**t out of it’.
At the heart of the way we go about things is the distinction between declarative and operational concepts, and the way we do that starts from logic. The two key concepts in logic are truth and proof. Truth is a declarative concept: a proposition is true just in case the situation that it describes holds in the world it is supposed to be about. For example, as I write this piece, the proposition ‘it is raining today’ is true here in central London. Proof, on the other hand, provides a way of constructing an argument that establishes that something is true, starting from ‘axioms’ and proceeding by step-by-step logical inferences to establish (the truth of) the desired conclusion. It is an operational tool for establishing a declarative concept. [Note: I have secretly talked about what is called ‘intuitionistic logic’ here, but that is quite natural when working with ideas about information and computation.]
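The truth/proof distinction can be seen very directly in a proof assistant, where an (intuitionistic) proof is literally a construction. A minimal illustration in Lean (a generic example, not drawn from the post):

```lean
-- In intuitionistic logic, a proof of A ∧ B → B ∧ A is itself an
-- operational object: a function that rearranges the evidence.
example (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.2, h.1⟩
```

The declarative statement is the type `A ∧ B → B ∧ A`; the operational tool that establishes it is the term on the right.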
The logical theory I’ve alluded to here is deep and subtle, but all we need for now is the idea that when we model a system we are describing the world about which we want to make assertions. Those assertions may, for example, be properties about the correctness or security (CIA) of the system. Then we write something like
S ⊨ φ
which means that the system model S has the logical property denoted by the logical formula φ; that is, that the property described by φ is true in the world modelled by S. The model S itself is described in terms of things called ‘process algebra’ and ‘resource semantics’. So, the operational stuff lives in S and is used to describe the world. The declarative stuff lives in φ and asserts what should be true. For example, S might describe an access control system, based on password authentication or ID cards, and φ might assert that only individuals in a specified group may access the filestore or building.
Then we can pull some logicians’ stunts. Logic allows us to describe the compositional structure of a system (i.e., how it is built from its component parts), how it evolves over time, how it uses resources, and more [lots more scientific detail].
These logical ideas can, I think, shed some light on a debate in (information) security, started perhaps by Donn Parker in his book ‘Fighting Computer Crime’ but with similar suggestions and issues arising all over the place. Parker employs the concepts of availability, utility, integrity, authenticity, confidentiality, and possession for his analysis. And the debate is the following: is this a better organization than CIA?
I think our modelling framework provides a convenient way to understand how all these things fit together. Parker adds to CIA authenticity, possession, and utility. Let’s consider first ‘possession’. Here the idea is that an agent may have control of an item of information without necessarily breaching its confidentiality — for example, a thief may possess an envelope that contains a security code, but may not have opened the envelope. Our framework suggests there is no need to take this as a new primitive concept: we would assert, using ideas from mathematical logic, that the system is in a state such that if a certain action were to be taken (i.e., if the envelope were opened), then the confidentiality of the security code would no longer hold.
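The envelope example can be rendered as a tiny state-and-action sketch (hypothetical encoding, purely to make the point concrete): possession is just a state in which a confidentiality-breaking action is *available*, not one in which confidentiality has already failed.

```python
# Illustrative sketch of 'possession' as a conditional assertion.
# The state encoding is invented for this example.

state = {"thief_has_envelope": True, "envelope_open": False}

def confidentiality(state):
    """Declarative: the code remains confidential while the
    envelope is sealed, whoever holds it."""
    return not (state["thief_has_envelope"] and state["envelope_open"])

def open_envelope(state):
    """Operational action: produces the successor state."""
    return {**state, "envelope_open": True}

print(confidentiality(state))                 # True: possession alone is fine
print(confidentiality(open_envelope(state)))  # False: the action breaks it
```

So ‘possession’ need not be a primitive: it is the assertion that the open-envelope action leads from a confidential state to a non-confidential one.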
Authenticity is concerned with provenance. Is the claimed authorship of a document true? For us, this is rather straightforward: we make a logical assertion about the origin of a document within a system. Roughly speaking, either it’s true or it isn’t. We don’t need a new primitive concept.
Utility, a concept from economics and in no way tied to security, is much more interesting. Clearly, it adds expressivity to the concepts of security. What’s the right way to handle it? Parker treated it as a new security concept and, in so doing, might be considered to have initiated the field we now, in the light of Ross Anderson’s seminal paper at ACSAC in 2001, call information security economics.
Utility is a measure of usefulness. It applies naturally to operational tools. How reliable is the card reader? Have I lost the encryption key? It’s no use if I have. But it can also be applied to declarative concepts: what value do I assign to protecting the confidentiality of my documents or to maintaining the availability of my website? How do they trade off against each other?
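To make the trade-off idea concrete, here is a toy expected-utility calculation. All figures and control names are invented: the point is only that once declarative goals carry utilities, choosing between operational tools becomes an ordinary optimisation.

```python
# Toy illustration of utility over security goals; all numbers invented.

goals = {"confidentiality": 100, "availability": 60}  # value of each goal

controls = {
    # control: (probability each goal holds if adopted, cost of the control)
    "encrypt_everything": ({"confidentiality": 0.99, "availability": 0.90}, 20),
    "raid_servers":       ({"confidentiality": 0.80, "availability": 0.999}, 15),
}

def expected_utility(probs, cost):
    """Expected value of the goals achieved, minus the control's cost."""
    return sum(goals[g] * p for g, p in probs.items()) - cost

for name, (probs, cost) in controls.items():
    print(name, round(expected_utility(probs, cost), 1))
```

Under these (made-up) numbers, encryption wins; tilt the goal values the other way and RAID does. The security concepts themselves are unchanged; only the reasoning layered over them is new.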
But what is the right way to incorporate utility into our approach to security management? In a range of recent papers, we show how the concept of utility can be added to the logical tools — for modelling and reasoning about systems — that we use for analysing security. Again, we need no new security concepts. Just new ways of reasoning about them.
One thought on “Category errors in (information) security: how logic can help”
I find this quite interesting. Certainly there has been great muddling together of concepts over many years (how many statements have you seen that include the phrase “privacy and security” as though they were a single thing?). I’m not a logician or philosopher, but I’m not entirely convinced that all your examples, including what you attribute to Parker et al., are category errors in the Ryle sense. For example, while it may be reasonable to treat “keep your password secret” as a mere operational concept that does not properly belong with the CIA pillars, I don’t see that it can’t stand alone as a declarative. Perhaps a minor one, but still it seems a valid goal. So what do you do with “lesser” declaratives like this other than deeming them not to be such? Once you start shifting them down the tree, they inherently look more and more operational, and less fundamental. Mission accomplished, perhaps. But how do you ever add anything to your pre-chosen handful of declaratives?
In passing, I think in this context of helping others reason through these things it’s a bad and unnecessary idea to use the phrase “just in case” with the meaning you do, i.e. “if and only if”. I appreciate that participants in any field use common English words and phrases with a meaning specialized to their field, e.g. you’ll get chuckles from your lawyer if you think the phrase “quiet enjoyment” in your lease has much to do with noisy neighbours. But really, here the common English usage is “I’ll take my umbrella today just in case it rains.”, which has nothing to do with your usage. However clunky sounding, “if and only if” is clear and unambiguous.