Science “of” or “for” security?

The choice of preposition – science of security versus science for security – marks an important difference in mental orientation. This post grew out of a conversation last year with Roy Maxion, Angela Sasse and David Pym. Clarifying this small preposition will help us set expectations, understand goals, and ultimately give appropriately targeted advice on how to do better security research.

These small words (for vs. of) unpack into some big differences. Science for security seems to mean taking results from any scientific discipline and using them to make decisions about information security. Thus, “for” is agnostic as to whether there is any work within security that looks like science. Like the trend for evidence-based medicine, science for security would advocate for evidence-based security decisions. This view is advocated by RISCS here in the UK and is probably consistent with approaches like the New School of Information Security.

Science for security does not say that security is not a science; more accurately, it does not seem to care. The view is agnostic: whether security itself is a science simply does not matter. The point seems to be that there is enough difficulty in adapting other sciences for use by security, and that applying the methods of other sciences to security-relevant problems is what matters. There are many examples of this approach, in different flavours. We can see at least three: porting concepts, re-situating approaches, and borrowing methods. We adapt the first two of these from Morgan (2014).

Porting concepts

Economics of infosec is its own discipline (WEIS). The way Anderson (2001) applies economics is to take established principles from economics and use them to shed light on long-standing difficulties in infosec.

Re-situating approaches

This is when some other science understands something, we generalise from that instance, and we try to make a concrete application to security. We might argue that program verification takes this approach, re-situating understanding from mathematics and logic. Studies of keystroke dynamics similarly re-situate understanding from human psychology and physical forensics.
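As a concrete, purely illustrative sketch of what re-situating can look like in code, the snippet below computes the kind of keystroke-timing features – dwell times and flight times – that keystroke-dynamics studies carry over from research on human motor behaviour. The event format, the sample numbers, and the function names are our assumptions for illustration, not taken from any particular study.

```python
# Illustrative sketch only: timing features of the sort keystroke-dynamics
# studies re-situate from research on human motor behaviour.
# The event format and the sample numbers below are assumptions.

from typing import List, Tuple

# Each event: (key, press_time_ms, release_time_ms) for one typed character.
KeyEvent = Tuple[str, float, float]

def dwell_times(events: List[KeyEvent]) -> List[float]:
    """Time each key is held down (release minus press)."""
    return [release - press for _, press, release in events]

def flight_times(events: List[KeyEvent]) -> List[float]:
    """Gap between releasing one key and pressing the next."""
    return [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

if __name__ == "__main__":
    sample = [("p", 0.0, 95.0), ("a", 180.0, 260.0), ("s", 340.0, 430.0)]
    print(dwell_times(sample))   # [95.0, 80.0, 90.0]
    print(flight_times(sample))  # [85.0, 80.0]
```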

Borrowing methods

We might study a security phenomenon according to the methods of an established discipline. Usable security largely applies psychology- and sociology-based methods, for example. Of course, there are specific challenges that might arise in studying a new area such as security (Krol et al., 2016), but the approach is science for security because the challenges result in minor tweaks to the method of the home discipline.


What can infosec learn from strategic theory?

Antonio Roque, of MIT Lincoln Laboratory, has published some provocative papers on arXiv over the last year. These include one on cybersecurity meta-methodology and one on making predictions in cybersecurity. These papers ask some good questions. The one I want to focus on in this short space is what cybersecurity can learn from Carl von Clausewitz’s treatise On War.

This might seem a bit odd to modern computer scientists, but I think it’s a plausible question. Cybersecurity is about winning conflicts, at least sometimes. And as I and others have written, one of the interesting challenges of generating knowledge with a science of security is the fact that we have active adversaries. As Roque tells us, generating knowledge in the face of adversaries is also one of the things On War is about.

One important question for me is whether Clausewitz interestingly presaged our current problems (and has since been overtaken), or if On War makes contributions to thinking about cybersecurity that are new and comparable to those from the fields of economics, mathematics, philosophy of science, etc. After a close reading of these papers, my stance is: I have more questions that need answers.


An untapped resource to reproduce studies

Science is generally accepted to operate by conducting specially designed structured observations (such as experiments and case studies) and then interpreting the results to build generalised knowledge (sometimes called theories or models). An important, nay necessary, feature of the social operation of science is transparency in the design, conduct, and interpretation of these structured observations. We’re going to work from the view that security research is science just like any other, though of course as its own discipline it has its own tools, topics, and challenges. This means that studies in security should be replicable, reproducible, or at least able to be corroborated. Spring and Hatleback argue that transparency is just as important for computer science as it is for experimental biology. Rossow et al. also persuasively argue that transparency is a key feature for malware research in particular. But how can we judge whether a paper is transparent enough? The natural answer would seem to be whether it is possible to make a replication attempt from the materials and information in the paper. Forget how often the replications succeed for now, although we know that there are publication biases and other factors that mess with that.
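To make that judgement slightly more concrete, here is a minimal sketch of one way to operationalise “transparent enough to attempt a reproduction”. The checklist items are our assumptions for illustration; they are not a standard from the literature or from the papers cited above.

```python
# Minimal sketch, with assumed checklist items: one way to record whether a
# paper reports enough material to attempt a reproduction.

REPRODUCTION_CHECKLIST = [
    "study design and hypotheses stated",
    "data, or the data collection procedure, available",
    "analysis code or a detailed analysis procedure available",
    "software environment and versions described",
    "parameters and configuration reported",
]

def transparency_gaps(paper_reports):
    """Return the checklist items a paper does not report."""
    return [item for item in REPRODUCTION_CHECKLIST if item not in paper_reports]

if __name__ == "__main__":
    reported = {
        "study design and hypotheses stated",
        "parameters and configuration reported",
    }
    # Items still missing before a replication attempt is feasible.
    print(transparency_gaps(reported))
```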

So how many security papers published in major conferences contain enough information to attempt a reproduction? In short, we don’t know. From anecdotal evidence, Jono and a couple of students looked through the IEEE S&P 2012 proceedings in 2013, and the results were pretty grim. But heroic effort from a few interested parties is not a sustainable answer to this question. We’re here to propose a slightly more robust solution. Master’s students in security should attempt to reproduce published papers as their capstone thesis work. This has several benefits and several challenges. In the following we hope to convince you that the challenges can be mitigated and the benefits are worth it.

This should be a choice, but one that master’s students should want to make. If anyone has a great new idea to pursue, they should be encouraged to do so. However, here in the UK, the dissertation process is compressed into the summer and there’s not always time to prototype and pilot study designs. Selecting a paper to reproduce, with a documented methodology in place, lets the student get to work faster. There is still a start-up cost; students will likely have to read several abstracts to shortlist a few workable papers, and then read these few papers in detail to select a good candidate. But learning to read, shortlist, and study academic papers is an important skill that all master’s students should be attempting to, well, master. This style of project would provide them with an opportunity to practice these skills.

Briefly, let’s be clear what we mean by reproduction of published work.
Reproduction isn’t just one thing. There’s reproduce and replicate and corroborate and controlled variation (see Feitelson for details). Not everything is amenable to reproduction. For example, case studies (such as attack papers) or natural experiments are often interesting because they are unique. Corroborating some aspect of the case may be possible with a new study, and such a study is also valuable. But this is not the sort of reproduction we have in mind to advocate here.


Attack papers are case studies

We should treat attack papers like case studies – when we read them, review them, use them for evidence, and learn from them. This claim is not derogatory. Case studies are useful. But like anything, to be useful case studies need to be done and used appropriately.

Let’s be clear what I mean by attack paper. I mean any paper that reports how to attack some system: any paper that includes details of an exploit, discloses a vulnerability, or demonstrates a proof-of-concept for breaching the security of a system. The efail paper that Steven discussed recently is an example. Security conferences are full of these; the ratio of attack papers to total papers varies by conference. USENIX Security tends to contain a fair few.

Let’s be clear what I mean by case study. I mean a scientific report that details a specific occurrence of interest as observed by the author. Case studies can be active, and include interviews or other questioning. They can be solely passive observation. Case studies can follow just one case in isolation, or might follow a series of related cases in similar ways for comparison. Case studies usually do not involve a planned intervention by the observer; otherwise we start to call them experiments. But they may track changes as the result of interventions outside the observer’s control.

What might change if we think about attack papers as case studies? We can apply our scientific experience from other disciplines. I’ve argued before that security is a science. We need to adapt scientific techniques, and other sciences might learn from what we do in security. But we need to be in a dialogue there. Calling attack papers what they are opens up this dialogue in several ways.


Thinking about fake news – as a security incident?

In Tristan and David’s Philosophy, Politics and Economics of Security and Privacy class, Jono gave a little information about incident response.  As a result, we have been thinking about the recent furor over fake news. There are some big questions circling this topic, and we’re going to try to focus on a part we have some competence in: what an understanding of fake news as a security incident can contribute to the wider debate. Our goal here is mostly to highlight some lessons from security research that should be applicable, so we can help constrain the solution space. Ultimately, any solution will need to engage with wider civil society.

The lessons we will argue for in the following are:

  • Solutions need to support the elector’s primary task. Education to avoid cognitive biases is not a short- or medium-term solution.
  • Focus on aligning the incentives of the media companies and the voters. Reduce the return on investment for the adversary.
  • Any blocking should be strategically useful, and not merely reactionary.

First, we want a more specific term, as well as a less charged one. Fake news includes politically or financially motivated stories presented as factual reports on the world that are fictional in material ways, and are usually intended to stir strong feelings. This definition is hardly complete. Furthermore, similar to the term “post-truth” as discussed by Jasanoff and Simmet, the term “fake news” makes several value judgements we’d like to avoid. “Fake news” carries a strong suggestion that we, the speakers, know what is true and what isn’t, and it also indicates some condescension by the speaker towards anyone who believes an item of fake news. We want to avoid such insults. Instead, let’s say we want to focus on the following hypothetical security policy: democratic elections should be free from foreign interference.

Grounding out this policy definition hangs on the term “interference.” This is hard. Ultimately, the will of an elector in a free and fair election needs to be respected. This makes it particularly challenging to agree on constraints on what information an elector has access to. In practice, no elector is omniscient, so some constraints de facto exist. But weighing in on this issue is outside our competence. Let’s assume for now that public policy will provide an assessment of “interference” eventually. The UK recently announced that a “dedicated national security communications unit” would be charged with “combating disinformation by state actors and others.” In France, Emmanuel Macron plans legislation to fight interference from foreign sources during elections. Various social media platforms have likewise announced attempted fixes, which means they have some functional definition of what “interference” they’re seeking to remove. Unfortunately, “none of the tech giants claim to be ready” for the November 2018 elections in the US.

Interference in elections is a type of information warfare. An appropriate security policy needs to assess the threat environment and the capabilities of the adversaries. In particular, the Russian Federation has been assessed as a highly motivated and well-resourced actor in this space. We should note that Russia, in turn, assesses the intent and capability of the USA similarly. Tools and tactics within information warfare, particularly disinformation campaigns, help define “interference” within our security policy.

In this context, what can the security research community recommend? Well, the main targets of these disinformation campaigns are ordinary citizens. They are targetable largely due to inherent cognitive biases in the way humans process and reason about information. In security terms, we could see these biases as vulnerabilities in the system. Classically, we have two options to secure the system: patch the vulnerability, or prevent the adversary from exploiting it by controlling or filtering the attack before it reaches the target.

Patching in this case would mean teaching people to avoid cognitive biases in their day-to-day reasoning. Psychology tells us this is hard. Intelligence analysts train for months or years for this. And research in usable security has affirmed time and time again that users are not the enemy. That is, the system must alleviate the burden on the user’s attention and not interfere with their primary task, or else the user will subvert or avoid the protections put in place. Any changes in user culture are slow. This leads us to lesson 1 on preventing disinformation campaigns for election interference: solutions need to support the elector’s primary task. Education to avoid cognitive biases is not a short-term or medium-term solution.

Controlling the attack vectors is more promising, although filtering them is not. A key aspect of any information security policy is aligning the economic incentives of the actors. Economics is a main reason why infosec is hard. It may not be easy to reorganize the incentives in the advertising and news distribution media space. However, as long as organizations profit from more clicks on an article no matter the content, there will be an incentive to attract viewers that is ultimately at cross-purposes with our security goal. Such misaligned incentives often swamp any technical security solutions. And any adversary with an economic incentive to attack usually will. Thus our second lesson: focus on aligning the incentives of the media companies and the voters; reduce the return on investment for the adversary. Exactly how to do these things will require future work.
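To make lesson 2 concrete, here is a toy return-on-investment model for the adversary. It is our own illustrative sketch, not a model from the post or the literature, and every number in it is made up; the point is only that defenders can push ROI down either by reducing the value the adversary extracts per view or by raising the adversary’s costs.

```python
# Toy model, illustrative numbers only: adversary return on investment
# for a single disinformation campaign.

def campaign_roi(reach, conversion_rate, value_per_conversion, cost):
    """(benefit - cost) / cost for one campaign."""
    benefit = reach * conversion_rate * value_per_conversion
    return (benefit - cost) / cost

if __name__ == "__main__":
    # Hypothetical baseline: wide reach and cheap distribution.
    print(campaign_roi(reach=1_000_000, conversion_rate=0.01,
                       value_per_conversion=0.50, cost=2_000))   # 1.5

    # Halve the reach (e.g. demotion rather than removal) and double the
    # adversary's distribution cost: the same campaign now loses money.
    print(campaign_roi(reach=500_000, conversion_rate=0.01,
                       value_per_conversion=0.50, cost=4_000))   # -0.375
```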

There are huge issues about human rights and free speech when it comes to blocking access to information. However, the technical aspects of blacklisting are worth understanding before even attempting such human-rights debates. Blacklists of internet resources, such as domain names, IP addresses, or web pages, are useful. But they’re not a final solution. Whether blacklists move at the speed of national legislatures or are updated every five minutes, their main impact is to cause the adversary to move around. Blacklists alone are not enough. We would need to look for suspiciously mobile resources (i.e. fast-flux), and eventually whitelist resources. Blacklists such as those implemented by Facebook in response to Congress are helpful. But we should carefully consider how they drive the disinformation campaigns into a place where we are better able to counteract them, and be sure we don’t make such campaigns harder to find instead. Lesson 3 is therefore that any blocking should be strategically useful, and not merely reactionary.
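As a rough illustration of what “suspiciously mobile resources” could mean operationally, the sketch below flags domains whose resolved IP addresses churn quickly, a crude fast-flux heuristic. The input format and the threshold are assumptions for illustration, not an operational detector.

```python
# Crude fast-flux heuristic, for illustration only: flag domains that resolve
# to many distinct IP addresses within a short window. The input format and
# the threshold below are assumptions, not operational values.

from collections import defaultdict

def flag_fast_flux(observations, window_seconds=3600.0, distinct_ip_threshold=10):
    """observations: iterable of (timestamp_seconds, domain, resolved_ip).
    Returns the set of domains seen with many distinct IPs inside one window."""
    per_domain = defaultdict(list)
    for ts, domain, ip in observations:
        per_domain[domain].append((ts, ip))

    flagged = set()
    for domain, hits in per_domain.items():
        for ts, _ in hits:
            # Distinct IPs observed in the window ending at this observation.
            window_ips = {ip for t, ip in hits if ts - window_seconds <= t <= ts}
            if len(window_ips) >= distinct_ip_threshold:
                flagged.add(domain)
                break
    return flagged
```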

We’d be happy for further comments on fake news, disinformation campaigns that interfere with elections, lessons we’ve missed, disagreements about the value of security research to this topic, and other comments you might have! This is a wide open topic, and we’re still sounding it all out.

Practicing a science of security

Recently, at NSPW 2017, Tyler Moore, David Pym, and I presented our work on practicing a science of security. The main argument is that security work – both in academia and in industry – already looks a lot like other sciences. The paper is also an introduction to modern philosophy of science for security, and a survey of the existing science of security discussion within computer science. The goal is to help us ask more useful questions about what we can do better in security research, rather than get distracted by asking whether security can be scientific.

Most people writing about a science of security conclude that security work is not a science, or at best rather hopefully conclude that it is not a science yet but could be. We identify five common reasons people present as to why security is not a science: (1) experiments are untenable; (2) reproducibility is impossible; (3) there are no laws of nature in security; (4) there is no single ontology of terms to discuss security; and (5) security is merely engineering.

Through our introduction to modern philosophy of science, we demonstrate that all five of these complaints are misguided. They rely on an old conception of what counts as science that was largely abandoned in the 1970s, when the features of biology came to be recognized as important and independent from the features of physics. One way to understand what the five complaints actually allege is that security is not physics. But that’s much less impactful than claiming it is not science.

More importantly, we have a positive message on how to overcome these challenges and practice a science of security. Instead of complaining about untenable experiments, we can discuss structured observations of the empirical world. Experiments are just one type of structured observation. We need to know what counts as a useful structure to help us interpret the results as evidence. We provide recommendations for the use of randomized controlled trials, as well as references for useful design of experiments that collect qualitative empirical data. Ethical constraints are also important; the Menlo Report provides a good discussion of how to address them when designing structured observations and interventions in security.

Complaints about reproducibility are really targeted at the challenge of interpreting results. Astrophysics and paleontology do not reproduce experiments either, but are clearly still sciences. There are different senses of “reproduce,” from repeating exactly to corroborating by similar observations in a different context. There are also notions of statistical reproducibility, such as using the right tests and having enough observations to justify a statistical claim. The complaint is unfair in essentially demanding all eight types of reproducibility at once, when realistically any individual study will only be able to probe a couple of types at best. Seen with this additional nuance, security faces challenges in reproducibility and in interpreting evidence similar to those in other sciences.

A law of nature is a very strange thing to ask for when we have constructed the devices we are studying. The word “law” has had a lot of sticking power within science. The word was perhaps used in the 1600s and 1700s to imply a divine designer, thereby making the Church more comfortable with the work of the early scientists. The intellectual function we really care about is that a so-called “law” lets us generalize from particular observations. Mechanistic explanations of phenomena provide a more useful and approachable goal for our generalizations. A mechanism “for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon” (pg 2).

MITRE wrote the original statement that a single ontology was needed for a science of security. They also happen to have a large research group funded to create such an ontology. We synthesize a more realistic view from Galison, Mitchell, and Craver. Basically, diverse fields contribute to a science of security by collaboratively adding constraints on the available explanations for a phenomenon. We should expect our explanations of complex topics to reflect that complexity, and so complexity may be a mark of maturity rather than, as is commonly assumed, a mark that security has not yet become a science because it has failed to simplify everything into one language.

Finally, we address the relationship between science and engineering. In short, people have tried to reduce science to engineering and engineering to science. Neither attempt is convincing. The line between the two is blurry, but it is useful. Engineers generate knowledge, and scientists generate knowledge. Scientists tend to want to explain why, whereas engineers tend to want to predict a change in the future based on something they make. Knowing why may help us make changes. Making changes may help us understand why. We draw on the work of Dear and Leonelli to bring out this nuanced, mutually supportive relationship between science and engineering.

Security can already accommodate all of these perspectives. There is nothing here that makes it seem any less scientific than the life sciences. What we hope to gain from this reorientation is to refocus the question about cybersecurity research from ‘is this process scientific?’ to ‘why is this scientific process producing unsatisfactory results?’.

Security intrusions as mechanisms

The practice of security often revolves around figuring out what (malicious act) happened to a system. This historical inquiry is the focus of forensics, specifically when the inquiry concerns the violation of a policy (such as a law). The results of a forensic investigation might be used to fix the impacted system, attribute the attack to adversaries, or build more resilient systems going forward. However, to serve any of these purposes, the investigator first must discover the mechanism of the intrusion.

As discussed at an ACE seminar last October, one common framework for this discovery task is the intrusion kill chain. Mechanisms, mechanistic explanation, and mechanism discovery have highly developed meanings in the biological and social sciences, but these terms are not often used in information security. In a recent paper, we argue that incident response and forensics investigators would be well served to make use of the existing literature on mechanisms: thinking about intrusion kill chains as mechanisms is a productive and useful way to frame the work.

To some extent, thinking mechanistically is a description of what (certain) scientists do. But the mechanisms literature within philosophy of science is not merely descriptive. The normative benefits extolled include that thinking mechanistically is an effective heuristic for searching out useful explanations; that mechanisms provide the most coherent unity to complex fields of study; and that mechanistic explanation is necessary to guide selection among potential studies given limited experimental resources, to guide experiment design decisions, and to guide the interpretation of statistical results. I previously argued that capricious use of biological metaphors is bad for information security. We are keenly aware that these benefits of mechanistic explanation need to apply in and for security itself, not merely because they work in other sciences.

Our paper demonstrates how we can cast the intrusion kill chain, the diamond model, and other models of security intrusions as mechanistic models. This casting begins to demonstrate the mosaic unity of information security. Campaigns are made up of attacks. Attacks, as modeled by the kill chain, have multiple steps. In a specific attack, the delivery step might be accomplished by a drive-by download. So we demonstrate how drive-by downloads are a mechanism, one among many possible delivery mechanisms. This description is a schema to be filled in during a particular drive-by-download incident with a specific URL, specific JavaScript, and so on. The mechanistic schema of the delivery mechanism informs the investigator because it indicates what types of network addresses to look for, and how to fit them into the explanation quickly. This process is what Lindley Darden calls schema instantiation in the mechanism discovery literature.
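As a purely illustrative sketch of schema instantiation (not the formalism used in the paper), the snippet below represents the delivery step of a drive-by download as a schema with placeholder roles, which an investigator fills in with the specifics of an incident; the unfilled roles indicate what to look for next. All names and values are hypothetical.

```python
# Illustrative sketch only, not the paper's formalism: a drive-by-download
# delivery schema whose placeholder roles an investigator fills in for a
# specific incident (roughly, Darden-style schema instantiation).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DeliverySchema:
    """Generic roles in a drive-by-download delivery mechanism."""
    lure_url: Optional[str] = None                            # page the victim visited
    redirect_chain: List[str] = field(default_factory=list)   # intermediate hops
    exploit_script: Optional[str] = None                      # malicious JavaScript or kit
    payload_host: Optional[str] = None                        # where the malware was fetched

    def missing_roles(self) -> List[str]:
        """Roles still unfilled: what the investigator should look for next."""
        return [name for name, value in vars(self).items() if not value]

# Instantiating the schema for one entirely fictional incident:
incident = DeliverySchema(lure_url="hxxp://news.example.invalid/article")
incident.redirect_chain.append("hxxp://ads.example.invalid/r?x=1")
print(incident.missing_roles())  # ['exploit_script', 'payload_host']
```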

Our argument is not that good forensics investigators fail to use such mechanism discovery strategies. Rather, it is precisely that good investigators do use them. But we need to describe what it is that good investigators in fact do. We do not currently do so, and that gap makes teaching new investigators particularly difficult. Thinking about intrusions as mechanisms unlocks an expansive literature on good ways to do mechanism discovery. This literature will make it easier to codify what good investigators do, which among other benefits allows us to better teach sound methodological practices to incoming investigators.

Our paper on this topic was published in the open-access Journal of Cybersecurity, as Thinking about intrusion kill chains as mechanisms, by Jonathan M. Spring and Eric Hatleback.