Should you phish your own employees?

No. Please don’t. It does little for security, harms productivity (staff spend ages pondering emails rather than answering legitimate ones), upsets staff, and destroys trust within an organisation.

Why is phishing a problem?

Phishing is one of the more common ways by which criminals gain access to companies’ passwords and other security credentials. The criminal sends a fake email to trick employees into opening a malware-containing attachment, clicking on a link to a malicious website that solicits passwords, or carrying out a dangerous action like transferring funds to the wrong person. If the attack is successful, criminals could impersonate staff, gain access to confidential information, steal money, or disrupt systems. It’s therefore understandable that companies want to block phishing attacks.

Perimeter protection, such as blocking suspicious emails, can never be 100% accurate. Therefore companies often tell employees not to click on links or open attachments in suspicious emails.

The problem with this advice is that it conflicts with both how the technology works and how employees get their jobs done. Links are meant to be clicked on; attachments are meant to be opened. For many employees, the job consists almost entirely of opening attachments from strangers and clicking on links in emails. Even a moderately well targeted phishing email will almost certainly get some employees to click.

Companies try to deal with this problem through more aggressive training, particularly sending out mock phishing emails that exhibit some of the characteristics of phishing emails but actually come from the IT staff at the company. The company then records which employees click on the link in the email, open the attachment, or provide passwords to a fake website, as appropriate.

The problem is that mock-phishing causes more harm than good.

What harm does mock-phishing cause?

I hope no company would publicly name and shame employees who open mock-phishing emails, but effectively telling staff that they failed a test and need remedial training will make them feel ashamed, despite the best of intentions. If, as is often recommended, employees who repeatedly open mock-phishing emails are subjected to disciplinary procedures, mock phishing will not only cause stress and a consequent loss of productivity, but will also make it less likely that employees will report when they have clicked on a real phishing email.

Alienating your employees in this way is really the last thing a company should do if it wants to be secure – something Adams & Sasse pointed out as early as 1999. It is extremely important that companies learn when a phishing email has been opened, because there is a lot that can be done to prevent or limit the harm. Contrary to popular belief, attacks don’t generally happen “at the speed of light” (it took the Target hackers three weeks from the initial breach to steal data). Promptly cleaning potentially infected computers, revoking compromised credentials, and analysing network logs are extremely effective, but this works only if employees feel that they are on the same side as the IT staff.

More generally, mock-phishing conflicts with and harms the trust relationship between the company and its employees (because the company is continually probing them for weakness) and between employees (because mock-phishing normally impersonates fellow employees). Kirlappos and Sasse showed that trust is essential for maintaining employee satisfaction and for creating organisational resilience, including the ability to comply with security policies. Left unchecked, prolonged resentment within an organisation achieves exactly the opposite – it increases the risk of insider attacks, which in the vast majority of cases start with disgruntlement.

There are, however, ways to achieve the same goals as mock phishing without the resulting harm.

Measuring resilience against phishing

Companies are right to want to understand how vulnerable they are to attack, and mock-phishing seems to offer this. One problem, however, is that the likelihood of a phishing email being opened depends mainly on how well it is written, so mock-phishing campaigns tell you more about the campaign than about the organisation.

Instead, because every organisation inevitably receives many phishing emails, companies don’t need to send out their own: “genuine” phishing emails can be used to collect the data needed, taking care not to deter reporting. Realistically, however, phishing emails are going to be opened regardless of what steps are taken (short of cutting off Internet email completely), so organisations’ security strategies should accommodate this.
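
For example, a first-pass resilience measure could be computed from the security team’s own logs of real campaigns. The sketch below is my own minimal illustration of the idea – the data structure and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PhishingCampaign:
    """One real phishing campaign observed arriving at the organisation."""
    emails_delivered: int   # copies that reached inboxes
    reports_received: int   # employees who reported it to security
    confirmed_clicks: int   # clicks found in mail/proxy logs

def resilience_metrics(c: PhishingCampaign) -> dict:
    # Report rate measures how widely staff raise the alarm;
    # click rate measures how often the genuine email succeeded.
    return {
        "report_rate": c.reports_received / c.emails_delivered,
        "click_rate": c.confirmed_clicks / c.emails_delivered,
    }

# Example: a real campaign that reached 200 inboxes
print(resilience_metrics(PhishingCampaign(200, 35, 4)))
```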

Reducing vulnerability to phishing

Following mock-phishing with training might seem like the perfect moment to have employees’ attention, but it is actually an ineffective way to reduce an organisation’s vulnerability to phishing. Caputo et al. tried this out and found that training had no significant effect, regardless of how it was phrased (using the latest nudging techniques from behavioural economists, an idea many security practitioners find very attractive). In this study, the organisation’s help desk staff were overwhelmed by calls from panicked employees – and when told it was a “training exercise”, many expressed frustration and resentment towards the security staff who had tricked them. Even if phishing-prevention training could be made to work, because the act of opening a malicious email is so close to what people do as part of their job, it would disrupt business by causing employees to delete legitimate email or to spend too long deciding whether to open it.

A strong, unambiguous, and reliable cue that distinguishes phishing emails from legitimate ones would help, but until we have secure end-to-end encrypted and authenticated email, this isn’t possible. We are left with the task of designing security systems that accept that some phishing emails will be opened, rather than pretending they won’t be and blaming breaches on employees who fail to meet an unachievable bar. If employees are consistently told that their behaviour is not good enough, but are not given realistic and actionable advice on how to do better, the result is learned helplessness, with all its negative psychological consequences.

Comply with industry “best-practice”

Something must be done to protect the company; mock-phishing is something; therefore it must be done. This perverse logic is the root cause of much poor security, where organisations think they must comply with so-called “best practice” – seldom more than out-of-date folk tradition – or face penalties when there is a breach. It is for this reason that bad security guidance, such as password complexity rules, persists long after it has been shown to be ineffective.

Compliance culture, where rules are blindly followed without there being evidence of effectiveness, is one of the worst reasons to adopt a security practice. We need more research on how to develop technology that is secure and that supports an organisation’s overall goals. We know that mock-phishing is not effective, but what’s the right combination of security advice and technology that will give adequate protection, and how do we adapt these to the unique situation of each company?

What to do instead?

The security industry should take the lead of the aerospace industry and recognise that “blame and train” isn’t an effective or acceptable strategy. The attraction of mock-phishing exercises to security staff is that they can say they are “doing something”, and they like the idea of being able to measure the resulting change in behaviour – even though research points the other way. When vendors claim to have examples of mock-phishing training reducing clicks on links, it is usually because employees have been trained to recognise only that vendor’s mock-phishing emails, or have been frightened into not clicking on any links at all – and nobody measures the losses that occur because emails from actual or potential customers or suppliers go unanswered. “If security doesn’t work for people, it doesn’t work.”

When the CIO of a merchant bank found that mock phishing caused much anger and resentment among highly paid traders, but no reduction in clicking on links, he started to listen to what it looked like from their side: “Your security specialists can’t tell if it is a phishing email or not – why are you expecting me to be able to do that?” After seeing the problem from their perspective, he instead added a button to the corporate mail client labelled “I’m not sure”, and asked staff to use it to forward emails they were unsure about to the security department. The security department then told each employee whether the email was genuine, and listed all identified malicious emails on a website employees could check before forwarding further emails. Clicking on phishing links dropped to virtually zero – and staff started talking to each other about phishing emails they had seen, and what the attacker was trying to do.

Security should deal with the problems that actually face the company; preventing phishing wouldn’t have stopped recent ransomware attacks. Where phishing is a concern, phishing emails should be blocked wherever this can be done with adequate accuracy. Some will get through, but with well-engineered and promptly patched systems, harm can be limited. Phishing-resistant authentication credentials, such as FIDO U2F, mean that stolen passwords are worthless. Common processes should be designed so that the easy option is the secure one, giving people time to think carefully about whether a request for an exception is legitimate. Finally, if malware does get onto company computers, compartmentalisation will limit damage, effective monitoring facilitates detection, and good backups allow rapid recovery.
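
The reason credentials like U2F resist phishing is that the authenticator’s response is cryptographically bound to the web origin the browser is actually talking to, so a response captured by a fake site is useless at the real one. The sketch below illustrates only that origin-binding idea and is not the actual FIDO protocol – real U2F uses per-site public-key pairs and signed challenges, not the shared-secret MAC used here:

```python
import hashlib
import hmac
import os

def sign_assertion(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    # The authenticator mixes in the origin the browser is *actually*
    # talking to -- the user cannot be tricked into overriding it.
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def verify(device_key: bytes, challenge: bytes, expected_origin: str,
           assertion: bytes) -> bool:
    expected = sign_assertion(device_key, challenge, expected_origin)
    return hmac.compare_digest(expected, assertion)

key, challenge = os.urandom(32), os.urandom(16)

# Legitimate login: the browser reports the real origin, so it verifies.
genuine = sign_assertion(key, challenge, "https://bank.example")
assert verify(key, challenge, "https://bank.example", genuine)

# Phished login: the response is bound to the attacker's origin, so
# replaying it against the real site fails.
phished = sign_assertion(key, challenge, "https://bank-login.example.net")
assert not verify(key, challenge, "https://bank.example", phished)
```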

 

An earlier version of this article was published by the New Statesman.

Online security won’t improve until companies stop passing the buck to the customer

It’s normally in the final seconds of a TV or radio interview that security experts get asked for advice for the general public – something simple, unambiguous, and universally applicable. It’s a fair question, and what the public want. But simple answers are usually wrong, and can do more harm than good.

For example, take the UK government’s Cyber Aware scheme to educate the public in cybersecurity. It recommends individuals choose long and complex passwords made out of three words. The problem with this advice is that the resulting passwords are hard to remember, especially as people have many passwords and use some infrequently. Consequently, they will be tempted to use the same password on multiple websites.

Password re-use is far more of a security problem than insufficiently complex passwords, so advice that doesn’t help people manage multiple passwords does more harm than good. Instead, I would recommend remembering your most important passwords (such as banking and email) and storing the rest in a password manager. This approach isn’t perfect or suitable for everyone, but for most people it will improve their security.

Advice unfit for the real world

Cyber Aware also tells people not to write down their passwords, or let anyone else know them – banks require the same thing. But we know that people commonly share their banking credentials with family, for legitimate reasons. People also realise that writing down passwords is a pretty good approach if you’re only worried about internet hackers, rather than people who can get close to you to see the written notes. Security advice that doesn’t stand up to scrutiny or doesn’t fit with people’s lives will be ignored – and will discredit the organisation offering it.

Because everyone’s situation is different, good security advice should include helping people to understand what risks they should be worried about, and to take steps that mitigate these risks. This advice doesn’t have to be complicated. Teen Vogue published a tutorial on how to select and configure a secure messaging tool, which very sensibly explains that if you are more worried about invasions of privacy from people who can get their hands on your phone, you should make different choices than if you are just concerned about, for example, companies spying on you.

The Teen Vogue article was widely praised by security experts, in stark contrast to an article in The Guardian that made the eye-catching claim that encrypted messaging service WhatsApp is insecure, without making clear that this only applies in an obscure and extremely unlikely set of circumstances.

Zeynep Tufekci, a researcher studying the effects of technology on society, reported that the article was exploited to legitimise misleading advice given by the Turkish government that WhatsApp is unsafe, resulting in human rights activists using SMS instead – which is far easier for the government to censor and monitor.

The Turkish government’s “security advice” to move from WhatsApp to less secure SMS was clearly aimed more at assisting its surveillance efforts than at helping the activists to whom the advice was directed. Another case where the advice is more for the benefit of the organisation giving it is that of banks, where the small print of the terms and conditions gives incomprehensible security advice that isn’t really security advice at all, but merely a legal technique giving the banks wiggle room to refuse to refund victims of fraud.

Continue reading Online security won’t improve until companies stop passing the buck to the customer

Can Games Fix What’s Wrong with Computer Security Education?

We had the pleasure of Zachary Peterson visiting UCL on a Cyber Security Fulbright Scholarship. The title of this post comes from his presentation at our annual ACE-CSR event in November 2016.

Zachary Peterson is an associate professor of computer science at Cal Poly, San Luis Obispo. The key problem he is trying to solve is that the educational system is producing many fewer computer security professionals than are needed; an article he’d seen just two days before the ACE meeting noted a 73% rise in job vacancies in the last year despite a salary premium of 9% over other IT jobs. This information is backed up by the 2014 Taulbee survey, which found that the number of computer security PhDs has declined to 4% of the US total. Lack of diversity, which sees security dominated by white and Asian males, is a key contributing factor. Peterson believes that diversity is not only important as a matter of fairness, but essential because white males are increasingly a demographic minority in the US and because monocultures create perceptual blindness. New perspectives are especially needed in computer security as present approaches are not solving the problem on their own.

Peterson believes that the numbers are so bad because security is under-represented in both the computer science curriculum and in curriculum standards. The ACM 2013 curriculum guidelines recommend only three contact hours (also known as credit hours) of computer security in an entire undergraduate computer science degree. These are typically relegated to an upper-level elective class, subject to a long chain of prerequisites, so they are only ever seen by a self-selected group who have survived years of attrition – which disproportionately affects women. The result is to create a limited number of specialists, unnecessarily constrain the student body, and limit the time students have to practice before joining the workforce. In addition, the self-selected group who do study security late in their academic careers have already developed set habits and a fixed mindset before encountering an engineering task. Making security a core competency and teaching it as early as secondary school is essential, but has challenges: security can be hard, and pushing it to the forefront may worsen existing problems seen in computer science more broadly, such as the perception of the discipline as solitary, anti-social, and lacking in creativity.

Peterson believes games can help improve this situation. CTFTime, which tracks games events, reports an explosion in cyber security games since 2013, to over 56 events per year. These games, if done correctly, can teach core security skills in an entertaining – and social – way, with an element of competition. Strategic thinking, understanding an adversary’s motivation, rule interpretation, and rule-breaking are essential for both game-playing and security engineering.

Continue reading Can Games Fix What’s Wrong with Computer Security Education?

Security intrusions as mechanisms

The practice of security often revolves around figuring out what (malicious act) happened to a system. This historical inquiry is the focus of forensics, specifically when the inquiry concerns a policy violation (such as a law). The results of a forensic investigation might be used to fix the impacted system, attribute the attack to adversaries, or build more resilient systems going forward. However, for any of these purposes, the investigator first must discover the mechanism of the intrusion.

As discussed at an ACE seminar last October, one common framework for this discovery task is the intrusion kill chain. Mechanisms, mechanistic explanation, and mechanism discovery have highly-developed meanings in the biological and social sciences, but the word is not often used in information security. In a recent paper, we argue that incident response and forensics investigators would be well-served to make use of the existing literature on mechanisms, as thinking about intrusion kill chains as mechanisms is a productive and useful way to frame the work.

To some extent, thinking mechanistically is a description of what (certain) scientists do. But the mechanisms literature within philosophy of science is not merely descriptive. The normative benefits extolled include that thinking mechanistically is an effective heuristic for searching out useful explanations; mechanisms provide the most coherent unity to complex fields of study; and that mechanistic explanation is necessary to guide selection among potential studies given limited experimental resources, experiment design decisions, and interpretation of statistical results. I previously argued that capricious use of biological metaphors is bad for information security. We are keenly aware that these benefits of mechanistic explanation need to apply to security as and for security, not merely because they work in other sciences.

Our paper demonstrates how we can cast the intrusion kill chain, the diamond model, and other models of security intrusions as mechanistic models. This casting begins to demonstrate the mosaic unity of information security. Campaigns are made up of attacks. Attacks, as modelled by the kill chain, have multiple steps. In a specific attack, the delivery step might be accomplished by a drive-by-download, so we demonstrate how drive-by-downloads are a mechanism, one among many possible delivery mechanisms. This description is a schema to be filled in during a particular drive-by-download incident with a specific URL, specific JavaScript, and so on. The mechanistic schema of the delivery mechanism informs the investigator because it indicates what types of network addresses to look for, and how to fit them into the explanation quickly. This process is what Lindley Darden calls schema instantiation in the mechanism discovery literature.
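
As a rough illustration of schema instantiation (my own sketch, not taken from the paper; all field names are hypothetical), a delivery-mechanism schema can be thought of as a record with slots, where the unfilled slots drive the next steps of the investigation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriveByDownloadSchema:
    """Schema for the drive-by-download delivery mechanism.

    None marks a slot the investigation has not yet filled in.
    """
    lure_url: Optional[str] = None        # link the victim followed
    exploit_script: Optional[str] = None  # malicious JavaScript served
    payload_host: Optional[str] = None    # where the malware was fetched from

    def open_questions(self) -> list:
        # Unfilled slots tell the investigator what to look for next,
        # e.g. which network addresses to search the logs for.
        return [slot for slot, value in vars(self).items() if value is None]

incident = DriveByDownloadSchema(lure_url="http://example.invalid/offer")
print(incident.open_questions())  # -> ['exploit_script', 'payload_host']
```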

Our argument is not that good forensics investigators fail to use such mechanism discovery strategies – rather, it is precisely that good investigators do use them. But we need to describe what it is that good investigators in fact do; we do not currently have such a description, and that lack makes teaching new investigators particularly difficult. Thinking about intrusions as mechanisms unlocks an expansive literature on good ways to do mechanism discovery. This literature will make it easier to codify what good investigators do, which among other benefits allows us to better teach sound methodological practices to incoming investigators.

Our paper on this topic was published in the open-access Journal of Cybersecurity, as Thinking about intrusion kill chains as mechanisms, by Jonathan M. Spring and Eric Hatleback.

Steven Murdoch – Privacy and Financial Security

Probably not too many academic researchers can say this: some of Steven Murdoch’s research leads have arrived in unmarked envelopes. Murdoch, who has moved to UCL from the University of Cambridge, works primarily in the areas of privacy and financial security, including a rare specialty you might call “crypto for the masses”. It’s the financial security aspect that produces the plain, brown envelopes and also what may be his most satisfying work, “Trying to help individuals when they’re having trouble with huge organisations”.

Murdoch’s work has a twist: “Usability is a security requirement,” he says. As a result, besides writing research papers and appearing as an expert witness, his past includes a successful start-up. Cronto, which developed a usable authentication device, was acquired by VASCO, a market leader in authentication, and its technology is now used by banks such as Commerzbank and Rabobank.

Developing the Cronto product was, he says, an iterative process that relied on real-world testing: “In research into privacy, if you build an unusable system, two things will go wrong,” he says. “One, people won’t use it, so there’s a smaller crowd to hide in.” This issue affects anonymising technologies such as Mixmaster and Mixminion. “In theory they have better security than Tor, but no one is using them.” And two, he says, “People make mistakes.” A non-expert user of PGP, for example, can’t always accurately identify which parts of a message are signed and which aren’t.

The start-up experience taught Murdoch how difficult it is to get an idea from research prototype to product, not least because what works in a small case study may not when deployed at scale. “Selling privacy remains difficult,” he says, noting that Cronto had an easier time than some of its forerunners since the business model called for sales to large institutions. The biggest challenge, he says, was not consumer acceptance but making a convincing case that the predicted threats would materialise and that a small company could deliver an acceptable solution.

Continue reading Steven Murdoch – Privacy and Financial Security

User-centred security awareness empowers employees to be the strongest defense

The release of our business whitepaper “Awareness is only the first step” was recently announced by Hewlett Packard Enterprise (HPE). The whitepaper is co-authored by HPE, UCL, and the UK government’s National Technical Authority for Information Assurance (CESG). The whitepaper emphasises how a user-centred approach to security awareness can empower employees to be the strongest link in defending their organisation. As Andrzej Kawalec, HPE’s Security Services CTO, notes in the press release:

“Users remain the first line of defense when faced with a dynamic and relentless threat environment.”

Security communication, education, and training (CET) in organisations is intended to align employee behaviour with the security goals of the organisation. Security managers conduct regular security awareness activities – familiar vehicles for awareness programmes, such as computer-based training (CBT), can cover topics such as password use, social media practices, and phishing. However, there is limited evidence for the effectiveness or efficiency of CBT, and a lack of reliable indicators means it is not clear whether recommended security behaviour is followed in practice. If the design and delivery of CET programmes do not consider the individual, organisations cannot be certain of achieving the intended outcomes. As Angela Sasse comments:

“Many companies think that setting up web-based training packages is a cost-effective way of influencing staff behavior and achieving compliance, but research has provided clear evidence that this is not effective – rather, many staff resent it and suffer from ‘compliance fatigue.’”

HPE awareness maturity curve

The whitepaper describes a path to guide the involvement of employees in their own security, as shown in the HPE awareness maturity curve above. To change security behaviors, a company needs to invest in the security knowledge and skills of its employees, and respond to employee needs differently at each stage.

Continue reading User-centred security awareness empowers employees to be the strongest defense

First UCL team competes in the International Capture The Flag competition

Team THOR, UCL’s Capture the Flag (CTF) team, took part in its first CTF competition – the UCSB iCTF – on 4th December 2015. The team comprised students from the computer science department: Tom Sigler, Chris Park, Jason Papapanagiotakis, Azeem Ilyas, Salman Khalifa, Luke Roberts, Haran Anand, Alexis Enston, Austin Chamberlain, Jaromir Latal, Enrico Mariconti, and Razvan Ragazan. Through Gianluca Stringhini’s hacking seminars and our own experience, we were eager to test our ability to identify, exploit, and patch application vulnerabilities.

The THOR team in action

The CTF was in an “attack and defence” style with a slight twist: each participating team had to write a vulnerable application. We were provided with a Linux virtual machine containing all of the applications, which we hosted on a locally running server. This server connected to the organisers’ network over a virtual private network (VPN). During the competition, the organisers regularly polled our server to check that each application was running and whether it still had its security vulnerability. We were scored on three criteria: how many applications were up and running (and whether their vulnerabilities had been patched), how many flags we had obtained by exploiting vulnerabilities, and how close our submitted application was to the median in terms of being vulnerable, but not too vulnerable.

The application had to be “balanced” in terms of security, i.e., if it was too easy or too difficult to exploit, points would be deducted. Fortunately, the organisers provided sample applications which gave us an excellent starting point. One of the sample applications was a “notes” service written in PHP – it enabled a note (which represented the flag) to be saved against a flag ID with a password. The note could be retrieved by supplying the flag ID and password, but a vulnerable CGI script enabled the note to be retrieved without a password! We customised this application by removing the CGI script (this vulnerability was very easy to identify and exploit) and changing the note insertion code so that a specially crafted token (a hex-encoded Epoch timestamp) was stored next to each flag ID, password, and note entry. We then introduced a vulnerability whereby note retrieval became a two-step process: first the flag ID and password were specified, then, if the password was valid, the token was retrieved and used in combination with the flag ID to retrieve the note. The first step could be bypassed by brute-forcing the token, avoiding the password verification phase entirely. We kept our fingers crossed that this would be exploitable by the other teams, but not too easily!
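
To see why the token was brute-forceable: a hex-encoded Epoch timestamp from a known rough time window has only a few thousand plausible values. A sketch of the attack follows – check_token() is a hypothetical stand-in for the second step of our retrieval protocol, not code from the competition:

```python
import time
from typing import Callable, Iterator, Optional

def candidate_tokens(window_seconds: int = 3600) -> Iterator[str]:
    """Yield hex-encoded Epoch timestamps, newest first."""
    now = int(time.time())
    for t in range(now, now - window_seconds, -1):
        yield format(t, "x")

def brute_force(flag_id: str,
                check_token: Callable[[str, str], Optional[str]]) -> Optional[str]:
    # check_token(flag_id, token) performs the second retrieval step,
    # bypassing the password check entirely; it returns the note on success.
    for token in candidate_tokens():
        note = check_token(flag_id, token)
        if note is not None:
            return note
    return None
```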

Attacking involved analysing the various applications written by the other teams for vulnerabilities. As soon as a vulnerability had been identified, we had to write some code to perform the exploit and retrieve the flag for that application. The flag served as evidence that we had successfully exploited an application. To maximise attack points, we had to run the exploit against each team’s server and submit the flags to the organiser every few minutes. Defence involved ensuring that the applications were up and running, keeping the server online and ideally patching any vulnerabilities identified in our copies of the applications.
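
In practice, that meant wrapping each exploit in a loop over the other teams’ servers and submitting whatever flags came back every few minutes, roughly like the hypothetical sketch below (server list, exploit functions, and submission call are all placeholders):

```python
import time
from typing import Callable, Optional

# Hypothetical placeholders: real values came from the organisers and from
# our analysis of each application.
TEAM_SERVERS = ["10.1.1.2", "10.1.2.2", "10.1.3.2"]
EXPLOITS: list[Callable[[str], Optional[str]]] = []

def submit_flag(flag: str) -> None:
    """Would submit the captured flag to the organisers' scoring service."""
    print("submitting", flag)

def attack_round() -> None:
    for server in TEAM_SERVERS:
        for exploit in EXPLOITS:
            try:
                flag = exploit(server)
            except Exception:
                continue  # service patched, down, or unreachable; move on
            if flag:
                submit_flag(flag)

if __name__ == "__main__":
    while True:
        attack_round()
        time.sleep(120)  # flags rotated every few minutes
```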

The competition started at 5pm – we were online with our server and applications shortly afterwards. Fueled by adrenaline, caffeine, and immense enthusiasm, we chose several applications to focus our initial efforts on and got cracking!

A good portion of the applications were web applications written in PHP. This was great news as we had focused on web application vulnerabilities during the hacking seminars. We also identified applications written in Python, Java, C and Bash. Some of them were imaginative and amusing – a dating service for monkeys written in PHP, a pizza order and delivery service written in PHP and a command-line dungeon game written in C.

We managed to exploit and patch an ATM application through a SQL injection vulnerability (the same kind of vulnerability involved in the recent TalkTalk and vTech data breaches). One of the Python applications used the “pickle” function, which was exploited to achieve arbitrary code execution. A second Python application was vulnerable to a path-traversal bug, which enabled flags to be retrieved from other users’ directories. We were also on the cusp of exploiting a buffer-overflow vulnerability in a C application, but ran out of time.
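
The pickle weakness is a classic one: unpickling attacker-controlled bytes can execute arbitrary code, because a pickle may instruct the interpreter to call any callable during deserialisation. A minimal, self-contained demonstration of the pattern (running a harmless command rather than a real payload):

```python
import os
import pickle

class Exploit:
    def __reduce__(self):
        # Tells pickle to call os.system("id") when the data is loaded.
        return (os.system, ("id",))

malicious = pickle.dumps(Exploit())
pickle.loads(malicious)  # executes `id` -- never unpickle untrusted input
```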

The competition ran for 8 hours, and at the end THOR ranked 14th out of 35. Given that this was THOR’s first CTF, that we were the only team representing the UK, and that we were up against experienced teams, we felt it was a great result! We had a huge amount of fun taking part and working as a team – so much so that we are planning to take part in more CTF competitions in the future! Many thanks again to Gianluca, the organisers, and all who participated. Go THOR!

Nicolas Courtois – Algebraic cryptanalysis is not the best way to break something, but sometimes it is the only option

Nicolas Courtois, a mathematician and senior lecturer in computer science at UCL, working with Daniel Hulme and Theodosis Mourouzis, has won the 2012 best paper award from the International Academy, Research, and Industry Association for their work on using SAT solvers to study various problems in algebra and circuit optimization. The research was funded by the European Commission under the FP7 project number 242497, “Resilient Infrastructure and Building Security (RIBS)” and by the UK Technology Strategy Board under project 9626-58525. The paper, Multiplicative Complexity and Solving Generalized Brent Equations with SAT Solvers, was presented at Computation Tools 2012, the third International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking, held in Nice, France in July.

SAT (short for “satisfiability”) solvers are algorithms used to analyse logical problems composed of multiple statements such as “A is true OR not-B is true OR C is true”, for the purpose of determining whether the whole system can be true – that is, whether all the statements it is composed of can be satisfied. SAT solvers are also used to determine how to assign the variables to make the set of statements true. In 2007, Bard and Courtois realised they could be used to test the security of cryptographic functions and measure their complexity, and today they are important tools in cryptanalysis; they had already been used for a long time in other applications, such as verifying hardware and software. In this particular paper, Courtois, Hulme, and Mourouzis focused on optimising S-boxes for industrial block ciphers; the paper reports the results of applying their methodology to the PRESENT and GOST block ciphers. Reducing the complexity and hardware cost of these ciphers is particularly important for building so-called secure implementations of cryptography. These are particularly costly because they need to protect against additional threats such as side-channel attacks, in which the attacker exploits additional information leaked from the physical system – for example, by using an oscilloscope to observe a smart card’s behaviour.
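
To make the SAT setting concrete: the statement quoted above is a single clause over three Boolean variables, and a solver takes a set of such clauses and either returns a satisfying assignment or proves that none exists. A small sketch using the python-sat package (my choice of tool for illustration, not the one used in the paper):

```python
# pip install python-sat
from pysat.solvers import Glucose3

# Encode variables as integers: A=1, B=2, C=3; negation is a minus sign.
with Glucose3() as solver:
    solver.add_clause([1, -2, 3])  # A OR not-B OR C
    solver.add_clause([-1])        # suppose we also require not-A
    solver.add_clause([2])         # ...and B
    if solver.solve():
        print(solver.get_model())  # e.g. [-1, 2, 3]: C must be true
    else:
        print("UNSAT: no assignment satisfies all clauses")
```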

“It’s more a discovery than an invention,” says Courtois. “One of the amazing things SAT solvers can do is give you proof that something is not true.” The semiconductor industry provides one application of the work in this paper: these techniques promise to provide a way to test whether a circuit has been built with the greatest possible efficiency by proving that the chip design uses the smallest possible number of logic gates.

“You’ll get optimal designs and be able to prove they cannot be done better,” he says.

Classical cryptanalysis proceeds by finding approximations to the way a cipher works. Many successful academic attacks have been mounted using such techniques, but they rely on having a relatively large amount of data available for study. That works for large archives of stored data – such as, for example, the communications stored and kept by the Allies after World War II for later cryptanalysis. But in many real-world applications, it is more common to have only very small amounts of data.

“The more realistic scenario is that you’ll just have one or a few messages,” says Courtois. Bluetooth, for example, encrypts only 1,500 bits with a single key. “Most attacks are useless because they won’t work with this quantity of data.” Algebraic cryptanalysis, which he explained in New Frontier in Symmetric Cryptanalysis, an invited talk at Indocrypt 2008, by contrast, is one of the few techniques that can be hoped to work in such difficult situations.

Continue reading Nicolas Courtois – Algebraic cryptanalysis is not the best way to break something, but sometimes it is the only option

New EU Innovative Training Network project “Privacy & Us”

Last week, “Privacy & Us” — an Innovative Training Network (ITN) project funded by the EU’s Marie Skłodowska-Curie actions — held its kick-off meeting in Munich. Hosted in the nice and modern Wissenschaftszentrum campus by Uniscon, one of the project partners, principal investigators from seven different countries set out the plan for the next 48 months.

Privacy & Us really stands for “Privacy and Usability”; the project aims to conduct privacy research and, over the next three years, train thirteen Early Stage Researchers (ESRs) — i.e., PhD students — to reason about, design, and develop innovative solutions to privacy research challenges, not only from a technical point of view but also from the “human side”.

The project involves nine “beneficiaries”: Karlstads Universitet (Sweden), Goethe-Universität Frankfurt (Germany), Tel Aviv University (Israel), Unabhängiges Landeszentrum für Datenschutz (Germany), Uniscon (Germany), University College London (UK), USECON (Austria), VASCO Innovation Center (UK), and Wirtschaftsuniversität Wien (Austria), as well as seven partner organisations: the Austrian Data Protection Authority (Austria), Preslmayr Rechtsanwälte OG (Austria), Friedrich-Alexander University Erlangen (Germany), University of Bonn (Germany), the Bavarian Data Protection Authority (Germany), EveryWare Technologies (Italy), and Sentor MSS AB (Sweden).

The people behind the Privacy & Us project at the kick-off meeting in Munich, December 2015

The Innovative Training Networks are interdisciplinary and multidisciplinary in nature and promote, by design, a collaborative approach to research training. Funding is extremely competitive, with an acceptance rate as low as 6%, and quite generous for the ESRs, who often enjoy higher-than-usual salaries (exact numbers depend on the hosting country), plus a 600 EUR/month mobility allowance and a 500 EUR/month family allowance.

The students will start in August 2016 and will be trained to face both current and future challenges in the area of privacy and usability, spending a minimum of six months on secondment to another partner organisation, and participating in several training and development activities.

Three studentships will be hosted at UCL, under the supervision of Dr Emiliano De Cristofaro, Prof. Angela Sasse, Prof. Ann Blandford, and Dr Steven Murdoch. Specifically, one project will investigate how to securely and efficiently store genomic data, design and implement privacy-preserving genomic testing, and support user-centred design of secure personal genomic applications. The second project will aim to better understand and support individuals’ decision-making around healthcare data disclosure, weighing up the personal and societal costs and benefits of disclosure. The third (with the VASCO Innovation Centre) will explore techniques for privacy-preserving authentication, extending these to develop and evaluate innovative solutions for secure and usable authentication that respect user privacy.

Continue reading New EU Innovative Training Network project “Privacy & Us”

Scaling Tor hidden services

Tor hidden services offer several security advantages over normal websites:

  • both the client requesting the webpage and the server returning it can be anonymous;
  • websites’ domain names (.onion addresses) are linked to their public key so are hard to impersonate; and
  • there is mandatory encryption from the client to the server.

However, Tor hidden services as originally implemented did not take full advantage of parallel processing, whether from a single multi-core computer or from load-balancing over multiple computers. Therefore, once a single hidden service has hit the limit of vertical scaling (getting faster CPUs), there is no option of horizontal scaling (adding more CPUs and more computers). There are also bottlenecks in the Tor network, such as the 3–10 introduction points that help negotiate the connection between the hidden service and the rendezvous point that actually carries the traffic.

For my MSc Information Security project at UCL, supervised by Steven Murdoch with the assistance of Alec Muffett and other Security Infrastructure engineers at Facebook in London, I explored possible techniques for improving the horizontal scalability of Tor hidden services. More precisely, I was looking at possible load balancing techniques to offer better performance and resiliency against hardware/network failures. The focus of the research was aimed at popular non-anonymous hidden services, where the anonymity of the service provider was not required; an example of this could be Facebook’s .onion address.

One approach I explored was to simply run multiple hidden service instances using the same private key (and hence the same .onion address). Each hidden service periodically uploads its own descriptor, which describes the available introduction points, to six hidden service directories on a distributed hash table. The hidden service instance chosen by the client depends on which hidden service instance most recently uploaded its descriptor. In theory this approach allows an arbitrary number of hidden service instances, where each periodically uploads its own descriptors, overwriting those of others.

This approach can work for popular hidden services because, with the large number of clients, some will be using the most recently uploaded descriptor while others will have cached older versions and continue to use them. However, my experiments showed that the distribution of clients over hidden service instances set up in this way is highly non-uniform.
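
The last-writer-wins selection can be illustrated with a toy simulation – my own sketch, much simpler than the Shadow experiments described below, and ignoring client-side caching and the spread of descriptors across directories, both of which contribute to the skew observed in practice:

```python
import bisect
import random
from collections import Counter

INSTANCES = 4
DURATION = 3600          # one simulated hour
MEAN_UPLOAD_GAP = 60     # assumed: each instance re-uploads about once a minute

def upload_timeline():
    """Sorted (time, instance) descriptor uploads, one jittered schedule each."""
    events = []
    for instance in range(INSTANCES):
        t = random.expovariate(1 / MEAN_UPLOAD_GAP)
        while t < DURATION:
            events.append((t, instance))
            t += random.expovariate(1 / MEAN_UPLOAD_GAP)
    return sorted(events)

def simulate(n_clients: int = 10_000) -> Counter:
    events = upload_timeline()
    times = [t for t, _ in events]
    load = Counter()
    for _ in range(n_clients):
        fetch = random.uniform(times[0], DURATION)
        # Last writer wins: the client is served by whichever instance
        # uploaded its descriptor most recently before the fetch.
        owner = events[bisect.bisect_right(times, fetch) - 1][1]
        load[owner] += 1
    return load

print(simulate())
```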

I therefore ran experiments on a private Tor network, using the Shadow network simulator to run multiple hidden service instances and measure the load distribution over time. The experiments were devised such that the instances uploaded their descriptors simultaneously, which resulted in different hidden service directories receiving different descriptors. As a result, clients connecting to a hidden service would be balanced more uniformly over the available instances.

Continue reading Scaling Tor hidden services