In many organisations today, IT security is a battleground: to manage the risks the organisation faces, security specialists devise policies and deploy security mechanisms that they expect staff and customers to comply with. But most of the time, staff and customers don’t comply, and attempts to change that by “raising awareness” and “educating” them generally fail. The talk will use the examples of security warnings, access control, and sandboxing to explain the different perspectives and values that security specialists and ‘the rest of us’ apply to security. In conclusion, I will argue that a value-centred design approach is the only way to develop security solutions people want to use.
We had the pleasure of hosting Zachary Peterson at UCL on a Cyber Security Fulbright Scholarship. The title of this post comes from the presentation he gave at our annual ACE-CSR event in November 2016.
Zachary Peterson is an associate professor of computer science at Cal Poly, San Luis Obispo. The key problem he is trying to solve is that the educational system is producing many fewer computer security professionals than are needed; an article he’d seen just two days before the ACE meeting noted a 73% rise in job vacancies in the last year despite a salary premium of 9% over other IT jobs. This information is backed up by the 2014 Taulbee survey, which found that the number of computer security PhDs has declined to 4% of the US total. Lack of diversity, which sees security dominated by white and Asian males, is a key contributing factor. Peterson believes that diversity is not only important as a matter of fairness, but essential because white males are increasingly a demographic minority in the US and because monocultures create perceptual blindness. New perspectives are especially needed in computer security as present approaches are not solving the problem on their own.
Peterson believes that the numbers are so bad because security is under-represented in both the computer science curriculum and in curriculum standards. The ACM 2013 curriculum guidelines recommend only three contact hours (also known as credit hours) in computer security in an entire undergraduate computer science degree. These are typically relegated to an upper-level elective class, and subject to a long chain of prerequisites, so they are only ever seen by a self-selected group who have survived years of attrition – which disproportionately affects women. The result is to create a limited number of specialists, unnecessarily constrain the student body, and limit the time students have to practice before joining the workforce. In addition, the self-selected group who do study security late in their academic careers have already developed set habits and a fixed mindset before ever encountering a security engineering task. Making security a core competency, and teaching it as early as secondary school, is essential but has challenges: security can be hard, and pushing it to the forefront may worsen problems already seen in computer science more broadly, such as the discipline’s image as solitary, anti-social, and lacking in creativity.
Peterson believes games can help improve this situation. CTFTime, which tracks capture-the-flag events, reports a recent explosion in cyber security games: more than 56 events per year since 2013. These games, if done correctly, can teach core security skills in an entertaining – and social – way, with an element of competition. Strategic thinking, understanding an adversary’s motivation, rule interpretation, and rule-breaking are essential for both game-playing and security engineering.
The workshop was organized by CFEM and CTIC, and took place in Aarhus from May 30 until June 3, 2016. The speakers presented both theoretical advancements and practical implementations (e.g., voting, auction systems) of secure multiparty computation (MPC), as well as open problems and future directions.
The first day started with Ivan Damgård presenting TinyTable, a new simple 2-party secure computation protocol. Then Martin Hirt introduced the open problem of general adversary characterization and efficient protocol generation. The last two talks of the day discussed Efficient Constant-Round Multiparty Computation and Privacy-Preserving Outsourcing by Distributed Verifiable Computation.
The first session of the second day included two presentations on theoretical results which introduced a series of three-round secure two-party protocols and their security guarantees, and fast circuit garbling under weak assumptions. On the practical side, Rafael Pass presented a formal analysis of the blockchain, and abhi shelat outlined how MPC can enable secure matchings. After the lunch break, probabilistic termination of MPC protocols and low-effort verifiable secret sharing (VSS) protocols were discussed.
Yuval Ishai and Elette Boyle kicked off the third day by presenting constructions of function secret sharing schemes, and recent developments in the area. After the lunch break, a new hardware design enabling Verifiable ASICs was introduced and the latest progress on “oblivious memories” was discussed.
The fourth day featured presentations on RAMs, Garbled Circuits and a discussion on the computational overhead of MPC under specific adversarial models. Additionally, there were a number of presentations on practical problems, potential solutions and deployed systems. For instance, Aaron Johnson presented a system for private measurements on Tor, and Cybernetica representatives demonstrated Sharemind and their APIs. The rump session of the workshop took place in the evening, where various speakers were given at most 7 minutes to present new problems or their latest research discoveries.
On the final day, Christina Brzuska outlined the connections between different types of obfuscation and one-way functions, and explained why some obfuscators were impossible to construct. Michael Zohner spoke about OT extensions, and how they could be used to improve 2-party computation in conjunction with look-up tables. Claudio Orlandi closed the workshop with his talk on Amortised Garbled Circuits, which explained garbling tricks all the way from Yao’s original work up to the state of the art, and provided a fascinating end to the week.
I had the pleasure of visiting the Simons Institute for the Theory of Computing at UC Berkeley over the summer. One of the main themes of the programme was obfuscation.
Recently there has been a lot of exciting research on developing cryptographic techniques for program obfuscation. Obfuscation is of course not a new idea; you may already be familiar with the International Obfuscated C Code Contest. But the hope underlying this research effort is to replace ad hoc obfuscation, which may or may not be possible to reverse engineer, with general techniques that can be applied to obfuscate any program and that satisfy a rigorous definition of what it means for a program to be obfuscated.
Even defining what it means for a program to be obfuscated is not trivial. Ideally, we would like an obfuscator to be a compiler that turns any computer program into a virtual black-box. By a virtual black-box we mean that the obfuscated program should preserve functionality while not leaking any other information about the original code, i.e., it should have the same input-output behaviour as the original program and you should not be able to learn anything beyond what you could learn by executing the original program. Unfortunately, this turns out to be too ambitious a goal: there are special programs that are provably impossible to obfuscate in the virtual black-box sense.
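A classic special case where something like virtual black-box obfuscation is achievable is a point function, such as a password check. The sketch below (my own toy illustration, not from the talk or any paper) “obfuscates” a password comparison by storing only a hash of the secret: the result has the same input-output behaviour, but heuristically reveals nothing beyond what oracle access to the original check would. This is a special-purpose trick, not a general obfuscator.

```python
import hashlib

def make_point_function_checker(secret: str):
    """'Obfuscate' the point function x -> (x == secret) by keeping only
    a hash of the secret. The returned checker computes the same function,
    but the secret itself no longer appears anywhere in its code or data --
    a heuristic, special-purpose analogue of virtual black-box obfuscation.
    """
    digest = hashlib.sha256(secret.encode()).hexdigest()

    def checker(x: str) -> bool:
        # Same input-output behaviour as (x == secret).
        return hashlib.sha256(x.encode()).hexdigest() == digest

    return checker

check = make_point_function_checker("hunter2")
assert check("hunter2") is True
assert check("password") is False
```

The general impossibility result does not contradict this: it only says that *some* (contrived) programs cannot be virtual black-box obfuscated, not that none can.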
Instead cryptographers have been working towards developing something called indistinguishability obfuscation. Here the goal is that given two functionally equivalent programs P1 and P2 of the same size and an obfuscation of one of them O(Pi), it should not be possible to tell which one has been obfuscated. Interestingly, even though this is a weaker notion than virtual black-box obfuscation it has already found many applications in the construction of cryptographic schemes. Furthermore, it has been proved that indistinguishability obfuscation is the best possible obfuscation in the sense that any information leaked by an obfuscated program is also leaked by any other program computing the same functionality.
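The definition can be made concrete with a toy example of my own (an assumption for illustration, not a real construction): for programs over a tiny input domain, outputting the canonical truth table *is* a perfect indistinguishability obfuscator, because two functionally equivalent programs map to the identical object, so no distinguisher can tell which one was obfuscated. Real iO constructions are of course entirely different; this canonicalisation trick is exponentially inefficient in the input size.

```python
from itertools import product

# Two syntactically different but functionally equivalent 2-bit programs.
def P1(a: int, b: int) -> int:
    return a ^ b                      # XOR, written directly

def P2(a: int, b: int) -> int:
    return (a | b) & ~(a & b) & 1     # XOR, built from OR/AND/NOT

def toy_iO(P):
    """Toy indistinguishability obfuscator for programs over {0,1}^2:
    return the canonical truth table. Equivalent programs produce the
    *identical* output, so a distinguisher's advantage is exactly zero."""
    return tuple(P(a, b) for a, b in product((0, 1), repeat=2))

# Functionally equivalent programs yield identical obfuscations:
assert toy_iO(P1) == toy_iO(P2)
```

The inefficiency is the whole point of the research programme: the challenge is achieving this guarantee with only polynomial blow-up for arbitrary programs.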
So, when will you see obfuscation algorithms that you can use to obfuscate your code? Well, current obfuscation algorithms have horrible efficiency and are far from practical applicability, so there is still a lot of research to be done on improving them. Moreover, current obfuscation proposals are based on something called graded encoding schemes. At the moment, there is a tug of war going on between cryptographers proposing graded encoding schemes and cryptanalysts breaking them. Breaking graded encoding schemes does not necessarily break the obfuscation algorithms building on them, but it is fair to say that right now the situation is a mess. From a researcher’s perspective, this makes the field very exciting, since there is still a lot to discover!
If you want to learn more about obfuscation, I recommend watching some of the excellent talks from the programme.
As the next programme director of UCL’s MSc in Information Security, I have quickly realized that showcasing a group’s educational and teaching activities is no trivial task.
As academics, we learn over the years to make our research “accessible” to our funders, media outlets, blogs, and the like. We are asked by the REF to explain why our research outputs should be considered world-leading and outstanding in their impact. As security, privacy, and cryptography researchers, we repeatedly test our ability to talk to lawyers, bankers, entrepreneurs, and policy makers.
But how do you do good outreach when it comes to postgraduate education? Well, that’s a long-standing controversy. The Economist recently dedicated a long report to tertiary education and also discussed misaligned incentives in strategic decisions involving admissions, marketing, and rankings. Personally, I am particularly interested in exploring ways one can (attempt to) explain the value and relevance of a specialist master’s programme in information security. What outlets can we rely on, and how do we effectively engage, at the same time, current undergraduate students, young engineers, experienced professionals, and aspiring researchers? How can we shed light on our vision & mission to educate and train future information security experts?
So, together with my colleagues of UCL’s Information Security Group, I started toying with the idea of organizing events — both in the digital and the analog “world” — that could provide a better understanding of both our research and teaching activities. And I realized that, while difficult at first and certainly time-consuming, this is a noble, crucial, and exciting endeavor that deserves a broad discussion.
Information Security: Trends and Challenges
Thanks to the great work of Steve Marchant, Sean Taylor, and Samantha Webb (now known as the “S3 team” :-)), on March 31st, we held what I hope is the first of many MSc ISec Open Day events. We asked two of our friends in industry — Alec Muffet (Facebook Security Evangelist) and Dr Richard Gold (Lead Security Analyst at Digital Shadows and former Cisco cloud web security expert) — and two of our colleagues — Prof. Angela Sasse and Dr David Clark — to give short, provocative talks about what they believe the trends and challenges in Information Security are. In fact, we even gave the event a catchy name: Information Security: Trends and Challenges.
I attended two privacy events over the past couple of weeks. The first was at the Royal Society, chaired by Prof Jon Crowcroft.
All panelists talked about why privacy is necessary in a free, democratic society, but also noted that individuals are ill equipped to achieve this given the increasing number of technologies collecting data about us, and the commercial and government interest in using that data.
During the question & answer session, one audience member asked if we needed a Digital Charter to protect rights to privacy. I agreed, but pointed out that citizens and consumers would need to express this desire more clearly, and be prepared to take collective action to stop the gradual encroachment.
The second panel – In the Digital Era – Do We Still Have Privacy? – was organised in London by Lancaster University this week as part of its 50th Anniversary celebrations, and was chaired by Sir Edmund Burton.
One of the panelists – Dr Mike Short from Telefonica O2 – stated that it does not make commercial sense for a company to use data in a way that goes against their customers’ privacy preferences.
But there are service providers that force users to allow data collection: you cannot have the service unless you agree to your data being collected (which goes against the OECD principles for informed consent), or the terms & conditions are so long that users don’t want to read them – and even if they were prepared to read them, they would not understand them without a legal interpreter.
We have found in our research at UCL (e.g. Would You Sell Your Mother’s Data, Fairly Truthful) that consumers have a keen sense of ‘fairness’ about how their data is used – and they definitely do not think it ‘fair’ for their data to be used against their express preferences and life choices.
In the Q & A after the panel, the question of what can be done to ensure fair treatment for consumers, and the idea of a Digital Charter, were raised again. The evening’s venue was a CD’s throw away from the British Library, where the Magna Carta is exhibited to celebrate its 800th anniversary. The panelists reminded us that last year, Sir Tim Berners-Lee called for a ‘Digital Magna Carta’ – I think this is the perfect time for citizens and consumers to back him up, and unite behind his idea.
During the break I attended the 31st Chaos Communication Congress (31C3) in Hamburg, Germany. There I had the pleasure of giving a presentation on “DP5: PIR for Privacy-preserving Presence” along with my colleague from Waterloo, Ian Goldberg. The Audio/Video Chaos Angels did a nice job of capturing the event, and making it available for all to view (I come in at 26:23).
Other resources around DP5 include: