Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected in the WannaCry ransomware attack (source: Wikipedia, User:Roke)

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day – most containing links and many accompanied by attachments – cannot take ten minutes to ponder each one before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence: well-targeted, automated blocking of malicious emails lessens the burden on individual users and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. So users are simultaneously exhorted to be “vigilant” and not to open unexpected emails, yet required to open an email attachment in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre (NCSC), offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other, instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing, as a society, to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users, and for enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring that backups remain uninfected is, unfortunately, trickier than it should be: ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful here, providing advice on how to recover from a ransomware attack and on how far it can help. Users should be encouraged to read such advice in advance and factor it into their backup plans.

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail, leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests, allowing them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, for which Microsoft stopped providing security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate why equipment running an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that is not possible, the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.

 

This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

The politics of the NHS WannaCrypt ransomware outbreak

You know you live in 2017 when the top headline in the national newspapers relates to a ransomware attack on the National Health Service, the UK Prime Minister comments on the matter, and the security researchers dealing with the outbreak are presented as heroic figures. As ever, The Register has the most detailed and sophisticated technical article on the matter – and, strangely, also the most informative in terms of public policy, as if technical sophistication were now a prerequisite for sophisticated political comment on these matters. Other news outlets present a caricature: the bad malware authors, the good security researchers and vendors working around the clock, the valiant government defenders, and a united humanity trying to beat the virus. I want to break that narrative open in this article and discuss the actual political and social lessons we should be learning, in part to avoid similar disasters in the future.

First off, I am always surprised when such massive systemic outbreaks of malware are blamed squarely on the author(s) of the malware itself, and the blame game ends there. Without doubt, the malware author bears a great share of the responsibility. I personally think it is immoral to deploy ransomware in the wild, deny people access to their data, and seek to benefit from this. It is also a crime in the UK and elsewhere.

However, it is strange that a single author, or a small group of authors, without any major resources, can have such a deep and widespread effect on major technological infrastructures. The absurdity becomes clear if we transpose the situation into the world of traditional engineering. Imagine all skyscrapers in major cities had to be evacuated because a couple of teenagers with rocks were trying to blackmail business owners into paying up to protect their precious glass windows. The fragility of software and IT systems seems to have no parallel in any other large-scale engineering infrastructure — and this is not inherent, but the result of very specific micro-political, geo-political and economic decisions.

Let’s take the WannaCrypt outbreak and look at the political and other social decisions that led to the disaster — besides the agency of the malware authors:

  • The disaster was possible in part, and foremost, because IT systems within the UK’s critical NHS infrastructure are outdated — relying, for example, on Windows XP, which is no longer maintained by Microsoft. Well, actually this is not strictly true: Microsoft does make security updates for Windows XP, but does not provide them for free — instead, it expects organizations that are locked into the OS to pay up to get patches and stay safe. So two key questions need to be asked …
  • Why is the NHS not upgrading to a newer version of Windows, or any other modern operating system? The answer is simple: line-of-business applications (LOB: from health record management, specialist analysis and imaging software, to payroll) may not be compatible with new operating systems. On top of that, a number of modern medical devices, such as large X-ray scanners or heart monitors, come with embedded computers running Windows XP — and only Windows XP. There is no way of upgrading them. The MEDJACK cyber-attacks leveraged this to rampage through hospitals in 2015.
  • Is having LOB software tying you to an outdated OS, or medical devices costing millions that are not upgradeable, a fact of nature? No. It is down to a combination of terrible and naive procurement processes in health organizations, which do not take into account the need for, and costs of, IT and security maintenance — and do not entrench it in the requirements and contracts for services, software and devices. It is also the result of the health software and devices industries being immature and unsophisticated about the need to secure IT. They reap the benefits of IT to make money, but without expending much of it to provide quality and security. The tragic state of security of medical devices has built the illustrious career of my friend Prof. Kevin Fu, who has found systemic attacks against implanted heart devices that could kill you, noob security bugs in medical device software, and has written extensively on the poor strategy to tackle these problems. So today’s attacks were a disaster waiting to happen — and expect more unless we learn the right lessons.
  • So, given the terrible state of IT that prevents upgrading the OS, why is the NHS not paying Microsoft for security patches? Because the government, and Jeremy Hunt in particular, decided back in 2014 not to pay the money necessary to keep receiving security updates for Windows XP, despite being aware of the NHS’s absolute reliance on the outdated software. So in effect, a deliberate political decision was taken, at the highest level of government, to leave the NHS open to cyber attack. This is unlikely to be the last Windows XP security bug, so more attacks are presumably to come.
  • Then there is the question of how the malware authors managed to get access to security bugs for Windows XP. How did they get the tools necessary to attack such a mature, and rather common, system about 15 years after Windows XP was released, and only after it went out of maintenance? It turns out that the vulnerabilities they used were in fact hoarded by the NSA as a cyber weapon — which was lost or stolen by hackers or leakers, and released into the wild! (The tool was codenamed EternalBlue.) For many years, the computer security research community has been warning that stockpiling vulnerabilities in very common software for cyber-offense purposes is dangerous. When those cyber weapons are lost, leaked, or even just used, there is proliferation of the technology necessary to attack, which criminals and foreign states can turn against critical infrastructure. This blog commented on the matter as recently as 8 March 2017 in a post entitled “What the CIA hack and leak teaches us about the bankruptcy of current “Cyber” doctrines”. This now feels like an unfortunately fulfilled prophecy, but the NHS attack was just the expected outcome of the US/UK, and now commonplace, doctrine around cyber — one that contributes to, and leverages, insecurity rather than security. Alternative public policy options of course exist.

So, to summarize: besides the author of the malware, a number of other social and systemic factors contributed to making such cyber attacks possible: poor security standards in the health informatics industries; poor procurement processes in health organizations; lack of liability on the part of any of the software vendors (including Microsoft) for providing insecure software or devices; cost-cutting by the government on NHS cyber security, with no constructive alternatives to mitigate the risks; and finally the UK/US cyber-offense doctrine that inevitably leads to the proliferation of cyber-weapons and their use against civilian critical infrastructures.

It is those systemic factors that need to change if we are to avoid future failures. Bad people wishing to make money from ransomware, or other badness, will always exist. There is a discipline devoted to preventing this, and it is called security engineering. It is time industry and government started taking its advice seriously.

 

This was originally posted on Conspicuous Chatter, the blog of Prof. George Danezis.

Battery Status Not Included: Assessing Privacy in W3C Web Standards

Designing standards with privacy in mind should be a standard in itself. Historically this was not always the case, and the idea of designing systems with privacy in mind is relatively new – it dates from the beginning of this decade. One of the milestones was the acceptance of this view at the IETF level, in 2013.

This note and the referenced research work focus on designing standards with privacy in mind. The World Wide Web Consortium (W3C) is one of the most important standardization bodies. W3C standardizes something everyone – billions of people – uses every day for entertainment or work: the web and how web browsers work. It employs a focused, lengthy, but very open and consensus-driven process in order to make sure that the community’s voice is always heard during the drafting of new specifications. The actual inner workings of W3C are surprisingly complex. One interesting observation is that the W3C Process Document does not actually mention privacy reviews at all, although in practice such reviews always take place.

However, as the web and its complexity constantly grow, along with the features of web browsers, and as browsers and the web are expected to be designed at an ever more rapid pace, considering privacy becomes a challenge. But this is what is needed in order to obtain good specifications and well-thought-out browser features, with good threat models and well-designed privacy strategies.

My recent work, together with the Princeton team of Arvind Narayanan and Steven Englehardt, provides insight into the standardization process at W3C, specifically the privacy aspects of standardization. We point out a number of issues and areas for improvement. We also provide recommendations, as well as broad evidence of wide misuse and abuse (in tracking and fingerprinting) of a browser mechanism called the Battery Status API.

The Battery Status API is a browser feature that was meant to allow websites to access information concerning the battery state of a user’s device. This seemingly innocuous mechanism initially had no identified privacy concerns. However, following my previous research work in collaboration with Gunes Acar, this view has changed (Leaking Battery and a later note).

The work I authored in 2015 identifies a number of privacy risks of the API: specifically, the potential exploitation of the API for tracking, as well as the possibility of recovering the raw value of the battery capacity – something that websites were never meant to access. Ultimately, this work led to a fix in the Firefox browser and an update to the W3C specification. But the matter was not over yet.

Continue reading Battery Status Not Included: Assessing Privacy in W3C Web Standards

What the CIA hack and leak teaches us about the bankruptcy of current “Cyber” doctrines

Wikileaks has just published a trove of documents resulting from a hack of the CIA Engineering Development Group, the part of the spying agency that is in charge of developing hacking tools. The documents seem genuine and catalog, among other things, a number of exploits against widely deployed commodity devices and systems, including Android, iPhone, OS X and Windows – and also smart TVs. This hack, with appropriate background, teaches us a lesson or two about the direction of public policy related to “cyber” in the US and the UK.

Routine proliferation of weaponry and tactics

The CIA hack is in many ways extraordinary, in that it allowed the attackers to gain access to the source code of the hacking tools of the agency – an extraordinary act of proliferation of attack technologies. In other ways, it is mundane, in that it is neither the first, nor probably the last, hack or leak of catastrophic proportions to hit a US/UK government department in charge of offensive cyber operations.

This list of leaks of government attack technologies illustrates that, when it comes to cyber-weaponry, the risk of proliferation is not merely theoretical but very real. In fact, it seems to be happening all the time.

I find it particularly amusing – and those in charge of those agencies should probably find it embarrassing – that the NSA and GCHQ go around presenting themselves as national technical authorities in assurance: they provide advice to others on how not to get hacked; they keep asserting that they can be trusted to operate extremely dangerous spying infrastructures and to handle, in secret, extremely dangerous zero-day exploits. Yet they seem to be routinely hacked and to have their secret documents leaked. Instead of chasing whistleblowers and journalists, policy makers should probably take note that there is no high-enough level of assurance to secure cyber-weaponry, and it is certainly not to be found within those agencies.

In fact, the risk of proliferation is at the very heart of cyber attack, and integral to it, even without hacking or leaking from inside government. Many of us quietly laughed at the bureaucratic nightmare described in the recent CIA leak: the difficulty of classifying cyber attack techniques while at the same time deploying them on target systems. As the press release summarizes:

To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet. If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet. Consequently the CIA has secretly made most of its cyber spying/war code unclassified.

This illustrates very clearly a key dynamic in hacking: once a hacker uses an exploit against an adversary’s system, there is a very real risk that the exploit is captured by the monitoring and intrusion detection systems of the target, and then weaponized to hack other computers at low cost. This is very well established and researched, and such “honeypot” infrastructures have been used in the academic and commercial communities for some time to detect and study potentially new attacks. Nor is this the preserve of sophisticated defenders – the explanation of how honeypots work is on Wikipedia! The Flame malware, and Stuxnet before it, were in fact found in the wild.
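The same dynamic is visible in even the simplest defensive tooling. As a rough illustration only (a toy Node.js sketch, not any particular honeypot product; the port and log file are arbitrary), a few lines suffice to record whatever an attacker sends to an exposed port – exactly the raw material from which new exploits can be studied, or re-used:

// Toy honeypot sketch: listen on a port nothing legitimate uses and record
// whatever is sent, so that attack payloads can be captured and studied later.
const net = require('net');
const fs = require('fs');

net.createServer((socket) => {
  const source = `${socket.remoteAddress}:${socket.remotePort}`;
  socket.on('data', (chunk) => {
    // Append the raw payload (hex-encoded) for later analysis.
    fs.appendFileSync('honeypot.log',
      `${new Date().toISOString()} ${source} ${chunk.toString('hex')}\n`);
  });
  socket.on('error', () => socket.destroy());
}).listen(4444, () => console.log('honeypot listening on port 4444'));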

In that respect cyber-war is not like war at all. The weapons you use will be turned against you immediately, and your effective use of weapons relies on your very own infrastructures being utterly vulnerable to them.

What “Cyber” doctrine?

The constant leaks and hacks, leading to the proliferation of exploits and hacking tools from the heart of government, as well as through operations, should deeply inform policy makers when making choices about “cyber” doctrines. First, it is probably time to ditch the awkward term “Cyber”.

Continue reading What the CIA hack and leak teaches us about the bankruptcy of current “Cyber” doctrines

Security intrusions as mechanisms

The practice of security often revolves around figuring out what (malicious act) happened to a system. This historical inquiry is the focus of forensics, specifically when the inquiry concerns a violation of some policy (such as a law). The results of a forensic investigation might be used to fix the impacted system, attribute the attack to adversaries, or build more resilient systems going forward. However, to accomplish any of these purposes, the investigator first must discover the mechanism of the intrusion.

As discussed at an ACE seminar last October, one common framework for this discovery task is the intrusion kill chain. Mechanisms, mechanistic explanation, and mechanism discovery have highly-developed meanings in the biological and social sciences, but the word is not often used in information security. In a recent paper, we argue that incident response and forensics investigators would be well-served to make use of the existing literature on mechanisms, as thinking about intrusion kill chains as mechanisms is a productive and useful way to frame the work.

To some extent, thinking mechanistically is a description of what (certain) scientists do. But the mechanisms literature within the philosophy of science is not merely descriptive. The normative benefits extolled include that thinking mechanistically is an effective heuristic for searching out useful explanations; that mechanisms provide the most coherent unity to complex fields of study; and that mechanistic explanation is necessary to guide the selection among potential studies given limited experimental resources, experiment design decisions, and the interpretation of statistical results. I have previously argued that the capricious use of biological metaphors is bad for information security. We are keenly aware that these benefits of mechanistic explanation need to apply to security, as and for security, not merely because they work in other sciences.

Our paper demonstrates how we can cast the intrusion kill chain, the diamond model, and other models of security intrusions as mechanistic models. This casting begins to demonstrate the mosaic unity of information security. Campaigns are made up of attacks. Attacks, as modeled by the kill chain, have multiple steps. In a specific attack, the delivery step might be accomplished by a drive-by-download, so we demonstrate how drive-by-downloads are a mechanism – one among many possible delivery mechanisms. This description is a schema, to be filled in during a particular drive-by-download incident with a specific URL, specific JavaScript, and so on. The mechanistic schema of the delivery mechanism informs the investigator because it indicates what types of network addresses to look for, and how to fit them into the explanation quickly. This process is what Lindley Darden calls schema instantiation in the mechanism discovery literature.
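To make the idea concrete, here is a toy sketch (not the paper’s notation; all names and URLs are hypothetical) of a delivery mechanism as a schema with placeholders, and of filling those placeholders in from the evidence of one incident:

// Toy sketch: a delivery-step mechanism as a schema whose placeholders are
// filled in during a particular investigation. Names and URLs are illustrative.
const deliverySchema = {
  mechanism: 'drive-by-download',
  entities: { landingPage: null, exploitKit: null, payloadUrl: null },
  activities: [
    'victim browser loads a compromised landing page',
    'injected JavaScript fingerprints the browser',
    'exploit kit serves an exploit and drops the payload',
  ],
};

// Schema instantiation: bind placeholders to evidence from one incident.
function instantiate(schema, evidence) {
  return { ...schema, entities: { ...schema.entities, ...evidence } };
}

const incident = instantiate(deliverySchema, {
  landingPage: 'http://compromised.example/blog',
  payloadUrl: 'http://malicious.example/payload.bin',
});

// The unfilled placeholder tells the investigator what evidence is still missing.
console.log(incident.entities.exploitKit); // null, so keep looking

The point of the sketch is only that the schema tells the investigator what kinds of evidence to look for, and where the gaps in the explanation remain.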

Our argument is not that good forensics investigators fail to use such mechanism discovery strategies; rather, it is precisely that good investigators do use them. But we need to describe what it is that good investigators in fact do. We do not currently, and that lack makes teaching new investigators particularly difficult. Thinking about intrusions as mechanisms unlocks an expansive literature on good ways to do mechanism discovery. This literature will make it easier to codify what good investigators do, which, among other benefits, allows us to better teach sound methodological practices to incoming investigators.

Our paper on this topic was published in the open-access Journal of Cybersecurity, as Thinking about intrusion kill chains as mechanisms, by Jonathan M. Spring and Eric Hatleback.

Battery Status readout as a privacy risk

Privacy risks and threats arise even in seemingly innocuous mechanisms, and they do so fairly regularly.

Over a year ago, I was researching the privacy risks of the W3C Battery Status API. The mechanism allows a web site to read the battery level of a device (smartphone, laptop, etc.). One positive use case might be, for example, stopping the execution of intensive operations if the battery is running low.
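A minimal sketch of that intended use might look like the following (assuming a browser that still exposes navigator.getBattery(); pauseExpensiveWork() is a hypothetical application function, not part of the API):

// Minimal sketch of the intended use case: back off when the battery is low.
if (navigator.getBattery) {
  navigator.getBattery().then((battery) => {
    if (!battery.charging && battery.level < 0.2) {
      pauseExpensiveWork(); // hypothetical application hook
    }
  });
}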

Our privacy analysis of Battery Status API revealed interesting results.

Privacy analysis of Battery API

The battery status provides the following information:

  • the current battery level (a value from 0.0, for an empty battery, to 1.0, for a full one)
  • time to a full discharge of battery (in seconds)
  • time to a full charge of battery, if connected to a charger (in seconds)

These items are updated whenever a new value is supplied by the operating system.

It turns out that privacy risks may surface even in this kind of – seemingly innocuous – data and access mechanisms.

Frequency of changes

The frequency of changes in the readouts reported by the Battery Status API potentially allows the monitoring of users’ computer use habits; for example, it could enable analysis of how frequently the user’s device is under heavy use. This could lead to behavioural analysis.

Additionally, identical computer installations deployed in standard environments (e.g. schools, offices, etc.) are often behind a NAT. In simple terms, NAT allows a number of users to browse the Internet from what is, externally, a single IP address. The ability to observe differences between otherwise identical computer installations potentially allows particular users to be identified (and targeted?).

Battery readouts as identifiers

The information provided by the Battery Status API is not always subject to rapid change. In other words, it may remain static for a period of time, which in turn may give rise to a short-lived identifier. The situation gets especially interesting when we consider a scenario in which users sometimes clear standard web identifiers (such as cookies). In such a case, a web script could potentially analyse the identifiers provided by the Battery Status API, and this information could then possibly even lead to the re-creation of other identifiers. A simple sketch follows.
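As a rough illustration (a minimal sketch, assuming the API is still exposed; not the full analysis), a script could simply concatenate the readouts into a temporary key that survives cookie clearing for as long as the values stay constant:

// Rough illustration: combine battery readouts into a short-lived identifier.
if (navigator.getBattery) {
  navigator.getBattery().then((battery) => {
    // While these values remain constant, the same device yields the same key,
    // even after cookies have been cleared.
    const key = [battery.level, battery.chargingTime, battery.dischargingTime].join('|');
    console.log('short-lived battery key:', key);
  });
}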

Continue reading Battery Status readout as a privacy risk

Analyzing privacy aspects of the W3C Vibration API

When making web standards, multiple scenarios possibly affecting privacy are considered, including even extreme ones – and this is a good thing. It is best to predict the creative use and abuse of web features before they are exploited.

Vibration API

The mechanism allowing websites to utilize a device’s vibration motor is called the Vibration API. It allows a device to be vibrated in particular patterns. The argument to the vibrate() function is a list of durations in milliseconds, called a pattern: its entries alternate between vibration times and still periods. For example, a web designer can make the device vibrate for a specific duration, say 50 ms, followed by a still period of 100 ms, using the following call:

navigator.vibrate([50, 100])

In certain circumstances this can create several interesting potential privacy risks. Let’s look at the Vibration API from a privacy point of view. I will consider a number of scenarios on various technical levels.

Toy de-anonymisation scenario

One potential risk is the identification of a particular person in real life. Imagine several people in the same room placing their devices on a table. At some point, one person’s device vibrates in specific patterns. This individual might then become marked to a potential observer.
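A hypothetical script (a sketch only; tagVisitor() and the 4-bit tag are illustrative, not part of any real API) could encode a visitor-specific tag as a pattern of long and short pulses that an observer in the room could read off the buzzing device:

// Hypothetical sketch: mark a visitor by vibrating a visitor-specific pattern.
function tagVisitor(visitorId) {
  const pattern = [];
  for (let bit = 3; bit >= 0; bit--) {
    pattern.push((visitorId >> bit) & 1 ? 400 : 100); // long or short pulse
    pattern.push(200);                                // pause between pulses
  }
  if ('vibrate' in navigator) {
    navigator.vibrate(pattern);
  }
}

tagVisitor(11); // buzzes long-short-long-long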

How could such a script be delivered? One possibility is through web advertising infrastructures, which offer the capability of targeting individuals with considerable accuracy (with respect to their location).

Continue reading Analyzing privacy aspects of the W3C Vibration API

Exceptional access provisions in the Investigatory Powers Bill

The Investigatory Powers Bill, being debated in Parliament this week, proposes the first wide-scale update in 15 years to the surveillance powers of the UK law-enforcement and intelligence agencies.

The Bill has several goals: to consolidate some existing surveillance powers currently either scattered throughout other legislation or not even publicly disclosed, to create a wide range of new surveillance powers, and to change the process of authorisation and oversight surrounding the use of surveillance powers. The Bill is complex and, at 245 pages long, makes scrutiny challenging.

The Bill has had its first and second readings in the House of Commons, and has been examined by relevant committees in the Commons. The Bill will now be debated in the ‘report stage’, where MPs will have the chance to propose amendments following committee scrutiny. After this it will progress to a third reading, and then to the House of Lords for further debate, followed by final agreement by both Houses.

So far, four committee reports have been published examining the draft Bill, from the Intelligence and Security Committee of Parliament, the joint House of Lords/House of Commons committee specifically set up to examine the draft Bill, the House of Commons Science and Technology committee (to which I served as technical advisor) and the Joint Committee on Human Rights.

These committees were faced with the difficult task of meeting an accelerated timetable for the Bill, with the government aiming to have it become law by the end of 2016. The reason for the haste is that the Bill would re-instate and extend the ability of the government to compel companies to collect data about their users, even without there being any suspicion of wrongdoing, known as “data retention”. This power was previously set out in the EU Data Retention Directive, but in 2014 the European Court of Justice found it to be unlawful.

Emergency legislation passed to temporarily permit the government to continue their activities will expire in December 2016 (but may be repealed earlier if an appeal to the European Court of Justice succeeds).

The four committees which examined the Bill together made 130 recommendations, but since the draft was published the government has only slightly changed the Bill, and only a few minor amendments were accepted by the Public Bill Committee.

Many questions remain about whether the powers granted by the Bill are justifiable and subject to adequate oversight, but where insights from computer security research are particularly relevant is on the powers to grant law enforcement the ability to bypass normal security mechanisms, sometimes termed “exceptional access”.

Continue reading Exceptional access provisions in the Investigatory Powers Bill

Adblocking and Counter-Blocking: A Slice of the Arms Race

Anti-adblocking message from WIRED
If you use an adblocker, you are probably familiar with messages of the kind shown above, asking you to either disable your adblocker, or to consider supporting the host website via a donation or subscription. This is the battle du jour in the ongoing adblocking arms race — and it’s one we explore in our new report Adblocking and Counter-Blocking: A Slice of the Arms Race.

The reasons for the rising popularity of adblockers include an improved browsing experience, better privacy, and protection against malvertising. As a result, online advertising revenue is gravely threatened by adblockers, prompting publishers to actively detect adblock users, and subsequently block them or otherwise coerce them into disabling the adblocker — practices we refer to as anti-adblocking. While there has been a degree of sound and fury on the topic, until now we haven’t been able to understand the scale, mechanisms and dynamics of anti-adblocking. This is the gap we have started to address, together with researchers from the University of Cambridge, Stony Brook University, University College London, University of California Berkeley, Queen Mary University of London and the International Computer Science Institute (Berkeley). We address some of these questions by leveraging a novel approach for identifying third-party services shared across multiple websites, to present a first characterization of anti-adblocking across the Alexa Top-5K websites.

We find that at least 6.7% of Alexa Top-5K websites employ anti-adblocking, with the practices finding adoption across a diverse mix of publishers, particularly in the “General News”, “Blogs/Wiki”, and “Entertainment” categories. It turns out that these websites owe their anti-adblocking capabilities to 14 unique scripts pulled from 12 unique domains. Unsurprisingly, the most popular domains are those that have skin in the game — Google, Taboola, Outbrain, Ensighten and Pagefair — the latter being a company that specialises in anti-adblocking services. Then there are in-house anti-adblocking solutions that are distributed by a domain to client websites belonging to the same organisation: TripAdvisor distributes an anti-adblocking script to its eight websites with different country code top-level domains, while adult websites (all hosted by MindGeek) turn to DoublePimp. Finally, we visited a sample website for each anti-adblocking script with AdBlock Plus, Ghostery and Privacy Badger enabled, and discovered that half of the 12 anti-adblocking suppliers are counter-blocked by at least one adblocker — suggesting that the arms race has already entered the next level.
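How do such scripts detect an adblocker in the first place? A common generic technique, sketched below in simplified form (this is not any specific vendor’s code), is to insert a “bait” element with ad-like class names and check whether the adblocker’s element-hiding rules have removed it:

// Generic "bait element" sketch, simplified; not any specific vendor's script.
function detectAdblock(callback) {
  const bait = document.createElement('div');
  bait.className = 'ad ad-banner adsbox'; // class names commonly hidden by filter lists
  bait.style.cssText = 'position:absolute;left:-9999px;height:1px;width:1px;';
  document.body.appendChild(bait);
  // Give the adblocker a moment to apply its element-hiding rules.
  setTimeout(() => {
    const blocked = bait.offsetHeight === 0 ||
                    getComputedStyle(bait).display === 'none';
    bait.remove();
    callback(blocked);
  }, 100);
}

detectAdblock((blocked) => {
  if (blocked) {
    console.log('adblocker detected: show the "please disable" message');
  }
});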

It is hard to say how many levels deeper the adblocking arms race might go. While anti-adblocking may provide temporary relief to publishers, it is essentially a band-aid solution to mask a deeper issue — the disequilibrium between ads (and, particularly, their behavioural tracking back-end) and information. Any long-term solution must address the reasons that brought users to adblockers in the first place. In the meantime, as the arms race continues to escalate, we hope that studies such as ours will bring transparency to this opaque subject, and inform policy that moves us out of the current deadlock.

 

“Ad-Blocking and Counter Blocking: A Slice of the Arms Race” by Rishab Nithyanand, Sheharbano Khattak, Mobin Javed, Narseo Vallina-Rodriguez, Marjan Falahrastegar, Julia E. Powles, Emiliano De Cristofaro, Hamed Haddadi, and Steven J. Murdoch. arXiv:1605.05077v1 [cs.CR], May 2016.

This post also appears on the University of Cambridge Computer Laboratory Security Group blog, Light Blue Touchpaper.

On the hunt for Facebook’s army of fake likes

As social networks are increasingly relied upon to engage with people worldwide, it is crucial to understand and counter fraudulent activities. One of these is “like farming” – the process of artificially inflating the number of Facebook page likes. To counter it, researchers worldwide have designed detection algorithms to distinguish between genuine likes and artificial ones generated by farm-controlled accounts. However, it turns out that more sophisticated farms can often evade detection tools, including those deployed by Facebook.

What is Like Farming?

Facebook pages allow their owners to publicize products and events and in general to get in touch with customers and fans. They can also promote them via targeted ads – in fact, more than 40 million small businesses reportedly have active pages, and almost 2 million of them use Facebook’s advertising platform.

At the same time, as the number of likes attracted by a Facebook page is considered a measure of its popularity, an ecosystem of so-called “like farms” has emerged that inflates the number of page likes. Farms typically do so either to later sell these pages to scammers at an increased resale/marketing value, or as a paid service to page owners. Costs for like farms’ services are quite volatile, but they typically range between $10 and $100 per 100 likes, depending in part on whether one wants to target specific regions — e.g., likes from US users are usually more expensive.

Screenshot from http://www.getmesomelikes.co.uk/

How do farms operate?

There are a number of possible ways farms can operate, and ultimately this dramatically influences not only their cost but also how hard they are to detect. One obvious way is to instruct fake accounts; however, opening a fake account is somewhat cumbersome, since Facebook now requires users to solve a CAPTCHA and/or enter a code received via SMS. Another strategy is to rely on compromised accounts, i.e., to control real accounts whose credentials have been illegally obtained from password leaks or through malware. For instance, fraudsters could obtain Facebook passwords through a malicious browser extension on the victim’s computer, by hijacking a Facebook app, via social engineering attacks, or by finding credentials leaked from other websites (and dumped on underground forums) that are also valid on Facebook.

Continue reading On the hunt for Facebook’s army of fake likes