The Acropalypse vulnerability in Windows Snip and Sketch, lessons for developer-centered security

Acropalypse is a vulnerability first identified in the Google Pixel phone screenshot tool, where after cropping an image, the original would be recoverable. Since the part of the image cropped out might contain sensitive information, this was a serious security issue. The problem occurred because the Android API changed behaviour from truncating files by default to leaving existing content in place. Consequently, the beginning of the resulting image file contains the cropped content, but the end of the original file is still present. Image viewers ignore this data and open the file as usual, but with some clever analysis of the compression algorithm used, the original image can (partially) be recovered.

Shortly after the vulnerability was announced, someone noticed that the Windows default screenshot tool, Snip and Sketch, appeared to have the same problem, despite being an entirely unrelated application on a different operating system. I also found a similar problem back in 2004 relating to JPEG thumbnail images. When the same vulnerability keeps recurring, it suggests a systemic problem in how we build software, so I set out to understand more about the reasons for the vulnerability existing in Windows Snip and Sketch.

A flawed API

The first problem I found is that the modern Windows API for saving files has a very similar flaw to the one in Android: existing files are not truncated by default. Arguably, the Windows vulnerability is worse because, unlike Android, there is no option to truncate files. The Windows documentation is, at best, unclear on the need to truncate files and on what code is needed to achieve the desired result.

This wasn’t always the case. The old Win32 API for saving a file was (roughly) to show a file picker, get the filename the user selected, and then open the file. To open a file, the programmer must specify whether to overwrite the file or not, and example code usually does overwrite the file. However, the new “more secure” Universal Windows Platform (UWP) sandboxes the file picker in a separate process, allowing neat features like capability-based access control. It creates the file if needed and returns a handle which, if the selected file exists, will not overwrite the existing content.

From the documentation, however, a programmer would understandably assume that the file would be empty:

“The file name, extension, and location of this storageFile match those specified by the user, but the file has no content.”
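To see why that assumption is dangerous, here is a minimal sketch of the underlying pitfall in Python (purely illustrative – not the UWP API, and the file name is made up): if a shorter 'cropped' image is written over an existing file without truncation, the tail of the original stays on disk.

```python
import os

# Simulate the original screenshot: a "large" file on disk.
with open("screenshot.png", "wb") as f:        # "wb" truncates, like the old Win32 pattern
    f.write(b"ORIGINAL-DATA" * 1000)           # 13,000 bytes

# Simulate re-saving the cropped image over the same file through an
# API that does NOT truncate (note: no O_TRUNC flag).
fd = os.open("screenshot.png", os.O_WRONLY)    # existing content is left in place
os.write(fd, b"CROPPED-DATA")                  # only the first 12 bytes are replaced
os.close(fd)

# The file is still 13,000 bytes: everything after the cropped data
# is the tail of the original image, ready to be (partially) recovered.
print(os.path.getsize("screenshot.png"))       # 13000, not 12
```

The fix is either to open the file with truncation or to explicitly set the stream length to the number of bytes written before closing it.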

Continue reading The Acropalypse vulnerability in Windows Snip and Sketch, lessons for developer-centered security

“I am yet to meet a young person that has not experienced some form of abuse via tech”

Technology-facilitated abuse describes the misuse of digital systems such as smartphones or other Internet-connected devices to monitor, control and harm individuals. In recent years, increasing attention has been given to this phenomenon in school settings and the criminal justice system. Yet awareness in the healthcare sector is lacking. To address this gap, Dr Isabel Straw and Dr Leonie Tanczer from University College London (UCL) have been leading a new research project that examines technology-facilitated abuse in medical settings.

Technology-facilitated forms of abuse are on the rise, with perpetrators adapting digital technologies such as smartphones and drones, trackers such as AirTags, and spyware tools including parental control software, to cause harm. The impact of technology-facilitated abuse on patients may not always be immediately obvious to healthcare professionals. For instance, smart, Internet-connected devices have been shown to be misused in domestic abuse cases to inflict physical harm. Smart locks have been used to trap individuals inside their homes, smart thermostats have been used to inflict extremes of temperature on victims, and remotely controlled lighting and sound systems have been manipulated to cause psychological distress. COVID-19 catalysed the proliferation of these technologies within our environment, with sales of smart devices increasing by 30% compared to the previous year. Yet, while these tools are advertised for their promised safety and convenience, they are also providing new avenues for violence, harassment, and abuse.

The impact of technology-facilitated abuse is especially notable on young people. In recent years, pediatric safeguarding guidelines have been amended in response to increasing rates of knife crime, gang violence and drug trafficking in the UK. However, technology-facilitated abuse has evolved at a parallel rate and has not received the same level of attention. The impact of technology-facilitated abuse on children and teenagers may manifest as emotional distress, anxiety, and suicidal ideation. Koubel reports the exacerbation of mental health risks born from websites that encourage self-harm, eating disorders, and suicide. Furthermore, technology-facilitated dating abuse and sextortion are increasing amongst adolescent populations. With 10% of children being affected by sexual solicitation online, the problem is widespread and under-investigated. As reported by Stonard et al. in “They’ll Always Find a Way to Get to You”, digital devices are playing an increasing role in relationship abuse amongst young people.

Vulnerable individuals frequently perceive medical settings as a place of safety. Healthcare professionals thus have a role in providing both medical and psychosocial care to ensure their patients’ wellbeing. At present, existing clinical and patient management protocols are outdated and do not address the emerging threats of technology-facilitated abuse. Clinical safeguarding protocols need a radical update if they are to help professionals navigate these high-risk scenarios and provide effective care to patients affected by technological elements of abuse and violence.

Continue reading “I am yet to meet a young person that has not experienced some form of abuse via tech”

US proposes to protect bank customers from Authorised Push Payment fraud

This week, at the US House Financial Services Committee hearing, Representative Stephen F. Lynch announced a draft of the Protecting Consumers From Payment Scams Act. If enacted, this would expand the existing protection for US customers (Regulation E) who have funds transferred out of their account without their consent, to also cover when the customer is tricked into performing the fraudulent transfer themselves. This development is happening in parallel with efforts in the UK and elsewhere to reduce fraud and better protect victims. However, the draft act’s approach is notably different from the UK approach – it’s simpler, gives stronger protection to customers, and shifts liability to the bank receiving fraudulent transfers. In this post, I’ll discuss these differences and what the implications might be.

The type of fraud the proposed law deals with, where criminals coerce victims into making payment under false pretences, is known as Authorised Push Payment (APP) fraud and is a problem worldwide. In the UK, APP fraud is now by far the most common type of payment fraud, with losses of £355 million in the first half of 2021, more than all types of card fraud put together (£261 million).

APP fraud falls outside of existing consumer protection, so victims are commonly held liable for the losses. The effects can be life-changing, with people losing 6-figure sums within minutes. It’s therefore welcome to see moves towards better consumer protection. The UK was one of the first to tackle this problem, with a voluntary code of practice being put in place following years of campaigning by consumer rights organisations, particularly Which?.

Continue reading US proposes to protect bank customers from Authorised Push Payment fraud

Pre-loading HSTS for sibling domains through this one weird trick

The vast majority of websites now support encrypted connections over HTTPS. This prevents eavesdroppers from monitoring or tampering with people’s web activity and is great for privacy. However, HTTPS is optional, and all browsers still support plain unsecured HTTP for when a website doesn’t support encryption. HTTP is commonly the default, and even when it’s not, there’s often no warning when access to a site falls back to using HTTP.

The optional nature of HTTPS is its weakness and can be exploited through tools like sslstrip, which force browsers to fall back to HTTP, allowing the attacker to eavesdrop on or tamper with the connection. In response to this weakness, HTTP Strict Transport Security (HSTS) was created. HSTS allows a website to tell the browser that only HTTPS should be used in future. As long as someone visits an HSTS-enabled website once over a trustworthy Internet connection, their browser will refuse any attempt to fall back to HTTP. If that person then uses a malicious Internet connection, the worst that can happen is that access to that website will be blocked; tampering and eavesdropping are prevented.

Still, someone needs to visit the website once before an HSTS setting is recorded, leaving a window of opportunity for an attacker. The sooner a website can get its HSTS setting recorded, the better. One aspect of HSTS that helps is that a website can indicate not only that it should be HSTS-enabled, but that all its subdomains should be too. For example, planet.wikimedia.org can say that the subdomain en.planet.wikimedia.org is HSTS-enabled. However, planet.wikimedia.org can’t say that commons.wikimedia.org is HSTS-enabled because they are sibling domains. As a result, someone would need to visit both commons.wikimedia.org and planet.wikimedia.org before both websites would be protected.
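For reference, the subdomain behaviour comes from the includeSubDomains directive of the Strict-Transport-Security response header. A quick way to see whether a parent domain sets it is to inspect its response headers – a small sketch using Python's third-party requests library (the header value shown in the comment is only indicative):

```python
import requests

# Fetch the parent domain and read its HSTS policy from the response headers.
resp = requests.get("https://wikimedia.org/", timeout=10)
hsts = resp.headers.get("Strict-Transport-Security", "")
print(hsts)  # e.g. "max-age=106384710; includeSubDomains; preload"

# includeSubDomains extends the policy to subdomains (e.g. en.planet.wikimedia.org),
# but never to sibling domains (commons.wikimedia.org vs planet.wikimedia.org).
print("covers subdomains:", "includesubdomains" in hsts.lower())
```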

What if HSTS could be applied to sibling domains and not just subdomains? That would allow one domain to protect accesses to another. The HSTS specification explicitly excludes this feature, for a good reason: discovering whether two sibling domains are run by the same organisation is fraught with difficulty. However, it turns out there’s a way to “trick” browsers into pre-loading HSTS status for sibling domains.

Continue reading Pre-loading HSTS for sibling domains through this one weird trick

Still treating users as the enemy: entrapment and the escalating nastiness of simulated phishing campaigns

Three years ago, we made the case against phishing your own employees through simulated phishing campaigns. They do little to improve security: click rates tend to be reduced (temporarily) but not to zero – and each remaining click can enable an attack. They also have a hidden cost in terms of productivity – employees have to spend time processing more emails that are not relevant to their work, and then spend more time pondering whether to act on emails. In a recent paper, Melanie Volkamer and colleagues provided a detailed listing of the pros and cons from the perspectives of security, human factors and law. One of the legal risks was finding yourself in court with one of the 600-pound digital enterprise gorillas for trademark infringement – Facebook objected to their trademark and domain being impersonated. They also likely don’t want their brand to be used in attacks because, contrary to what some vendors tell you, being tricked by your employer is not a pleasant experience. Negative emotions experienced with an event often transfer to anyone or anything associated with it – and negative emotions are not what you want associated with your brand if your business depends on keeping billions of users engaging with your services as often as possible.

Recent tactics employed by the providers of phishing campaigns can only be described as entrapment – to “demonstrate” the need for their services, they create messages that almost everyone will click on. Employees of the Chicago Tribune and GoDaddy, for instance, received emails promising bonuses. Employees had their hopes of extra pay raised and then cruelly dashed, and, on top of that, were hectored for being careless about phishing. Some employees vented their rage publicly on Twitter, and the companies involved apologised. The negative publicity may eventually be forgotten, but the resentment of employees who feel not only tricked but humiliated and betrayed will not fade any time soon. The increasing nastiness of entrapment has seen employees targeted with promises of COVID vaccinations from their employer – and then ridiculed for their gullibility instead of lauded for their willingness to help.

Continue reading Still treating users as the enemy: entrapment and the escalating nastiness of simulated phishing campaigns

The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

As evidence produced by a computer is often used in court cases, there are necessarily presumptions about the correct operation of the computer that produces it. At present, based on a 1997 paper by the Law Commission, it is assumed that a computer operated correctly unless there is explicit evidence to the contrary.

The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.

This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.

LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s belief in Colin Tapper’s statement in 1991 that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to have bugs (including undiscovered bugs), but the occurrence of a bug may not be noticeable.

LLTT put forward three recommendations. First, a presumption that any particular computer system failure is not caused by software is not justified, even for software that has previously been shown to be very reliable. Second, evidence of previous computer failure undermines a presumption of current proper functioning. Third, the fact that a class of failures has not happened before is not a reason for assuming it cannot occur.

Continue reading The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence

By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

Here I describe analysis by myself and colleagues Albesë Demjaha and David Pym at UCL, which originally appeared at the STAST workshop in late 2019 (where it was awarded best paper). The work was the basis for a talk I gave at Cambridge Computer Laboratory earlier this week (I thank Alice Hutchings and the Security Group for hosting the talk, as it was also an opportunity to consider this work alongside themes raised in our recent eCrime 2019 paper).

Secure behaviour in organisations

Both research and practice have shown that security behaviours, encapsulated in policy and advised in organisations, may not be adopted by employees. Employees may not see how advice applies to them, find it difficult to follow, or regard the expectations as unrealistic. Employees may, as a consequence, create their own alternative behaviours in an effort to approximate secure working (rather than totally abandoning security). Organisational support can then be critical to whether secure practices persist. Economics principles can be applied to explain how complex systems such as these behave the way they do, and so here we focus on informing an overarching goal to:

Provide better support for ‘good enough’ security-related decisions, by individuals within an organization, that best approximate secure behaviours under constraints, such as limited time or knowledge.

Traditional economics assumes decision-makers are rational, and that they are equipped with the capabilities and resources to make the decision which will be most beneficial for them. However, people have reasons, motivations, and goals when deciding to do something — whether they do it well or badly, they do engage in thinking and reasoning when making a decision. We must capture how the decision-making process looks for the employee, as a bounded agent with limited resources and knowledge to make the best choice. This process is more realistically represented in behavioural economics. And yet, behaviour intervention programmes mix elements of both of these areas of economics. It is by considering these principles in tandem that we explore a more constructive approach to decision-support in organisations.

Contradictions in current practice

A bounded agent often settles for a satisfactory decision, by satisficing rather than optimising. For example, the agent can turn to ‘rules of thumb’ and make ad-hoc decisions, based on a quick evaluation of perceived probability, costs, gains, and losses. We can already imagine how these restrictions may play out in a busy workplace. This leads us toward identifying those points of engagement at which employees ought to be supported, in order to avoid poor choices.

Continue reading By revisiting security training through economics principles, organisations can navigate how to support effective security behaviour change

UK Parliament on protecting consumers from economic crime

On Friday, the UK House of Commons Treasury Committee published their report on the consumer perspective of economic crime. I’ve frequently addressed this topic in my research, as well as here on Bentham’s Gaze, so I’m pleased to see several of the committee’s recommendations match what my colleagues and I have proposed. In other respects, the report could have gone further, so as well as discussing the positive aspects of the report, I would also like to suggest what more could be done to reduce economic crime and protect its victims.

Irrevocable payments are the wrong default

Transfers between UK bank accounts will generally use the Faster Payment System (FPS), where money will immediately show up in the recipient account. FPS transfers cannot be revoked, even in the case of fraud. This characteristic protects banks because if fraudulently obtained funds leave the banking system, the bank receiving the transfer has no obligation to reimburse the victim.

In contrast, the clearing system for paper cheques permits payments to be revoked for a few days after the funds appear in the recipient’s account, should there have been a fraud. This period allows customers to quickly make use of funds they receive, while still giving a window of opportunity for banks and customers to identify and prevent fraud. There’s no reason why this same revocation window could not be applied to fully electronic payment systems like FPS.

In my submissions to consultations on how to prevent push payment scams, I argued that irrevocable payments are the wrong default, and transfers should be possible to reverse in cases of fraud. The same argument applies to consumer-oriented cryptocurrencies like Libra. I’m pleased to see that the Treasury Committee agrees and they have recommended that when a customer sends money to an account for the first time, that transfer be revocable for 24 hours.

Introducing Confirmation of Payee, finally

The banking industry has been planning on launching the Confirmation of Payee system to check whether the name of the recipient of a transfer matches what the customer sending money thinks. The committee is clearly frustrated with delays in deploying this system, which was first promised for September 2018 but has since slipped to March 2020. Confirmation of Payee will be a helpful tool for customers to help avoid certain frauds. Still, I’m pleased the committee also recognises its limitations and that the “onus will always be on financial firms to develop further methods and technologies to keep up with fraudsters.” It is for this reason that I argued that a bank showing a customer a Confirmation of Payee mismatch should not be a sufficient condition to hold customers liable for fraud, and that the push-payment scam reimbursement scheme is wrong to do so. It doesn’t look like the committee is asking for the situation to be changed though.

Continue reading UK Parliament on protecting consumers from economic crime

Beyond Regulators’ Concerns, Facebook’s Libra Cryptocurrency Faces another Big Challenge: The Risk of Fraud

Facebook has attracted attention through the announcement of their blockchain-based payment network, Libra. This won’t be the first payment system Facebook has launched, but what makes Facebook’s Libra distinctive is that rather than transferring euros or dollars, the network is designed for a new cryptocurrency, also called Libra. This currency is backed by a reserve of nationally-issued currencies, and so Facebook hopes it will avoid the high volatility of cryptocurrencies like Bitcoin. As a result, Libra won’t be attractive to currency speculators, but Facebook hopes this will make it useful for its stated goal – to be a “simple global currency and financial infrastructure that empowers billions of people.”

Reducing currency volatility is only one step towards meeting this goal of scaling cryptocurrencies to billions of users. The Libra blockchain design addresses how the network can maintain the high throughput and low transaction fees needed to compete with existing payment networks like Visa or MasterCard. However, a question that is equally important but as yet unanswered is how Facebook will develop a secure authentication and fraud prevention system that can scale to billions of users while maintaining good usability and low cost.

Facebook designed the Libra network, but in contrast to traditional payment networks, the Libra network is open. Anyone can send transactions through the network, and anyone can write programs (known as “smart contracts”) that control how, and under what conditions, funds can move between Libra accounts. To comply with anti-money-laundering regulations, Know Your Customer (KYC) checks will be performed, but only when Libra enters or leaves the network through exchanges. Transactions moving funds within the network should be accepted if they meet the criteria set out in the applicable smart contract, regardless of who sent them.

The Libra network isn’t even restricted to transactions transferring the Libra currency. Facebook has explicitly designed the Libra blockchain to make it easy for anyone to implement their own currency and benefit from the same technical facilities that Facebook designed for its currency. Other blockchains have tried this. For example, Ethereum has spawned hundreds of special-purpose currencies. But programming a smart contract to implement a new currency is difficult, and errors can be costly. The programming language for smart contracts within the Libra network is designed to help developers avoid some of the most common mistakes.

Facebook’s Libra and Securing the Calibra Wallet

There’s more to setting up an effective currency than just the technology: regulatory compliance, a network of exchanges, and monetary policy are essential. Facebook, through setting up the Libra Association, is focusing its efforts here solely on the Libra currency. The widespread expectation is, therefore, that at least initially the Libra cryptocurrency will be the dominant use of the network, and most users will send and receive funds through the Calibra wallet smartphone app, developed by a Facebook subsidiary. From the perspective of the vast majority of the world, the Calibra wallet will be synonymous with Facebook’s Libra, and so damage to trust in Calibra will damage the reputation of Libra as a whole.

Continue reading Beyond Regulators’ Concerns, Facebook’s Libra Cryptocurrency Faces another Big Challenge: The Risk of Fraud

Next version of Android might introduce new security risks for online banking, 2FA, and more

Google is preparing new functionality for Android that will allow apps to retrieve and auto-fill security codes from SMS. Last year Apple introduced a similar feature to iOS and macOS, for which we discovered security risks for online banking, two-factor authentication, and other services. Will Google come up with a better design? In this post, we analyse what we know about this feature so far.

The latest developer beta of Google Play Services (18.7.13 beta) contains code fragments that show a new Android permission to automatically retrieve verification codes from text messages. This feature has not yet been fully implemented, but the available code allows for some analysis and early evaluation for possible security risks, akin to similar risks we demonstrated in 2018 for the Security Code AutoFill feature in iOS and macOS.
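As a rough illustration of what 'retrieving a verification code from a text message' involves (a hedged sketch, not Google's implementation – the function and regular expression below are our own), such a code is typically pulled out of the message body with a simple pattern match:

```python
import re

def extract_verification_code(sms_body: str) -> str | None:
    """Heuristically extract a 4-8 digit one-time code from an SMS body.

    A sketch only: a real implementation would also consider the sender,
    known message templates, and localisation.
    """
    match = re.search(r"\b(\d{4,8})\b", sms_body)
    return match.group(1) if match else None

print(extract_verification_code("Your verification code is 834201"))  # 834201
print(extract_verification_code("Running late, see you at 8"))        # None
```

How much of a risk this creates depends largely on how much of the surrounding message context the auto-fill UI shows the user when offering the extracted code.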

Background

It seems that Google is updating the “Autofill Framework”, introduced with Android 8.0 in 2017, to include the new functionality. Previously, this framework’s sole purpose was to support the autofill functionality of password managers in Android apps and websites. The code fragments of this new feature reveal the names and descriptions of the associated system setting and corresponding runtime permission requests, shown below.

[Figure: The likely UI of the new setting in Android to enable/disable SMS Code Auto-fill.]
[Figure: The likely UI of the new runtime permission request in Android to deny or allow an application’s access to the SMS Code Auto-fill feature.]

Continue reading Next version of Android might introduce new security risks for online banking, 2FA, and more