Confirmation of Payee is coming, but will it protect bank customers from fraud?

The Payment Systems Regulator (PSR) has just announced that the UK’s six largest banks must check whether the name of the recipient of a transfer matches what the sender thinks. This new feature should help address a security loophole in online payments: the name of the recipient of a transfer is ignored, contrary to expectations and unlike cheques. The improved security should make some fraud more difficult, but banks must be prevented from exploiting the change to unfairly shift liability for the remaining fraud onto the victims.

The PSR’s target is for checks to be fully implemented by March 2020, somewhat later than its initial promise to Parliament of September 2018 and its subsequent target of July 2019. The new measure, known as Confirmation of Payee, covers only the six largest banking groups, but these should account for around 90% of transfers. Its goal is to defend against criminals who trick victims into transferring funds under the false pretence that the money is going to the victim’s new account, when it is really going to the criminal. The losses from such fraud, known as push payment scams, are often life-changing, resulting in misery for the victims.

Checks on the recipient name will make this particular scam harder, and while they are unlikely to prevent all types of push payment scam, they will hopefully force criminals to adopt strategies that are easier to prevent. The risk that consumer representatives and regulators will need to watch out for is that these new security measures could result in victims being unfairly held liable. This scenario is, unfortunately, likely because the voluntary consumer protection code for push payment scams excuses the bank from liability if it has shown the customer a Confirmation of Payee warning.

Warning fatigue and misaligned incentives

In my response to the consultation over this consumer protection code, I raised the issue of “warning fatigue” – that customers will be shown many irrelevant warnings while they do online banking, reducing the likelihood that they will notice important ones. Even Confirmation of Payee warnings will frequently be wrong, such as when the recipient’s bank account is under a different name from the one the sender expects. If the two names are very dissimilar, the sender won’t be given more details; but if the name entered is close to the name in bank records, the sender should be told the correct name and asked to compare.
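To make the tiered behaviour concrete, here is a minimal sketch of such a check in Python. The use of a generic string-similarity measure (difflib) and the 0.8 cut-off are illustrative assumptions of mine; the actual Confirmation of Payee matching rules are set by the scheme and are more sophisticated.

```python
from difflib import SequenceMatcher

CLOSE_MATCH = 0.8  # illustrative threshold, not the scheme's real cut-off

def check_payee(entered_name: str, account_name: str) -> str:
    """Tiered name check mirroring the behaviour described above."""
    similarity = SequenceMatcher(
        None, entered_name.casefold(), account_name.casefold()
    ).ratio()
    if similarity == 1.0:
        return "match"
    if similarity >= CLOSE_MATCH:
        # Close match: tell the sender the name the bank actually holds
        return f"close match - did you mean '{account_name}'?"
    # Very dissimilar: give no details, to avoid leaking account names
    return "no match - check the details with the intended recipient"

print(check_payee("J Smith", "John Smith"))  # close match
print(check_payee("J Smith", "Acme Ltd"))    # no match
```

Note the deliberate asymmetry: revealing the stored name only on a near miss limits how easily a criminal could use the service to harvest account holders’ names.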


Digital Exclusion and Fraud – the Dark Side of Payments Authentication

Today, the Which? consumer rights organisation released the results from its study of how people are excluded from financial services as a result of banks changing their rules to mandate that customers use new technology. The research particularly focuses on banks now requiring that customers register a mobile phone number and be able to receive security codes in SMS messages while doing online banking or shopping. Not only does this change result in digital exclusion – customers without mobile phones or good network coverage will struggle to make payments – but as I discuss in this post, it’s also bad for security.

SMS-based security codes are being introduced to help banks meet their September 2019 deadline to comply with the Strong Customer Authentication requirements of the EU Payment Services Directive 2. These rules state that before making a payment from a customer’s account, the bank must independently verify that the customer really intended to make this payment. UK banks have almost universally decided to meet this obligation by sending a security code in an SMS message to the customer’s mobile phone and asking the customer to type this code into their web browser.
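To make this flow concrete, here is a minimal server-side sketch. The function names, message wording, six-digit code and five-minute expiry are illustrative assumptions, not any particular bank’s implementation; tying the code to the payee and amount reflects the directive’s “dynamic linking” requirement.

```python
import secrets
import time

# payment_id -> (code, payee, amount, expiry); a real system would use a
# database and rate-limit verification attempts.
pending = {}

def start_payment(payment_id, payee, amount):
    """Generate a one-time code tied to this payment and return the SMS
    text to send to the customer's registered mobile number."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending[payment_id] = (code, payee, amount, time.time() + 300)
    # Stating the payee and amount lets the customer check what the
    # code they are about to type in actually authorises.
    return f"{code} is your code for paying {amount} to {payee}."

def verify_payment(payment_id, entered_code):
    """Accept the payment only if the code matches and hasn't expired."""
    entry = pending.pop(payment_id, None)
    if entry is None:
        return False
    code, _payee, _amount, expiry = entry
    return time.time() < expiry and secrets.compare_digest(code, entered_code)

sms = start_payment("p1", "J Smith", "£100")
print(verify_payment("p1", sms.split()[0]))  # True for the correct code
```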

The problem that Which? identified is that some customers don’t have mobile phones, some that do have mobile phones don’t trust their bank with the number, and even those who are willing to share their mobile phone number with the bank might not have network coverage when they need to make a payment. A survey of Which? members found that nearly 1 in 5 said they would struggle to receive the security code they need to perform online banking transactions or online card payments. Remote locations have poorer network coverage than average and it is these areas that are likely to be disproportionately affected by the ongoing bank branch closure programmes.

Outsourcing security

The aspect of this story that I’m particularly interested in is why banks chose SMS messages as a security technology in the first place, rather than, say, sending out dedicated authentication devices to their customers or providing a smartphone app. SMS has the advantage that customers don’t need to install an app or have the inconvenience of carrying around an extra authentication device. The bank also avoids the cost of setting up new infrastructure, beyond hooking up its payment systems to the phone network. However, SMS has disadvantages – not only does it exclude customers in areas of poor network coverage, but it also effectively outsources security from the bank to the phone networks.


Justice for victims of bank fraud – learning from the Post Office trial

In London, this week, a trial is being held over a dispute between the Justice for Subpostmasters Alliance (JFSA) and the Post Office, but the result will have far-reaching repercussions for anyone disputing computer evidence. The trial currently focuses on whether the legal agreements and processes set up by the Post Office are a fair basis for managing its relationship with the subpostmasters who operate branches on its behalf. Later, the court will assess whether the fact that the Post Office computer system – Horizon – indicates that a subpostmaster is in debt to the Post Office is sufficient evidence for the subpostmaster to be indeed liable to repay the debt, even when the subpostmaster claims the accounts are incorrect due to computer error or fraud.

Disputes over Horizon have led to subpostmasters being bankrupted, losing their homes, or even being jailed, but these cases also echo the broader issues at the heart of the many phantom-withdrawal disputes I see between banks and their customers. Customers claim that money was taken from their accounts without their permission. The bank claims that its computer system shows that the customer either authorised the withdrawal or was grossly negligent, and so the customer is liable. The customer may also claim that the bank’s handling of the dispute is poor and that the contract with the bank protects the bank’s interests more than those of the customer, so is an unfair basis for managing disputes.

There are several lessons the Post Office trial will have for the victims of phantom withdrawals, particularly for cases of push payment fraud, but in this post, I’m going to explore why these issues are being dealt with first in a trial initiated by subpostmasters and not by the (far more numerous) bank customers. In later posts, I’ll look more into the specific details that are being disclosed as a result of this trial.


UK Faster Payment System Prompts Changes to Fraud Regulation

Banking transactions are rapidly moving online, offering convenience to customers and allowing banks to close branches and re-focus on marketing more profitable financial products. At the same time, new payment methods, like the UK’s Faster Payment System, make transactions irrevocable within hours, not days, and so let recipients make use of funds immediately.

However, these changes have also created a new opportunity for fraud schemes that trick victims into performing a transaction under false pretences. For example, a criminal might call a bank customer, tell them that their account has been compromised, and help them to transfer money to a supposedly safe account that is actually under the criminal’s control. Losses in the UK from this type of fraud were £145.4 million during the first half of 2018, but importantly for the public, such frauds fall outside existing consumer protection rules, leaving the customer liable for sometimes life-changing amounts.

The human cost behind this epidemic has persuaded regulators to do more to protect customers and to create incentives for banks to do a better job of preventing the fraud. These measures are coming sooner than UK Finance – the trade association for UK-based banking, payments and cards businesses – would like, but during questioning by the House of Commons Treasury Committee, its Chief Executive conceded that change is coming. The question now is who will reimburse customers who have been defrauded through no fault of their own. Who picks up the bill will depend not just on how good fraud prevention measures are, but on how effectively banks can demonstrate this fact.

UK Faster Payment Creates an Opportunity for Social Engineering Attacks

One factor that contributed to the new type of fraud is that online interactions lack the usual cues that help customers tell whether a bank is genuine. Criminals use sophisticated social engineering attacks that create a sense of urgency, combined with information gathered about the customer through illicit means, to convince even diligent victims that it could only be their own bank calling. These techniques, combined with the newly irrevocable payment system, create an ideal situation for criminals.


When Convenience Creates Risk: Taking a Deeper Look at Security Code AutoFill on iOS 12 and macOS Mojave

A flaw in Apple’s Security Code AutoFill feature can affect a wide range of services, from online banking to instant messaging.

In June 2018, we reported a problem in the iOS 12 beta. In the previous post, we discussed the associated risks the problem creates for transaction authentication technology used in online banking and elsewhere. We described the underlying issue and noted that the risk would carry over to macOS Mojave. Since our initial reports, Apple has modified the Security Code AutoFill feature, but the problem is not yet solved.

In this blog post, we publish the results of our extended analysis and demonstrate that the changes made by Apple mitigated one symptom of the problem, but did not address the cause. Security Code AutoFill could leave Apple users in a vulnerable position after upgrading to iOS 12 and macOS Mojave, exposing them to risks beyond the scope of our initial reports.

We describe four example attacks that are intended to demonstrate the risks stemming from the flawed Security Code AutoFill, but intentionally omit the detail necessary to execute them against live systems. Note that supporting screenshots and videos in this article may identify companies whose services we’ve used to test our attacks. We do not imply that those companies’ systems would be affected any more or any less than those of their competitors.

Flaws in Security Code AutoFill

The Security Code AutoFill feature extracts short security codes (e.g., a one-time password or OTP) from an incoming SMS and allows the user to autofill that code into a web form, webpage, or app when authenticating. This feature is meant to provide convenience, as the user no longer needs to memorize and re-enter a code in order to authenticate. However, this convenience could create risks for the user.
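To illustrate the mechanism, and why it is risky, here is a simplified sketch of the kind of heuristic such a feature might apply to an incoming message. This is an illustrative guess at the approach, not Apple’s actual parser; the point it demonstrates is that any such heuristic sees only that a message contains a code, not what the code authorises.

```python
import re
from typing import Optional

CODE_PATTERN = re.compile(r"\b(\d{4,8})\b")          # short numeric codes
KEYWORDS = ("code", "otp", "passcode", "password")   # illustrative triggers

def extract_code(sms_text: str) -> Optional[str]:
    """Return a candidate security code found in an SMS, if any."""
    if any(word in sms_text.lower() for word in KEYWORDS):
        match = CODE_PATTERN.search(sms_text)
        if match:
            return match.group(1)
    return None

# The heuristic cannot tell a login code from one authorising a money
# transfer - which is the crux of the risk discussed in this post.
print(extract_code("Your code 839201 authorises a payment of £1,000"))
```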


Will new UK rules reduce the harm of push-payment fraud?

On Friday’s Rip Off Britain I’ll be talking about new attempts by UK banks to prevent fraud, and the upcoming scheme for reimbursing the victims. While these developments have the potential to better protect customers, the changes could equally leave customers in a more vulnerable position than before. What will decide between these two extremes is how well the rules surrounding these new schemes are designed.

The beginning of this story is September 2016, when the consumer association – Which? – submitted a super-complaint to the UK Payment Systems Regulator (PSR) regarding push payment fraud – where a customer is tricked into transferring money into a criminal’s account. Such bank transfers are known as push payments because they are initiated by the bank sending the money, as opposed to pull payments, like credit and debit cards, where it is the receiving bank that starts the process. Banks claim that since the customer was involved in the process, they “authorised” the transaction, and so under UK and EU law the customer is not entitled to a refund. I’ve argued that this interpretation doesn’t match any reasonable definition of the word “authorised”, but nevertheless the term “authorised push payment scams” seems to have stuck as the common terminology for this type of fraud – much to the banks’ delight, I’m sure.

The Which? super-complaint asked for banks to be held liable for such frauds, and so to reimburse the victims unless the bank can demonstrate the customer acted with gross negligence. Which? argued that this approach would protect customers from a fraud that exists as a consequence of bank design decisions, and would give banks both a short-term incentive to prevent frauds that they can stop and a medium-to-long-term incentive to enhance payment systems to be resistant to fraud. The response from the PSR was disappointing: it recognised that banks should do more, but rejected the recommendation to hold banks liable for this fraud and requested only that the banks collect more data. Nevertheless, the data collected proved useful in understanding the scale of the problem – £236 million stolen from over 42,000 victims in 2017, with banks able to recover only 26% of the losses. This revelation led to Parliament asking difficult questions of the PSR.

The PSR’s alternative to holding banks liable for push payment fraud is for victims to be reimbursed if they can demonstrate they have acted with an appropriate level of care and that the bank has not. The precise definition of each level of care was the subject of a consultation, and will now be decided by a steering group consisting of representatives of the banking industry and consumers. In my response to this consultation, I explained my reasons for recommending that banks be liable for fraud, including that fairly deciding whether customers met a level of care is a process fraught with difficulties. This is particularly so because of the inequality in power between a bank and its customer, and because taking a banking dispute to court is ruinously expensive for most people since the option of customers spreading the cost through collective actions was removed from the Financial Services Act. More generally, banks – as the designers of payment systems, with real-world understanding of their use – have the greatest capacity to mitigate the risks these systems introduce.

Nevertheless, if the rules for the reimbursement scheme are set up well, it would be a substantial improvement over the current situation. On the other hand, if the process is bad, it could entrench the worst of current practices. Because the PSR has decided that reimbursement should depend on compliance with a level of care, my response also set out what the process should be for defining these levels, and for adjudicating disputes.


Security code AutoFill: is this new iOS feature a security risk for online banking?

A new feature for iPhones in iOS 12 – Security Code AutoFill – is supposed to improve the usability of Two Factor Authentication but could place users at risk of falling victim to online banking fraud.

Two Factor Authentication (2FA), often referred to as Two Step Verification, is an essential element of many security systems, especially those that are online and accessed remotely. In most cases, it provides extended security by checking whether the user has access to a device. In SMS-based 2FA, for example, a user registers their phone number with an online service. When this service sees a login attempt for the corresponding user account, it sends a One Time Password (OTP), e.g. four to six digits, to the registered phone number. The legitimate user then receives this code and can quote it during the login process, but an impersonator cannot.

At its developer conference WWDC18, Apple announced that it will automate this last step to improve the user experience of 2FA, with a new feature to be introduced in iOS 12. The Security Code AutoFill feature, currently available to developers in a beta version, will allow the mobile device to scan incoming SMS messages for such codes and suggest them at the top of the default keyboard.

Description of new iOS 12 Security Code AutoFill feature (source: Apple)

Currently, these SMS codes rely on the user actively switching apps and memorising the code, which can take a couple of seconds. Some users resort to alternative strategies, such as memorising the code from the preview banner and hastily typing it in. Apple’s new iOS feature will require only a single tap from the user. This will make the login process faster and less error-prone, a significant improvement to the usability of 2FA. It could also translate into an increased uptake of 2FA among iPhone users.

Example of Security Code AutoFill feature in operation on iPhone (source: Apple)

If users synchronise SMS with their MacBook or iMac, the existing Text Message Forwarding feature will push codes from their iPhone and enable Security Code AutoFill in Safari.

Example of Security Code AutoFill feature synchronised with macOS Mojave (source: Apple)

Reducing friction in user interaction to improve technology uptake among new users, and to increase usability and satisfaction for existing users, is not a new concept. It has not only been discussed at length in academia but is also a common goal within industry, e.g. in banking. This is evident in how the financial and payment industry has encouraged contactless (Near Field Communication – NFC) payments, which make transactions below a certain threshold much quicker than traditional Chip and PIN payments.


Incentives in Security Protocols

The 2018 edition of the International Security Protocols Workshop took place last week. The theme this year was “fail-safe and fail-deadly concepts in protocol design”.

One common theme at this year’s workshop was that of threat models and incentives, covered by the majority of accepted papers. One of these was our (Sarah Azouvi, Alexander Hicks and Steven Murdoch) submission – Incentives in Security Protocols. The aim of the paper is to discuss how incentives can be considered and incorporated into the security of systems. In line with the given theme, the focus is on fail-safe and fail-deadly cases, which we examine for the EMV protocol, consensus in cryptocurrencies, and non-economic systems such as Tor. This post will summarise the main ideas laid out in the paper.

Fail safe, fail deadly and people

Systems can fail, and system designers must account for these failures. From this setting comes the idea of fail-safe protocols: even if the protocol fails, the failure can be dealt with, or the protocol aborted, to limit damage. The fail-deadly setting is an extension of this, where failure is defended against through deterrence, as in the case of nuclear deterrence (sometimes a realistic case).

Human input often plays a role in the use of a system, particularly when decisions are required, as in fail-safe and fail-deadly instances. These decisions are made according to incentives, which can be aligned to make the system robust to failure. For a fail-deadly alignment, this means that a person in a position to prevent system failure will be harmed by the failure. In the fail-safe case, innocent parties should be protected from the consequences of system failure. The two concepts are really two sides of the same coin: assigning liability.

It is often said that people are the weakest link in security, but that is an easy excuse for broken protocols. If security incentives are aligned properly, then humans are the strongest link.

The EMV protocol, adding incentives after the fact

As a first example, we consider the EMV protocol, which is used for the majority of smart card payments worldwide, as well as smartphone and card-based contactless payments. Over the years, many vulnerabilities have been identified and removed. Fraud still exists, however, due not to unexpected protocol vulnerabilities but to decisions made by banks (e.g., omitting the ability for cards to produce digital signatures), merchants (e.g., omitting PIN verification) and payment networks (e.g., not sending transaction details back to banks). These are intentional choices, aiming to save costs and cut transaction times, but they make fraud harder to detect.
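The fail-deadly idea can be made concrete with a toy liability rule: the party that skipped a control bears the risk of the resulting fraud. The rule below is a simplified sketch of my own, not the actual EMV or card-scheme liability-shift rules.

```python
def assign_liability(card_supports_pin: bool, merchant_verified_pin: bool,
                     bank_receives_details: bool) -> str:
    """Toy rule: whoever omitted a fraud control carries the loss.

    A simplified sketch of the incentive argument above, not the real
    EMV or card-scheme liability-shift rules.
    """
    if card_supports_pin and not merchant_verified_pin:
        return "merchant"   # skipped PIN verification to cut transaction time
    if not bank_receives_details:
        return "network"    # withheld the data banks need to spot fraud
    return "issuer bank"    # remaining fraud rests with the card issuer

print(assign_liability(card_supports_pin=True, merchant_verified_pin=False,
                       bank_receives_details=True))  # -> merchant
```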


A witch-hunt for trojans in our chips

A Hardware Trojan (HT) is a malicious modification of the circuitry of an integrated circuit.

[Video: demonstration of the trojan-resilient hardware security module]

A malicious chip can make a device malfunction in several ways. It has been rumored that a hardware trojan implanted in a Syrian air-defense radar caused it to stop operating during an airstrike, instantly degrading the country’s situational awareness and threat response capabilities. In other settings, hardware trojans may leak encryption keys or other secrets, or even generate weak keys that can be easily recovered by the adversary.

This article introduces a new trojan-resilient architecture, discusses its motivation, and outlines how it differs from existing solutions. The full paper (Vasilios Mavroudis, Andrea Cerulli, Petr Svenda, Dan Cvrcek, Dusan Klinec, George Danezis) has been presented at several academic and industrial venues, including DEF CON 25 and the 2017 ACM Conference on Computer and Communications Security.

The Challenge of Detecting HT

Judging from the abundance of governmental, industrial and academic projects concerned with the prevention and detection of hardware trojans, there is a consensus regarding the severity of the threat, and it’s not taken lightly. DARPA has launched the “Integrity and Reliability of Integrated Circuits” program, aiming to develop techniques for the detection of malicious circuitry. The Intelligence Advanced Research Projects Activity funded a project aiming to redesign the fabrication of integrated circuits, while various other initiatives are currently under way (e.g., the COST Action project on “Trustworthy Manufacturing and Utilization of Secure Devices” and the DoD Trusted Foundry program). In addition, numerous other industrial and academic projects propose new trojan detection techniques every year, only to be circumvented by follow-up work.

But do Hardware Trojans exist?

Ironically, there have so far been no cases where malicious circuitry has been detected in military-grade or even commercial chips. With nothing more than rumors hinting at hardware trojans (in places other than academic lab benches), one cannot help but question their existence. In other words, is HT design and insertion too complex to be practical, or do our detection tools fail to detect the malicious circuitry embedded in the chips around us?

It could be that both are true: hardware trojans may not exist (yet), as malicious actors focus on other aspects of the hardware that are easier to compromise. In all cases where trojans were discovered, the erroneous behavior was traced to the chip’s firmware and not its circuitry. Interestingly, in the vast majority of those incidents the security flaws were attributed to honest fabrication mistakes (e.g., a manufacturer failing to disable a testing interface).

Intentional vs. Unintentional Errors

It is safe to always assume that an IC will fail in the worst possible way, at the worst possible time (see the Syrian air-defense incident above). This “crash n’ burn” approach is common in critical systems (e.g., airplanes, satellites, dams), where any divergence from normal operation will result in an irrecoverable failure of the whole system.

To mitigate the risk, critical system designers employ redundancy techniques to eliminate single points of failure and thus make their setups resilient to faults. A common example is the triple-redundant system used in autopilots. Such systems employ three identical chips sourced from disjoint supply chains and replicate all the navigation computations across them. This allows the system both to tolerate a misbehaving chip and to detect its presence.
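A minimal sketch of the voting logic behind such a setup follows; the function and values are illustrative, not taken from any real autopilot.

```python
from collections import Counter

def tmr_vote(results):
    """Majority vote over the outputs of three redundant chips.

    Masks one faulty (or malicious) chip and reports which one
    disagreed, so the failure is both tolerated and detected.
    """
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one chip misbehaving")
    suspects = [i for i, r in enumerate(results) if r != value]
    return value, suspects

# Chip 0 returns a wrong heading; the vote masks it and flags index 0.
print(tmr_vote([271.0, 270.0, 270.0]))  # -> (270.0, [0])
```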

It is noteworthy that those systems do not consider the cause of a chip malfunction, and simply assume that chips fail in the worst possible way. It follows that a malicious chip is not significantly different from a defective one. After all, any adversary sophisticated enough to design and insert a hardware trojan is capable of making it indistinguishable from honest manufacturing errors. Similarly, from an operational perspective it makes little sense to distinguish between trojans in the circuitry and trojans in the firmware, as the risk they pose for the system is identical.

Distributing Trust for Resilience

Given that it is impossible to achieve 100% detection rates for hardware trojans and errors, it is important that our devices maintain their security properties even in their presence. Our work introduces a new high-level device architecture that is resilient to both. At its core, it uses a redundancy-based architecture and secret-sharing protocols to distribute all secrets and computations among multiple chips. Hence, unless all chips are compromised by the same adversary, the security of the system remains intact. A key point is that those chips should originate from disjoint supply chains, to minimise the risk of the same adversary compromising more than one chip.

To evaluate its practicality in real-life applications, we built a Hardware Security Module (HSM) that performs standard cryptographic operations (e.g., key generation, decryption, signing) at a very high rate. HSMs are commonly used in operations where security is critical and increased transaction throughput is needed (e.g., banking, certification authorities). A demonstration is shown in the video above, and further details are on our website. Finally, our work can easily be combined with all existing detection and prevention techniques to further decrease the likelihood of compromise.
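As a simple illustration of the underlying idea, here is n-of-n XOR secret sharing, the most basic scheme with the property described above: every share (chip) is needed to reconstruct the secret, so an adversary who compromises all but one chip learns nothing. The protocols in the paper are more elaborate; treat this as a sketch of the principle only.

```python
import secrets

def split(secret: bytes, n: int):
    """Split a secret into n XOR shares; all n are needed to recover it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = bytearray(secret)
    for share in shares:
        for i, b in enumerate(share):
            last[i] ^= b
    return shares + [bytes(last)]

def combine(shares):
    """XOR the shares together to recover the secret."""
    result = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            result[i] ^= b
    return bytes(result)

key = secrets.token_bytes(16)       # e.g. a 128-bit signing key
shares = split(key, 3)              # one share held by each chip
assert combine(shares) == key       # all three chips together recover it
assert combine(shares[:2]) != key   # fewer shares reveal nothing (w.h.p.)
```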

Liability for push payment fraud pushed onto the victims

This morning, BBC Rip Off Britain focused on push payment fraud, featuring an interview with me (starts at 34:20). The distinction between push and pull payments should be a matter for payment system geeks, and certainly isn’t at the front of customers’ minds when they make a payment. However, there’s a big difference when there’s fraud: for online pull payments (credit and debit cards), the bank will give the victim the money back in many situations; for online push payments (Faster Payment System and standing orders), the full liability falls on the party least able to protect themselves – the customer.

The banking industry doesn’t keep good statistics about push payment fraud, but it appears to be increasing, with Which? receiving reports from over 650 victims in the first two weeks of November 2016, with losses totalling over £5.5 million. Today’s programme puts a human face on these statistics, presenting the case of Jane and Steven Caldwell, who were defrauded of over £100,000 from their Nationwide and NatWest accounts.

They were called up at the weekend by someone who said he was working for NatWest. To verify that this was the case, Jane used three methods. Firstly, she checked caller-ID to confirm that the number was indeed the bank’s own customer helpline – it was. Secondly, she confirmed that the caller had access to Jane’s transaction history – he did. Thirdly, she called the bank’s customer helpline, and the caller knew this was happening despite the original call being muted.

Convinced by these checks, Jane transferred funds from her own accounts to another in her own name, having been told by the caller that this was necessary to protect against fraud. Unfortunately, the caller was a scammer. Experts featured on the programme suspect that caller-ID was spoofed (quite easy, due to lack of end-to-end security for phone calls), and that malware on Jane’s laptop allowed the scammer to see transaction history on her screen, as well as to listen to and see her call to the genuine customer helpline through the computer’s microphone and webcam. The bank didn’t check that the name Jane gave (her own) matched that of the recipient account, so the scammer had full access to the transferred funds, which he quickly moved to other accounts. Only Nationwide was able to recover any money – £24,000 – leaving Jane and Steven over £75,000 out of pocket.

Neither bank offered Jane and Steven a refund, because they classed the transaction as “authorised” and so falling into one of the exceptions to the EU Payment Services Directive requirement to refund victims of fraud (the other exception being if the bank believes the customer acted either with gross negligence or fraudulently). The banks argued that their records showed that the customer’s authentication device was used and hence the transaction was “authorised”. Under the original draft of the Payment Services Directive this argument would not have been sufficient, but as a result of concerted lobbying by Barclays and other UK banks for their records to be considered conclusive, the word “necessarily” was inserted into Article 72, removing this important consumer protection.

“Where a payment service user denies having authorised an executed payment transaction, the use of a payment instrument recorded by the payment service provider, including the payment initiation service provider as appropriate, shall in itself not necessarily be sufficient to prove either that the payment transaction was authorised by the payer or that the payer acted fraudulently or failed with intent or gross negligence to fulfil one or more of the obligations under Article 69.”

Clearly the fraudulent transactions do not meet any reasonable definition of “authorised”, because Jane did not give her permission for funds to be transferred to the scammer. She carried out the transfer because the way banks commonly authenticate themselves to customers they call (proving that they know your account details) was unreliable, because the recipient bank didn’t check the account name, because bank fraud-detection mechanisms didn’t catch the suspicious nature of the transactions, and because the bank’s authentication device is too confusing to use safely. When the security of the payment system is fully under the control of the banks, why is the customer held liable when a person acting with reasonable care could easily have done the same as Jane?

Another question is whether banks do enough to recover funds lost through scams such as this. The programme featured an interview with barrister Gideon Roseman, who quickly obtained court orders allowing him to recover most of his funds lost through a similar scam. Interestingly, a side effect of the court orders was that he discovered his bank, Barclays, had waited more than 24 hours after learning about the fraud before acting to stop the stolen money being transferred out. After being caught out, Barclays refunded Gideon the affected funds, but in cases where the victim isn’t a barrister specialising in exactly these sorts of disputes, do the banks do all they can to recover stolen money?

In order to give banks proper incentives to prevent push payment fraud where possible, and to recover stolen funds in the remaining cases, Which? called for the Payment Systems Regulator to make banks liable for push payment fraud, just as they are for pull payments. I agree, and expect that if this were the case, banks would implement the kind of innovative fraud prevention mechanisms against push payment fraud that we currently only see for credit and debit transactions. I also argued that in implementing the revised Payment Services Directive, the European Banking Authority should require that banks provide evidence that a customer was aware of the nature of the transaction and gave informed consent before they can hold the customer liable. Unfortunately, both the Payment Systems Regulator and the European Banking Authority conceded to the banking industry’s request to maintain the current poor state of consumer protection.

The programme concluded with security advice, as usual. Some was actively misleading, such as the claim by NatWest that banks will never ask customers to transfer money between their accounts for security reasons. My bank called me to ask that I transfer money from my current account to my savings account for precisely this reason (I called them back to confirm it really was them). Some advice was vague and not actionable (e.g. “be vigilant” – in response to a case where the victim was extremely cautious and still got caught out). Probably the most helpful recommendation is that if a bank supposedly calls you, wait 5 minutes and call them back using the number on a printed statement or card, preferably from a different phone. Alternatively, stick to using cheques – they are slow, and banks discourage their use (because they are expensive to process), but they are much safer for the customer. However, such advice should not be considered an alternative to pushing liability back where it belongs – with the banks – which would not only reduce fraud but also protect vulnerable customers.