Risk Management

Deepfake technology poses security risks for treasurers

Published: Aug 2022

Deepfakes – synthetic media that use artificial intelligence to imitate a person – are becoming more sophisticated and widespread. Deepfake videos of Barack Obama have gone viral, and one even showed the UK’s Queen dancing, but how this technology can be used to target treasurers is no joke.

Imagine you’ve been accused of doing something, but it wasn’t you. It really wasn’t, but despite your protests, no one believes you. They may have an audio recording of you saying something, or even compromising video footage.

This reality could soon be coming to a workplace near you, especially as deepfake technology is becoming more sophisticated, and more commonplace. Artificial intelligence and machine learning can train a computer to speak like you, and act like you. And no one may be able to tell the difference between you and the fake you.

Awareness of the potential of this technology is still low, however. “If you do not realise this can happen, you are vulnerable,” says Joseph Steinberg, an author and cybersecurity expert witness and advisor.

For treasurers this has serious implications, especially if you are acting on instructions taken over the phone. Unfortunately, some have already fallen into this trap and sent massive sums to the wrong account – all because they were fooled by the voice on the other end of the line.

One notable case, from 2020, involved a company based in the UAE. A bank branch manager in Hong Kong received a call from the company’s director – or so he thought – who relayed good news: his company was buying another and he needed US$35m to complete the transaction, according to news reports. The banker knew this person and had spoken to him before. He took him at his word, and could see that the lawyer the director mentioned, a ‘Martin Zelner’, had also sent him a number of emails about the deal. So the manager started to transfer the money, which was soon routed to accounts all over the world through a sophisticated money laundering network involving at least 17 people, according to court documents related to the case.

Nor was this the first time it had happened. In 2019 a UK-based energy company fell victim to a similar scam, and the Washington Post reported how a managing director transferred US$243,000 on the instructions of what he thought was the CEO. The deepfake was so good that he really thought it was him. The company’s insurer was reported as saying, “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent.” It was only when the fraudsters called back to attempt a second transfer that the company became suspicious and called the real CEO. The victim was then reportedly in the strange situation of speaking to the real ‘Johannes’ at the same time as the ‘fake Johannes’ was giving instructions about another transfer.

The cybersecurity company Symantec has also noted similar cases. Previously, audio would have been edited together and left as a voicemail, but these days deepfakes work in real time. They are trained on audio – or video – footage already in the public domain, such as a CEO’s conference presentations, earnings calls or media interviews. When the criminal calls the victim, they can hold a live conversation by typing their responses into the computer, which then speaks those sentences in the synthetic voice of the CEO.

For now, however, it is unlikely that corporate treasurers or CFOs would be the target of video technology, or perhaps even audio, if they don’t have a high public profile. Kelley Sayler, a US-based expert in advanced technology, comments, “While deepfakes are growing in sophistication, they’re generally unable to consistently fool untrained viewers. Creating a convincing deep fake video, for example, would likely require a tremendous number of image and voice samples on which the systems that create deepfakes can be trained. For that reason, it’s usually much easier to create deepfakes of public figures who have been photographed or recorded thousands of times.”

Sayler adds that she’s not aware of any systems that can currently generate consistently convincing deepfakes of private citizens, or even of public-facing individuals for whom there are limited image and voice samples. Given this, corporate treasurers, and perhaps even lesser-known CEOs, are unlikely to be the subject of deepfake videos. Audio technology, however, is much more likely to go undetected. Sayler continues, “It would likely be easier, given the state of today’s technology, to fool someone with a social engineering attack such as voice impersonation.”

CFOs and corporate treasurers, however, will always be a target for criminals because of the nature of their role. Hank Schless, Senior Manager of security solutions at data-centric cloud security company Lookout, comments that their direct line into an organisation’s finances makes them an attractive target. “The majority of cyber attacks are financially motivated, and attackers see people in these roles as the most direct route to their end goal,” he says. Schless notes that the most common method of attack is voice communication, and that the call will typically carry a high sense of urgency.

For treasurers and CFOs, the financial risk is the most obvious one posed by deepfake technology. But, as Steinberg comments, “Deepfakes can cause a lot of problems, not just financial.”

There are wider issues around how deepfakes can be used as fake evidence of crimes. Problems could arise, for example, if a subordinate is asked to do something illegal by a (fake) superior, or if a (fake) CEO appears to have done something and the only evidence is witnesses who testify that it was him – because they spoke to him. Steinberg argues that this kind of scenario is worse than financial fraud; at least fraudulent transactions can be traced, and hopefully reversed.

But if you have been accused of doing something illegal when, in fact, it was a synthetic version of yourself, your only defence is ‘I did not make the call’. “What are you going to do?” asks Steinberg. It is quite a question, and one the law courts are not well versed in. At the moment, a witness’s testimony of ‘Yes – it was him, I spoke to him’ would be taken at face value. And if the defence is that it was a deepfake, a judge may not believe it, because judges are not yet familiar with what the technology can do.

This is just one of the scenarios in which deepfake technology can be applied. According to cybersecurity firm Panda Security, deepfake technology poses three main threats to businesses. Top of the list is fraud, as with the UAE company or the fake Johannes. Next is fabricated remarks, where audio or video impersonates an executive saying or doing something they didn’t, which could massively damage a company’s reputation. And third is extortion, where an executive’s image could be grafted onto pornographic material, for example, and used for blackmail.

The technology is becoming more sophisticated, so this kind of threat is becoming more commonplace. “We’re all familiar with the deepfake videos that are used for parody across entertainment outlets, but this seemingly harmless technology party trick can actually be used in very malicious ways,” says Schless at Lookout.

In terms of capability, Steinberg explains that, at the moment, not everyone who wants to do this can, but criminals with resources – access to these capabilities – can launch targeted attacks. “You can do it today with audio. Video is harder, but that will get there,” says Steinberg.

Aarti Samani, Senior Vice President Product & Marketing, iProov – which has developed liveness detection technology to combat deepfakes – comments on the rate at which deepfakes are improving. “Firstly, deepfake videos posted online are doubling year-on-year. As deepfakes are AI-based, the more data it has, the smarter it gets. This means deepfakes can generate a more realistic likeness, match mannerisms and expressions in videos. As more deepfakes are being created and posted online, they become more sophisticated.”

Samani continues, “This also makes them more scalable. A criminal attempting to impersonate a victim using a physical mask has to go to considerable lengths to be successful. Alternatively, deepfake technology requires a much smaller amount of effort to cause a significant amount of financial damage.”

So what can corporate treasurers and CFOs do to protect themselves? The first step is to at least be aware of the problem, something that is lacking at the moment. Steinberg comments that security personnel are aware of the potential of this kind of technology, as are senior leaders who manage large transactions, but the average employee isn’t aware.

More education is needed to raise awareness of the problem, and to give guidance on how to spot a deepfake. Panda Security notes that 80% of organisations acknowledge the threat of deepfakes, but fewer than 30% have taken action. The cybersecurity firm gives some tips on how to detect deepfakes. With audio, these include an unnatural cadence, a robotic tone or poor audio quality. With video, the things to look for include lip movements that are out of sync with the voice, unnatural blinking or movement, and unexpected changes in lighting or skin tone. However, given that deepfake technology is getting more sophisticated, more layers are needed to protect corporates and their employees.

“Different things can be done but the bottom line is that the simple thing is awareness,” says Steinberg. One action that can be taken is to ensure that specialised approval is needed for any transaction that deviates from the norm. Steinberg suggests there should be a dedicated method of communication, say between the CEO and CFO on a phone line that is used only for that purpose, and that only they and the security team know about. “Then if you get a message in the normal way [to complete a high-value transaction], you know it is not good,” says Steinberg.
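To see how such a rule might look in practice, here is a minimal sketch in Python of the ‘deviates from the norm’ check Steinberg describes. The channel name, threshold and helper function are illustrative assumptions, not anything he prescribes or a description of a real treasury system.

```python
# Illustrative sketch only: the approved-channel name, the threshold and this
# helper are hypothetical, not a real treasury system's API.

APPROVED_CHANNELS = {"dedicated_ceo_cfo_line"}  # the out-of-band channel only the parties and security know about
NORMAL_LIMIT_USD = 250_000                      # assumed ceiling for "routine" transfers

def requires_extra_approval(amount_usd: float, channel: str, payee_is_new: bool) -> bool:
    """Flag any instruction that deviates from the norm or arrives over an
    ordinary channel (phone call, email, chat) instead of the dedicated line."""
    if channel not in APPROVED_CHANNELS:
        return True
    return amount_usd > NORMAL_LIMIT_USD or payee_is_new

# A US$35m request received over a normal phone call is escalated,
# never executed on the strength of a familiar-sounding voice alone.
print(requires_extra_approval(35_000_000, "inbound_phone_call", payee_is_new=True))  # True
```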

Steinberg notes that the longer you speak to a deepfake, the more likely it is that you’ll realise it’s not real. There may be some shared past, or inside jokes, that you have with the person that get referenced in a way that doesn’t ring true.

Something else that should raise red flags is the sense of urgency. Schless comments, “When it comes to social engineering through deepfakes, phishing, or other tactics, attackers will almost always try to create a high sense of urgency to get their target to act without thinking.” He notes that it is best to operate with a high level of scepticism when receiving any sort of communication that asks you to access or make an unusual transfer, share your login information, or access a certain website.
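Written down, that scepticism can become a repeatable checklist rather than a gut feeling. The sketch below turns Schless’s red flags into explicit checks; the flag names, wording and ‘pause and verify’ recommendation are assumptions made for illustration.

```python
# Hypothetical checklist built from the red flags described above;
# the flag names and the recommendation text are assumptions for illustration.

RED_FLAGS = {
    "high_urgency": "caller insists the action must happen immediately",
    "unusual_transfer": "request deviates from normal payment patterns",
    "credential_request": "asks you to share login information",
    "unfamiliar_website": "directs you to a site you would not normally use",
}

def review_request(observed_flags: set) -> str:
    """Recommend a pause whenever any red flag is present."""
    hits = sorted(set(RED_FLAGS) & set(observed_flags))
    if hits:
        return "PAUSE and verify out of band: " + "; ".join(RED_FLAGS[h] for h in hits)
    return "proceed under normal controls"

print(review_request({"high_urgency", "unusual_transfer"}))
```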

“In addition,” says Schless, “organisations need to be sure they’re adding an additional layer of protection to any sensitive data, especially financial data.” He adds, “By leveraging a security platform that can detect risky behaviour such as anomalous logins, protect data from being exfiltrated, and encrypt any sensitive information as soon as it’s copied or moved, organisations can keep themselves safe from a serious financial data breach.”

Another precaution is to have a protocol in place for verifying suspicious communications. On the topic of protecting against such attacks, Sayler comments that “Traditional security practices should be in place to ward against both social engineering and deepfake attacks. Financial institutions could agree, for example, that requests to transfer funds are to be made only over pre-approved, encrypted communications channels or that any transfer requests are to be confirmed by a return call to a pre-determined secure line. Corporate treasurers and chief financial officers should be mindful of deepfakes and develop plans and procedures for exchanging authenticated communications with both internal and external audiences.”
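The essential property of the callback protocol Sayler outlines is that the confirmation call always goes to a number agreed and recorded in advance, never to whatever number the suspicious request arrived from or offers. A minimal sketch, assuming a hypothetical pre-approved directory and a stand-in place_call helper:

```python
# Hypothetical sketch of callback verification; the directory and place_call()
# are illustrative stand-ins, not a real product or API.

PRE_APPROVED_DIRECTORY = {
    "cfo": "+44 20 7946 0123",  # placeholder number agreed in advance over a trusted channel
}

def place_call(number: str) -> bool:
    """Stand-in for a real confirmation call on the pre-determined secure line."""
    print(f"Calling pre-approved line {number} to confirm the instruction...")
    return True  # in practice the human conversation decides the outcome

def confirm_transfer(requester_role: str, number_offered_by_caller: str) -> bool:
    """Confirm only via the pre-determined line; the number offered during
    the suspicious call is deliberately ignored."""
    trusted_number = PRE_APPROVED_DIRECTORY.get(requester_role)
    if trusted_number is None:
        return False                    # no agreed channel, so no transfer
    return place_call(trusted_number)   # never number_offered_by_caller

confirm_transfer("cfo", number_offered_by_caller="+971 4 000 0000")
```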

In the future, it is possible that those making large transactions will not rely on audio communication and will instead use video calls that incorporate a ‘proof of liveness’ test to ensure they are not speaking to a deepfake. Samani at iProov comments, “The use of video calls combined with proof of liveness or authentication is definitely something that we may see rolling out in the future to combat deepfakes and ensure the transfer of funds remains secure. After all, as the AI gets smarter, deepfakes are only getting more advanced and trickier to detect.”

She adds: “Undoubtedly, the levels of security will differ in line with the risk level of the transaction. A low-risk interaction involving internal colleagues, for example, might just ask a user to authenticate using basic credentials. A higher risk transaction however, where external agencies and/or a large transaction amount are involved, may require more advanced identity verification that compares a user with a trusted identity document, such as a passport or driving licence, for that additional security layer.”

Samani continues, “Regardless of the security level deployed, liveness verification solutions are invaluable in ensuring any digital interactions can verify exchanges are happening with the right person, the real person and in real time – not a photo or mask, a bot or a bad actor, or a video replay or image.”
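Samani’s earlier point about matching the level of verification to the risk of the transaction can be pictured as a simple tiering rule. The tiers, threshold and function below are illustrative assumptions only, not a description of iProov’s products.

```python
# Illustrative risk-tiering sketch; the thresholds and tier names are assumptions.

def required_verification(amount_usd: float, external_party: bool) -> str:
    """Map a transaction's risk profile to a verification level, echoing the
    low-risk / high-risk split described above."""
    if external_party or amount_usd >= 1_000_000:
        # higher risk: compare the user against a trusted identity document
        return "document-backed identity verification with liveness check"
    if amount_usd >= 50_000:
        return "biometric authentication with liveness check"
    return "basic credentials"

print(required_verification(25_000, external_party=False))    # basic credentials
print(required_verification(5_000_000, external_party=True))  # document-backed verification
```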

Samani also has three top tips for treasurers and CFOs on how to protect themselves. Firstly, they need to be vigilant and have a healthy dose of scepticism about new transaction requests. Secondly, “employing the use of multi-factor authentication is a must. Biometrics provide a very high level of assurance as a part of this process as an ‘unshareable credential’.” And finally, she advises treasurers to “take an ‘always on’ approach to monitoring for new risks, and ensure this is acted upon by frequently updating policies and procedures around financial transactions.”

Steinberg comments that combating the technology is essentially a human problem: it is, after all, human foolishness that allows people to be tricked by deepfakes. He argues that the more layers of protection companies put in place – whatever they may be – the more likely criminals are to move on and target other, lower-hanging fruit.
