Imagine you’ve been accused of doing something, but it wasn’t you. It really wasn’t, but despite your protests, no one believes you. They may have an audio recording of you saying something, or even compromising video footage.
This reality could soon be coming to a workplace near you, especially as deepfake technology is becoming more sophisticated, and more commonplace. Artificial intelligence and machine learning can train a computer to speak like you, and act like you. And no one may be able to tell the difference between you and the fake you.
Awareness of the potential of this technology is still low, however. “If you do not realise this can happen, you are vulnerable,” says Joseph Steinberg, an author and cybersecurity expert witness and advisor.
For treasurers this has serious implications, especially if you are acting on instructions taken over the phone. Unfortunately, some have already fallen into this trap and sent massive sums to the wrong account – all because they were fooled by the voice on the other end of the line.
One notable case, from 2020, involved a company based in the UAE. A bank branch manager in Hong Kong received a call from the company’s director – or so he thought – who relayed good news: his company was buying another and he needed US$35m to complete the transaction, according to news reports. The banker knew this person and had spoken to him before. He took him at his word, and could see that the lawyer the director mentioned, a ‘Martin Zelner’, had also sent him a number of emails about the deal. So the manager began transferring the money, which was soon routed to accounts around the world through a sophisticated money laundering network involving at least 17 people, according to court documents related to the case.
Nor was this the first time it had happened. In 2019 a UK-based energy company fell victim to a similar scam, and the Washington Post reported how a managing director transferred US$243,000 on the instructions of what he thought was the CEO. The deepfake was so good that he genuinely believed it was him. The company’s insurer was reported as saying, “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent.” It was only when the fraudsters called back to request a second transfer that he became suspicious and phoned the real CEO. The victim was then reportedly in the strange situation of speaking to the real ‘Johannes’ at the same time as the ‘fake Johannes’ was issuing instructions about another transfer.
The cybersecurity company Symantec has also noted similar cases. Previously, audio would have been spliced together and left as a voicemail, but these days deepfakes work in real time. They are trained on audio or video footage that is already in the public domain – such as a CEO’s conference presentations, earnings calls or media interviews. When the criminal calls the victim, they can hold a live conversation by typing their responses into the computer, which then speaks those sentences in the CEO’s synthetic voice.
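To illustrate how accessible this has become, the sketch below shows how a voice might be cloned from a short public recording using an open-source text-to-speech library. It is a minimal sketch, assuming the Coqui TTS library and its documented XTTS voice-cloning model; the file names are hypothetical placeholders, and a real attack would wire this into a live typing-to-speech loop of the kind Symantec describes.

```python
# Illustrative sketch only: voice cloning with the open-source Coqui TTS
# library (XTTS v2 model). File names below are hypothetical placeholders.
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of public audio (e.g. an earnings-call clip) serve as the
# reference voice; the typed text is then spoken in that synthetic voice.
tts.tts_to_file(
    text="Good news. Please release the funds for the acquisition today.",
    speaker_wav="ceo_earnings_call_clip.wav",  # hypothetical reference sample
    language="en",
    file_path="synthetic_voice_reply.wav",
)
```

The point is not the specific library but the level of effort involved: given publicly available recordings, producing a passable synthetic voice is now a matter of minutes rather than a specialist undertaking, which is why verification procedures matter more than a well-trained ear.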
For now, however, it is unlikely that corporate treasurers or CFOs would be targeted with deepfake video, or perhaps even audio, if they don’t have a high public profile. Kelley Sayler, a US-based expert in advanced technology, comments, “While deepfakes are growing in sophistication, they’re generally unable to consistently fool untrained viewers. Creating a convincing deepfake video, for example, would likely require a tremendous number of image and voice samples on which the systems that create deepfakes can be trained. For that reason, it’s usually much easier to create deepfakes of public figures who have been photographed or recorded thousands of times.”
Sayler adds that she’s not aware of any systems that can currently generate consistently convincing deepfakes of private citizens, or even of public-facing individuals for whom there are limited image and voice samples. Given this, corporate treasurers, and perhaps lesser-known CEOs, are unlikely to be the subject of deepfake videos. Audio technology, however, is much more likely to go undetected. Sayler continues, “It would likely be easier, given the state of today’s technology, to fool someone with a social engineering attack such as voice impersonation.”
CFOs and corporate treasurers, however, will always be a target for criminals because of the nature of their role. Hank Schless, Senior Manager of security solutions at data-centric cloud security company Lookout, comments that their direct line into an organisation’s finances makes them an attractive target. “The majority of cyber attacks are financially motivated, and attackers see people in these roles as the most direct route to their end goal,” he says. Schless notes that a common method of attack is voice communication, and that the call will typically carry a strong sense of urgency.
For treasurers and CFOs, the financial risk of deepfake technology is the most obvious. But, as Steinberg comments, “Deepfakes can cause a lot of problems, not just financial.”
There are wider issues around how deepfakes can be used as evidence – or fake evidence – of crimes. Problems could arise, for example, if a subordinate is asked to do something illegal by a (fake) superior, or if a (fake) CEO appears to have done something and the only evidence is witnesses testifying that it was him – because they spoke to him. Steinberg argues that this kind of scenario is worse than financial fraud; at least fraudulent transactions can be traced, and hopefully reversed.
But if you have been accused of doing something illegal – when, in fact, it was a synthetic version of you – your only defence is ‘I did not make that call’. “What are you going to do?” asks Steinberg. That’s quite a question, and one the law courts are not well versed in. At the moment, in court, a witness’s evidence of ‘Yes – it was him, I spoke to him’ would be taken at face value. And if the defence is that it was a deepfake, a judge may not believe it, because judges are not yet familiar with what the technology can do.
This is just one of the scenarios in which deepfake technology can be applied. According to cybersecurity firm Panda Security, there are three main threats to businesses from deepfake technology. Top of the list is fraud – as with the UAE company or the fake Johannes. Next are fabricated remarks, where audio or video makes an executive appear to say or do something they didn’t, which could massively damage a company’s reputation. And thirdly there is extortion, where the executive’s image could be grafted onto pornographic material, for example, and used for blackmail.
As the technology grows more sophisticated, this kind of threat is becoming more commonplace. “We’re all familiar with the deepfake videos that are used for parody across entertainment outlets, but this seemingly harmless technology party trick can actually be used in very malicious ways,” says Schless at Lookout.
In terms of capability, Steinberg explains that at the moment not everyone who wants to mount such an attack can do so, but criminals with resources – and access to these capabilities – can launch targeted attacks. “You can do it today with audio. Video is harder, but that will get there,” says Steinberg.
Aarti Samani, Senior Vice President of Product & Marketing at iProov – which has developed liveness detection technology to combat deepfakes – comments on the rate at which deepfakes are improving. “Firstly, deepfake videos posted online are doubling year-on-year. As deepfakes are AI-based, the more data it has, the smarter it gets. This means deepfakes can generate a more realistic likeness, match mannerisms and expressions in videos. As more deepfakes are being created and posted online, they become more sophisticated.”
Samani continues, “This also makes them more scalable. A criminal attempting to impersonate a victim using a physical mask has to go to considerable lengths to be successful. Alternatively, deepfake technology requires a much smaller amount of effort to cause a significant amount of financial damage.”
So what can corporate treasurers and CFOs do to protect themselves? The first step is to at least be aware of the problem, something that is lacking at the moment. Steinberg comments that security personnel are aware of the potential of this kind of technology, as are senior leaders who manage large transactions, but the average employee isn’t aware.