“What’s up, TikTok,” says Tom Cruise with his signature grin, putting on sunglasses and a hat and picking up his golf club. “You guys cool if I play some sports?”
In other videos posted by @deeptomcruise, the actor performs a magic trick, tells an anecdote about a former Soviet president, and offers tips on industrial cleaning.
Except Tom Cruise isn’t really on TikTok. The videos, created by VFX specialist Chris Ume and actor Miles Fisher, are eerily convincing deepfakes – media in which artificial intelligence is used to replace the face of one person with another’s likeness.
Fake or manipulated content has been around for a long time, but advancements in computer processing power over the last few years have allowed people to apply powerful machine learning techniques to images, videos and audio.
In simple terms, deepfakes are made by training computer systems on large volumes of data, which the systems then use to reproduce images or sound clips of people – rather than, say, a human artist building a 3D model by hand.
What makes these particularly effective is that they’re often made using something called a generative adversarial network (GAN), in which two computer systems are pitted against each other in a kind of game. One network creates new data – an image of a human face, for example – and the other tries to identify whether or not it’s a fake. The first network keeps trying to ‘trick’ the second, continuously improving its approach to create more convincing images.
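To make the adversarial loop concrete, here is a deliberately tiny, hypothetical sketch in NumPy. Instead of images, the "real" data are just numbers drawn from a target distribution; the generator is a one-parameter-pair function of random noise, and the discriminator is a logistic classifier. None of this comes from a real deepfake system – it only illustrates the back-and-forth game described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a normal distribution centred on 4.0.
def sample_real(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator: turns noise z into fake samples g(z) = gw*z + gb.
gw, gb = 1.0, 0.0
# Discriminator: logistic regression, outputs P(sample is real).
dw, db = 0.1, 0.0

def discriminate(x, dw, db):
    logit = np.clip(dw * x + db, -30.0, 30.0)  # clip to avoid overflow
    return 1.0 / (1.0 + np.exp(-logit))

lr, n = 0.01, 64
for step in range(2000):
    # --- discriminator turn: learn to label real=1, fake=0 ---
    z = rng.normal(size=n)
    fake = gw * z + gb
    real = sample_real(n)
    for x, y in ((real, 1.0), (fake, 0.0)):
        p = discriminate(x, dw, db)
        grad = p - y                    # gradient of cross-entropy wrt logit
        dw -= lr * np.mean(grad * x)
        db -= lr * np.mean(grad)

    # --- generator turn: adjust gw, gb so fakes get labelled "real" ---
    z = rng.normal(size=n)
    fake = gw * z + gb
    p = discriminate(fake, dw, db)
    grad_logit = p - 1.0                # generator wants the label 1
    gw -= lr * np.mean(grad_logit * dw * z)
    gb -= lr * np.mean(grad_logit * dw)

print(round(float(gb), 2))  # gb drifts toward the real data's mean
```

After training, the generator's offset `gb` has been pushed toward the real distribution's centre purely by trying to fool the discriminator – the same pressure that, at vastly larger scale, makes GAN-generated faces look plausible.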
These techniques have become both more advanced and more accessible in recent years. From reanimating the dead to meme-worthy lip-syncing, almost anyone can create a deepfake with an app on their smartphone, or by downloading some software and following the steps.
There are plenty of potential benefits to this technology. We’ve already seen actors brought back to life or de-aged in franchises like Star Wars, but deepfakes could also be used to translate videos into different languages, improve accessibility tools, and provide anonymity to those who need it.
But the dangers are also clear, as highlighted by Channel 4 in its ‘alternative’ Queen’s Speech broadcast on Christmas Day last year, and by Jordan Peele in a fake Barack Obama PSA in 2018.
Rising concerns about the potential for deepfakes to be used in both personal and political attacks have led the Law Commission to review criminal law in this area. In the US, the FBI recently issued a warning that synthetic content would “almost certainly” be used by malicious actors “for cyber and foreign influence operations in the next 12-18 months”.
This isn’t just a worrying sign of things to come. Deepfakes are already being used for malicious purposes, including financial and corporate fraud.
In 2019, security firm Symantec said it had seen three cases in which fake audio was used to trick senior financial controllers into releasing large amounts of cash. Around the same time, the Wall Street Journal reported that an audio deepfake was used to convince the CEO of a UK energy company to transfer €220,000 to scammers, who posed as the chief executive of the firm’s parent company, convincingly replicating his accent and speech patterns.
This technology could also be used to make existing fraud schemes more convincing and harder to defend against, by reproducing another person’s likeness to steal their identity, or even generating a false identity using images of someone who doesn’t exist in real life.
So as the threat of AI-powered fraud increases, how can accountants and their clients protect themselves?
At least for the time being, one answer is to look out for tell-tale signs and inconsistencies in any video or audio you receive. Most deepfake technology isn’t yet sophisticated enough to produce truly convincing footage – something as advanced as Ume’s Tom Cruise takes hours of specialist work, with the help of a professional impersonator.
Unless a lot of time and money has been put into them, most fraudulent videos are going to look slightly off, with unnatural movement, out-of-sync speech, and digital artifacts or noise.
But as technology advances, the videos being used for scam purposes could end up being almost impossible to tell apart from real footage, so you can’t assume that you’ll always be able to spot the fakes. In some cases, fake audio could also be harder to spot, as in the case of the energy company scam.
Besides, scams are usually designed to rush their victims with a strict deadline and threats of negative consequences, making it hard to think clearly and critically about whether or not the message is authentic.
So, to protect your own firm and your clients, you’ll need to focus on your security and authentication procedures.
Build in checks that you and your team complete every time you release financial or other sensitive information, even if you think you know who you’re talking to, and make sure identity is always verified. This might be as simple as taking the time to call someone back if they’ve left a request in a voicemail for you.
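The call-back rule above can be reduced to one principle: never verify a request using contact details supplied in the request itself – use the details you already hold on file. Here is a minimal, hypothetical sketch of that rule; every name and number in it is made up for illustration.

```python
# Contact details from your own records, kept independently of any
# incoming request (all values here are hypothetical examples).
CONTACTS_ON_FILE = {
    "ceo@parentco.example": "+44 20 7946 0000",
}

def number_to_call_back(sender):
    """Return the on-file number to verify a request from `sender`,
    or None if the sender is unknown (escalate manually instead).
    Deliberately ignores any 'direct line' included in the request,
    since a scammer controls any number they hand you."""
    return CONTACTS_ON_FILE.get(sender)
```

The design choice is the point: even if a voicemail or email helpfully offers a number to call, the check discards it and relies only on records the scammer never touched.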
Training is also key to make sure your staff are clued up on the kinds of fraud threats they might face, and that they know what to do if they spot a potential scam.
Having robust systems in place should help you defend against fraud threats, however hard they are to spot.
Ready to start your journey to security now?
Capium has transformed 1,300+ accounting firms with unique, innovative, 100% cloud-based, 100% flexible accounting software. Security is at the heart of what we do: we are hosted on Microsoft Azure servers based in the UK. We also offer an AML solution to keep you safe and compliant in real time. Our AML feature takes the pressure off by checking prospective clients in an instant and ensuring you’re in line with the latest AML directives.
Like what you hear? Book a demo today to find out more about our services, or talk to one of our product specialists.