TL;DR
- Early deepfakes were low-quality audio, video or images falsified by amateurs for entertainment. This digital manipulation has matured to produce media that is often indiscernible from the genuine article to the naked eye, and deepfakes are being called the most dangerous form of cybercrime.
- Advanced editing technology is making it easier for anyone to change online avatars or create synthetic personas.
- Companies are advised to educate their staff to identify and red-flag deepfake activity.
READ MORE: Deepfakes: Get ready for phishing 2.0 (Fast Company)
By 2023, 20% of all account takeover attacks will make use of deepfake technology, consultancy Gartner predicts in a new report. It’s time organizations recognized this threat and raised employee awareness because synthetic media is here to stay and will certainly become more realistic and widespread.
READ MORE: Gartner: The future of AI is not as rosy as some might think (TechRepublic)
“While deepfakes may have started out as a harmless form of entertainment, cybercriminals are using this technology to carry out phishing attacks, identity theft, financial fraud, information manipulation, and political unrest,” warns Stu Sjouwerman, founder and CEO of security awareness trainer KnowBe4.
According to the Information Security Forum (ISF), criminals can easily manipulate video, swap faces, change expressions or synthesize speech to defraud and misinform individuals and companies.
“What’s more, people are being bombarded with information and it’s becoming increasingly difficult to distinguish between what’s real and what’s fake,” it warns.
All the elements necessary for the widespread, malicious use of deepfake technology are readily available in underground markets and forums, and the source code for many deepfake tools is public.
“Advanced editing technology, once the exclusive domain of the movie industry, is now available to the average internet Joe,” says the ISF. “Anyone can download a mobile phone app, pose as a celebrity, de-age themselves, or add realistic visual effects that can spruce up their online avatars and virtual identities.”
READ MORE: The Threat Of Deepfakes And Their Security Implications (Information Security Forum)
Sjouwerman reports that in online forums, criminal organizations routinely discuss how they can use deepfakes to increase the effectiveness of their malicious social engineering campaigns.
No one is immune. Even Elon Musk fell prey to a deepfake video of himself promoting a crypto scam that went viral on social media.
READ MORE: Deepfake Video of Elon Musk Promoting Crypto Scam Goes Viral (Decrypt)
The video was a bid to lure viewers into depositing funds with a fraudulent cryptocurrency trading platform.
In 2020, fraudsters used AI voice cloning technology to trick a bank manager into initiating wire transfers worth $35 million. Deepfakes can also be leveraged as a strategic tool for spreading disinformation, manipulating public opinion, stirring civil unrest and fueling political polarization. In one recent example, a deepfake video of Ukrainian president Volodymyr Zelensky urging Ukrainians to lay down their arms was broadcast on hacked Ukrainian TV.

Fake evidence built with deepfakes can even be planted in a court of law: in a UK custody battle, doctored audio files were submitted to the court as evidence.
READ MORE: Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find (Forbes)
READ MORE: ‘Deepfake’ video technology has sinister implications for governments, businesses and individuals. It isn’t too late to act (Toronto Star)
READ MORE: Doctored audio evidence used to damn father in custody battle (The Telegraph)
So how can organizations protect themselves against such attacks? Sjouwerman lays out some advice. The key to mitigating deepfake risks, he says, is to nurture and improve employees’ cybersecurity instincts and strengthen the organization’s overall cybersecurity culture.
Perhaps the best advice, then, is to run security awareness training sessions that ensure employees understand their cybersecurity responsibilities and accountability.
Employees can be trained to watch for visual cues such as distortions and inconsistencies in images and video, unnatural head or torso movements, and lip-sync issues between the face and any associated audio.
Other tips can help, too. When video conferencing, try a simple trick: ask the participant to wave their hands in front of their face or turn their profile to the camera. If the feed is a deepfake, the occlusion or the unusual angle will expose quality issues with the superimposed face. A rough way to automate this kind of visual check is sketched after the links below.
READ MORE: How to spot a deepfake? One simple trick is all you need (ZDNET)
READ MORE: Business execs are facing a new threat: the person talking on Zoom might be an A.I.-generated ‘deep fake.’ Here are some easy ways to tell if you’re talking to a fake. (Fortune)
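For teams that want to triage recorded video at scale, some of these visual cues can be roughly automated. The following is a minimal, illustrative Python sketch rather than a production detector: it flags frames where the sharpness of a detected face diverges strongly from the background, one common blending artifact of face-swap deepfakes. The Haar cascade is a stock model shipped with OpenCV; the threshold and file name are assumptions for illustration only.

```python
# Illustrative sketch (not a production detector): compare image sharpness
# inside a detected face region against the rest of the frame. Face-swap
# deepfakes blend a rendered face back into the source video, which can
# leave a sharpness/noise mismatch around the face.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def flag_suspicious_frames(video_path: str, ratio: float = 3.0):
    """Yield (frame_index, face_sharpness, background_sharpness) for frames
    where the two measures diverge by more than `ratio` in either direction.
    The ratio of 3.0 is an arbitrary starting point, not a tuned value."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)  # variance = blur/sharpness proxy
        for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
            mask = np.zeros(gray.shape, dtype=bool)
            mask[y:y + h, x:x + w] = True
            face_var, bg_var = lap[mask].var(), lap[~mask].var()
            low, high = sorted((face_var, bg_var))
            if low > 0 and high / low > ratio:
                yield idx, face_var, bg_var
        idx += 1
    cap.release()

if __name__ == "__main__":
    # "meeting_clip.mp4" is a placeholder path for illustration.
    for i, fv, bv in flag_suspicious_frames("meeting_clip.mp4"):
        print(f"frame {i}: face sharpness {fv:.1f} vs background {bv:.1f}")
```

Dedicated detection tools rely on trained neural classifiers rather than single-frame heuristics, so a sketch like this is best treated as a way to queue suspect clips for human review.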
You can also deploy technologies like phishing-resistant multi-factor authentication (MFA) and a zero-trust architecture to reduce the risk of identity fraud. A sketch of what makes such MFA phishing resistant follows the links below.
READ MORE: What is Phishing Resistant MFA? (SANS)
READ MORE: How to Navigate the Mitigation of Deepfakes (Dark Reading)
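What makes MFA "phishing resistant" is origin binding: with WebAuthn/FIDO2, the browser embeds the page's origin in the signed client data, so a credential captured on a look-alike domain fails verification at the real site. The minimal Python sketch below shows only that origin and challenge check; the function name and EXPECTED_ORIGIN value are hypothetical, and a real relying party would additionally verify the authenticator's cryptographic signature (for example, with Yubico's python-fido2 library).

```python
# Minimal sketch (assumed names throughout) of the origin check behind
# phishing-resistant WebAuthn/FIDO2 MFA. The browser, not the user, writes
# the page origin into clientDataJSON before the authenticator signs it,
# so an assertion relayed from a look-alike phishing domain fails here.
import base64
import json

EXPECTED_ORIGIN = "https://accounts.example.com"  # hypothetical relying party

def check_client_data(client_data_b64url: str, expected_challenge: str) -> None:
    """Raise ValueError if the signed client data came from the wrong origin,
    used a stale or foreign challenge, or is the wrong ceremony type."""
    padded = client_data_b64url + "=" * (-len(client_data_b64url) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    if client_data.get("origin") != EXPECTED_ORIGIN:
        raise ValueError(f"origin mismatch: {client_data.get('origin')!r}")
    if client_data.get("challenge") != expected_challenge:
        raise ValueError("challenge mismatch (possible replay)")
    if client_data.get("type") != "webauthn.get":
        raise ValueError("unexpected ceremony type")

# Contrast with one-time codes: a victim can be tricked into typing an OTP
# into a fake page, but an assertion minted on a look-alike domain carries
# that domain's origin in its signed payload and is rejected by this check.
```

This origin binding is exactly what a deepfake-assisted phishing lure cannot fake: however convincing the impersonation, any credential it harvests is cryptographically tied to the wrong site.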
EXPLORING ARTIFICIAL INTELLIGENCE:
With nearly half of all media and media tech companies incorporating artificial intelligence into their operations or product lines, AI and machine learning tools are rapidly transforming content creation, delivery and consumption. Find out what you need to know with these essential insights curated from the NAB Amplify archives:
- AI Is Going Hard and It’s Going to Change Everything
- Thinking About AI (While AI Is Thinking About Everything)
- If AI Ethics Are So Important, Why Aren’t We Talking About Them?
- Superhumachine: The Debate at the Center of Deep Learning
- Deepfake AI: Broadcast Applications and Implications