TL;DR
- With new and greatly improved AI tools on the market, the 2024 election cycle has already seen Super PACs and even local election candidates experiment with deepfake ads.
- Individuals and media organizations have different responsibilities when it comes to deepfake media literacy. But just what those duties are has not yet been fully defined by society or our legal frameworks.
- It’s unclear whether our current laws will be adequate to handle the results of AI-accelerated disinformation. A recent FEC petition may affect how candidates handle “synthetic media” going forward.
We’ve been warned this day would come since President Obama’s second term: Believable synthetic reanimations, also known as deepfakes, have entered the political arena.
As of spring 2023, one presidential contender’s campaign featured an ad in which another candidate’s simulated voice appeared to read the content of a social media post.
In another instance, a “just for fun” video simulated the arrest of a candidate — and went viral as “breaking news” in some circles.
And we haven’t even hit the primaries yet.
What impact can we expect deepfakes to have on democracy?
Some experts think we’re in for a rocky road, while others caution that the impact these synthetic videos and audio clips will have on public discourse is exaggerated.
Either way, generative AI has definitely entered the (political) chat.
TEAM DON’T OVERREACT
If you’d like reassurance that our future hasn’t already been coopted by deepfakes, look to Mansoor Ahmed-Rengers, founder of OpenOrigins and Frankli. Ahmed-Rengers believes “it is clearly inevitable that generated photos and generated videos will become photorealistic, indistinguishable from something taken from a camera, visually.”
He recently discussed cybersecurity and online authenticity on business futurist Rishad Tobaccowala‘s What’s Next podcast.
Listen to “The camera always lies?” on Spreaker.
However, the not-so-great news is that images don’t have to be perfect to influence us. Ahmed-Rengers told Tobaccowala, “There seems to be something innate in human nature that makes us want to trust photos [and video]. But I feel like we’re going to have to overcome that innate feeling in us. Or we’ll be forced to overcome that feeling. Because we will just see so much fake news being generated.”
If that all sounds like bad news, take a breath. Ahmed-Rengers is one of many working on technology that will identify or verify genuine content. And if such tools seem unlikely to take off, he notes that two sectors already have financial incentives to invest in them:
- Insurance companies need to fight fraudulent claims.
- Newsrooms should be concerned not only about inadvertently broadcasting fake news; they will also want to protect the value of their archival content.
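To make that archival point concrete, here is a minimal sketch of one common verification building block (illustrative only, and not necessarily how OpenOrigins or any newsroom actually works): record a cryptographic fingerprint of each asset at ingest, then recheck it before reuse. The file path is hypothetical.

```python
# Minimal sketch of one verification building block: fingerprint each
# archival file at ingest, then recheck before reuse. A changed hash means
# the bytes were altered; it says nothing about *how* they were altered.
import hashlib
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest, reading in 1 MB chunks so large
    video files never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# At ingest:        recorded = fingerprint(Path("archive/clip_2020.mp4"))  # hypothetical path
# Before broadcast: assert fingerprint(Path("archive/clip_2020.mp4")) == recorded
```

Full provenance systems, such as C2PA-style content credentials, go further by signing metadata at capture time, but hash-and-recheck is the core primitive underneath.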
IDENTIFYING A DEEPFAKE
In practical terms, how can we begin to address our credulity? There are some tell-tale signs, also known as artifacts, that can provide clues that a video was generated by artificial intelligence.
The Washington Post’s Shira Ovide provided five hints that an image or video is likely synthetic:
- The hands are wrong. Too many fingers! Too many… hands? Drawing lifelike hands is hard for artists, so it’s not surprising that generative tools struggle to get this part right.
- Inanimate objects aren’t quite right. Maybe they’re defying a law of physics, or maybe just half of a pair is missing, but something is off.
- The text is nonsense. Written elements on the image may be gibberish filler text or the words may be in a language that doesn’t make sense in context.
- The background is out of focus or distorted. If it’s blurrier than it should be or the proportions are off, that’s another clue.
- Are the people shiny? Seriously. Glossy or stylized faces that aren’t in a magazine ad should tip you off.
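The tells above are visual, but part of the triage can be automated. One hedged example, not from Ovide’s list: many generators write files with little or no camera EXIF metadata, whereas phone and camera photos usually carry it. A missing tag proves nothing, since metadata is trivially stripped, but it is a cheap first check. A minimal Python sketch, assuming Pillow is installed and a hypothetical file named suspect.jpg:

```python
# Cheap first-pass check: does the file carry camera EXIF metadata?
# Real photos usually do; many AI-generated files don't. A missing tag is
# only a weak signal -- metadata is easily stripped or forged.
from PIL import Image            # pip install Pillow
from PIL.ExifTags import TAGS

def exif_tags(path: str) -> dict:
    """Return the human-readable EXIF tags found in the image, if any."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = exif_tags("suspect.jpg")  # hypothetical filename
if {"Make", "Model"} & set(tags):
    print(f"Camera metadata present: {tags.get('Make')} {tags.get('Model')}")
else:
    print("No camera metadata found -- treat provenance as unverified.")
```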
Of course, more advanced deepfake technology won’t have such easy tells. Are state-of-the-art GANs accessible to the average person? Maybe not, but it wouldn’t be shocking for a political campaign to invest in high-dollar programs that make truly slick deepfakes, like the ones featured in the video (below).
FEC TO TAKE ON DEEPFAKE ADS?
The Federal Election Commission approved a rulemaking petition on August 11 asking it to make clear that political campaign ads featuring deepfakes, unless clearly labeled as such, are subject to regulations and laws prohibiting “fraudulent misrepresentation of campaign authority.”
The FEC will seek public comments on the petition at a future date, so it’s not clear how the particulars of enforcement might play out.
UC Berkeley professor Hany Farid noted that past attempts to regulate deepfakes in campaign ads have run into challenges. Farid told NPR’s Ayesha Rascoe, “Most of the laws that exist are either toothless — that is they’re extremely hard to enforce — or … are not broad enough to really cover the most extreme cases.”
A California law, for example, was allowed to sunset due to inefficacy. Farid explained that its flaws made it hard to enforce and limited its usefulness: it required proof of intent, and a geofencing element meant it could not reach ads from out-of-state super PACs, for example. Additionally, the ban applied only to the three months prior to Election Day.
Based on these prior attempts, Farid predicts, “I think the guardrails are not going to come from a regulatory regime.” After all, he notes, “it’s not illegal to lie in a political ad,” so perhaps it’s a bit much to expect the government to distinguish between synthetic and real media if it isn’t willing to differentiate between fact and fiction on this front.
Farid expects that standards enforcement will really come down to either campaigns or companies.
WHO’S REALLY ACCOUNTABLE, THOUGH?
In tandem with the deepfake conversation, the U.S. is considering the responsibility of news organizations to uphold certain ethical and journalistic standards.
This summer, the Media and Democracy Project challenged the license renewal of Philadelphia FOX affiliate WTXF-TV, alleging that the parent company violated its “statutory duty to conduct its operations in the public interest” when it broadcast lies about the 2020 election and the January 6 riots.
If and when the FCC weighs in, its response will either promote an atmosphere of accountability for news organizations or signal continued latitude for lax reporting, ushering in an era in which it is even more difficult for the public to discern authentic from synthetic media.
More on NAB Amplify
Why Deepfakes Do a Number on Cybersecurity
How Prepared Are You for a Deepfake Attack?
Deepfake AI: Broadcast Applications and Implications