Tools used: GPT-4 for the script, Midjourney for the imagery, Runway Gen2 for the video
- The first AI-generated ads and official AI-generated political campaign video herald dangers for which we are ill prepared, according to experts.
- Once people grow accustomed to fake AI-generated videos, they’ll become more hardened and cynical, and harder to reach and convince.
- The time when a deepfake video that can’t be easily discerned from real events plays a major role in campaigns seems not only inevitable but closer than ever.
Ads for insurance or a car are in many ways the same as political campaign spots, but when the truth of what they are selling can always be doubted, how can consumers or voters know what they are buying into?
Artificial intelligence is going to make fools of us all and it’s a danger for which we are not prepared, according to experts.
In the week that saw one of the foundational figures of modern AI, Geoffrey Hinton, leave Google to decry the pace of change without checks and balances, the first AI-generated commercials were released.
Two of them are experiments — proof of what’s possible, with occasionally hilarious results — but the third has more profound implications.
“‘Synthetic Summer’ is a machine-learning interpretation of an American beer advert,” said Chris Boyle, co-founder of London-based Private Island, which generated a video ad from text prompts.
“We’ve been using Stable Diffusion, Control Net and Runway to understand new forms of moving and generative image for the last 12 months — exploring new ways of working and new mediums of visuals powered by Machine Learning,” Boyle told Roland Ellison at Interesting Engineering.
Another video creator named PizzaLater also made a spot for a pizza chain using an array of AI tools. He explained to Jamie Madge at Shots that he did it just for fun.
“I asked GPT-4 to ‘write me a silly script for a pizza restaurant commercial using broken English.’ It generated three scripts in total, and I picked my favorite parts to assemble the final script used in the video.”
He also asked the AI for 10-15 “meme-worthy names for a pizza restaurant,” and “Pepperoni Hug Spot” was chosen. In Midjourney, PizzaLater generated images of the restaurant’s exterior and some pizza backgrounds, and then used Runway Gen2 to create the spot simply by requesting “a happy man/woman/family eating a slice of pizza in a restaurant, tv commercial.”
Eleven Labs’ “Voicelab” delivered a few different voiceover reads of the script, allowing him to piece together the best takes. SOUNDRAW provided some appropriate background music.
It took all of three hours.
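The workflow PizzaLater describes can be pictured as a simple chain of prompts, one per tool. The sketch below is purely illustrative: the two GPT-4 prompts are quoted from his account, while the Midjourney, voiceover, and music prompts are hypothetical stand-ins (the article does not give his exact wording for those steps), and the real tools are driven through their own apps or APIs rather than a function like this.

```python
# Illustrative sketch of the multi-tool pipeline behind the "Pepperoni
# Hug Spot" spot. The GPT-4 prompt strings are quoted from the article;
# the prompts for the remaining stages are hypothetical placeholders.

SCRIPT_PROMPT = ("write me a silly script for a pizza restaurant "
                 "commercial using broken English")
NAME_PROMPT = "10-15 meme-worthy names for a pizza restaurant"

def build_pipeline(restaurant_name: str) -> list[dict]:
    """Lay out the generation steps in order, one entry per tool."""
    return [
        {"tool": "GPT-4", "task": "script", "prompt": SCRIPT_PROMPT},
        {"tool": "GPT-4", "task": "naming", "prompt": NAME_PROMPT},
        # Prompts below are invented for illustration only.
        {"tool": "Midjourney", "task": "stills",
         "prompt": f"exterior of {restaurant_name}, pizza backgrounds"},
        {"tool": "Runway Gen2", "task": "video",
         "prompt": ("a happy man/woman/family eating a slice of pizza "
                    "in a restaurant, tv commercial")},
        {"tool": "ElevenLabs Voicelab", "task": "voiceover",
         "prompt": "read the assembled script, several takes"},
        {"tool": "SOUNDRAW", "task": "music",
         "prompt": "upbeat commercial background track"},
    ]

if __name__ == "__main__":
    for step in build_pipeline("Pepperoni Hug Spot"):
        print(f"{step['tool']:>20}: {step['task']}")
```

The point of laying it out this way is how little glue the process needs: each stage is just a short natural-language prompt handed to a different off-the-shelf service, which is why the whole spot came together in an afternoon.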
“I believe things will really go off the rails when text-to-video becomes closer to photorealism,” he said.
Of course, it won’t be long before this is possible. Perhaps as soon as 18 months away — the date of the 2024 Presidential election.
It was inevitable that politicians would seize on the technology. Turns out that the Republican National Committee got in there first, in the US at least.
Its AI-generated ad, released last week, depicts a hypothetical future where President Biden is reelected, banks collapse, China invades Taiwan, and San Francisco is cordoned off by the military after being overrun by immigrants, gangs and drugs.
“It’s more notable for being the first of its kind than for the quality of the ad itself,” says Cameron Joseph at VICE. “President Biden and Vice President Kamala Harris clearly have that toothy, wolfish look common in AI-generated art. Most of the ad’s footage could have just as easily been replaced by b-roll video from actual events. And to their credit, the RNC explicitly released the ad as an AI-generated video, and labels it as such on the video itself.”
Campaign strategists in both parties told VICE that they doubt major candidates and the party committees will be willing to risk including fake, AI-generated videos in major TV campaign ads. The chance of being called out for lying and losing trust with voters simply isn’t worth it. They think AI technology will most often be used for more mundane tasks, like writing simple campaign ad scripts and press releases.
But the video, even if it is a gimmick, marks the beginning of a new era where it could become even harder for voters to discern truth from lies.
“The concern is when we get to the point where it can be done down at the grassroots level. There are tools that are out there where they could generate this stuff en masse in an automated way,” Dave Doermann, director of the University at Buffalo Artificial Intelligence Institute, tells Joseph.
“We’re not going to be able to detect it in real time fast enough that it makes any difference, [and] even if we could, the social media sites aren’t going to be the ones that are putting the effort into taking it down.”
Even if deepfakes don’t become ubiquitous before the 2024 election, still 18 months away, the mere fact that this kind of content can be created could affect the election. Knowing that fraudulent images, audio, and video can be created relatively easily could make people distrust the legitimate material they come across.
“In some respects, deepfakes and generative AI don’t even need to be involved in the election for them to still cause disruption, because now the well has been poisoned with this idea that anything could be fake,” independent AI expert Henry Ajder tells Thor Benson at Wired. “That provides a really useful excuse if something inconvenient comes out featuring you. You can dismiss it as fake.”
Democratic ad maker Jon Vogel agrees, telling VICE: “The biggest hurdle to effective political advertising is credibility. With AI technology becoming more common in our lives, voter skepticism will only continue to grow. This increases the burden on political media firms to find additional ways to make the ads credible.”
Also in the Wired article, Hany Farid, a professor at UC Berkeley’s School of Information, calls out the companies making generative AI tools, such as Runway, Google, and Meta, for playing with a runaway train.
Farid says that nobody wants to get “left behind,” so these companies tend to just release what they have as soon as they can.
“It consistently amazes me that in the physical world, when we release products there are really stringent guidelines,” Farid says. “You can’t release a product and hope it doesn’t kill your customer. But with software, we’re like, ‘This doesn’t really work, but let’s see what happens when we release it to billions of people.’”
The time when a deepfake video that can’t be easily discerned from real events plays a major role in campaigns seems not only inevitable but closer than ever.