- The Biden administration issued an executive order directing best practices and new standards to be created for using and developing artificial intelligence. Some elements should make the M&E industry take notice.
- The directives to create “standards and best practices for detecting AI-generated content and authenticating official content” and “develop guidance for content authentication and watermarking to clearly label AI-generated content” are particularly important to keep an eye on.
- Some experts are concerned that current technology capabilities are not in alignment with the Biden administration’s vision of watermarking and authenticating content. For its part, the White House seems to hope the executive order will spur innovation in this space.
The Biden administration issued an executive order in late October directing that new best practices and standards be created around AI, serving as a launchpad for new initiatives and a regulatory framework to govern this burgeoning technology sector.
But critics and supporters alike have expressed concerns that it sends some wrong messages about generative AI.
The order, while comparatively brief at around 100 pages, attempts to cover a lot of ground to promote “Safe, Secure, and Trustworthy Artificial Intelligence.” (You can read the White House’s full artificial intelligence executive order fact sheet here.) Not all of it is immediately applicable to M&E, but some aspects should make the industry sit up and take notice.
The Middle Way
The Biden White House’s directives to create “standards and best practices for detecting AI-generated content and authenticating official content” and “develop guidance for content authentication and watermarking to clearly label AI-generated content” are particularly important for all creators and media companies to keep an eye on.
Meanwhile, Hollywood strikers may see the order’s plan to study and then regulate AI’s effects on the labor market either as an expression of solidarity or as too little, too late to make a real impact on their negotiations.
“President Biden’s executive order tries to chart a middle path — allowing A.I. development to continue largely undisturbed while putting some modest rules in place, and signaling that the federal government intends to keep a close eye on the A.I. industry in the coming years,” The New York Times’ Kevin Roose writes. “In contrast to social media, a technology that was allowed to grow unimpeded for more than a decade before regulators showed any interest in it, it shows that the Biden administration has no intent of letting A.I. fly under the radar.”
MIT Technology Review notes that “[t]he executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced.”
Labels, Watermarking and Current Capabilities
Regarding the labeling and watermarking aspects of the order, Tate Ryan-Mosley and Melissa Heikkilä write that “[t]he hope is that labeling the origins of text, audio, and visual content will make it easier for us to know what’s been created using AI online. These sorts of tools are widely proposed as a solution to AI-enabled problems such as deepfakes and disinformation.”
However, they caution, “technologies such as watermarks are still very much works in progress. There currently are no fully reliable ways to label text or investigate whether a piece of content was machine generated. AI detection tools are still easy to fool.” The Biden administration says it will work with the (unaffiliated) C2PA initiative both to improve these technologies and to encourage their uptake, which remains voluntary but necessary.
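To see why experts call watermarking a work in progress, consider a deliberately naive sketch below. It is a hypothetical illustration, not C2PA or any real standard: it hides a provenance tag in the least-significant bits of pixel bytes, then shows how a single pass of lossy “processing” erases the mark entirely.

```python
# Toy illustration (hypothetical; not C2PA or any deployed scheme):
# embed a provenance tag in the least-significant bits (LSBs) of
# pixel bytes, then show how easily routine processing strips it.

def embed(pixels: bytes, tag: bytes) -> bytes:
    """Hide each bit of `tag` in the LSB of successive pixel bytes."""
    bits = [(b >> i) & 1 for b in tag for i in range(8)]  # LSB-first
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract(pixels: bytes, n: int) -> bytes:
    """Read back an n-byte tag from the LSBs."""
    bits = [p & 1 for p in pixels[: n * 8]]
    return bytes(
        sum(bits[j * 8 + i] << i for i in range(8)) for j in range(n)
    )

pixels = bytes(range(64))           # stand-in for raw image data
tagged = embed(pixels, b"AI")
print(extract(tagged, 2))           # the mark survives a clean copy

# One round of "re-encoding" (here modeled as zeroing every LSB,
# which lossy compression routinely does) destroys the watermark.
laundered = bytes(b & 0xFE for b in tagged)
print(extract(laundered, 2))        # the tag is gone
```

Real schemes are far more robust than this strawman, but the underlying dynamic is the same one DiResta and Willner describe: a watermark only certifies content that carries it, and bad actors can simply strip or never apply it.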
“The White House’s approach remains friendly to Silicon Valley, emphasizing innovation and competition rather than limitation and restriction,” Ryan-Mosley and Heikkilä write. “The strategy is in line with the policy priorities for AI regulation set forth by Senate Majority Leader Chuck Schumer, and it further crystallizes the lighter touch of the American approach to AI regulation.”
READ MORE: Three things to know about the White House’s executive order on AI (MIT Technology Review)
Ryan-Mosley and Heikkilä aren’t the only observers concerned that current capabilities don’t align with the aspirations of the executive order.
“[O]n the subject of content integrity – including issues related to provenance, authenticity, synthetic media detection and labeling – the order overemphasizes technical solutions,” Renée DiResta and Dave Willner argue for Tech Policy Press.
They explain, “We are both strong proponents of watermarking and other efforts at content authentication, and believe that the Executive Order’s focus on this adds a government imprimatur to what has largely been an industry effort at establishing best practices, emphasizing its importance. But the administration’s approach fails to reckon with the fact that many of the harms of generative AI come from the open source model community, particularly where child sexual abuse materials and non-consensual intimate imagery are concerned. And it does not acknowledge that watermarking of generated content may not be adopted universally, and will not be adopted by bad actors using open source models.”
Additionally, DiResta and Willner warn, “over-reliance on technological solutions for watermarking generative content risks creating systems that miss non-watermarked but nonetheless AI-generated content, creating a false perception that it is legitimate. Additionally, there are increasingly complex attempts to assign provenance to authentic content at the time of its creation. Here too we will find ourselves in an intermediate world where older devices and many platforms and publishers do not participate. The challenges of ascertaining what is real from among this combination of generative-and-watermarked, generative-and-not-watermarked, real-and-certified, and real-and-not certified will potentially be both very confusing and deeply corrosive to public trust.”
Ultimately, they write, “the most important defense will be an educated citizenry trained in critical thinking that is reflexively skeptical of claims that are too outlandish, or too in line with their own biases and hopes.”
“For now, leaders and lawmakers are certainly giving the impression that they are taking the opportunities and threats posed by AI seriously,” Tech Policy Press’ Gabby Miller writes. She adds, “It remains possible that the world will look back on these last days of October 2023 as the turning point President Biden suggested it might be.”