September 28, 2022

AI Motion Pictures Are Just About a Thing. Here’s Where We Are Today.

By Adrian Pennington

Image created using generative AI

READ MORE: A glimpse into the future of cinema: The ultimate guide to A.I. art for film/animation/VFX (Pinar Seyhan Demirdag)

Independent filmmakers are experimenting with AI tools today. While the tools are not yet ready for their big-screen close-up, it won’t be long before they are widely adopted in Hollywood.

The most high-profile text-to-image AI is DALL-E 2, released by OpenAI. The model does not yet generate motion picture sequences, but the odds are that it soon will; OpenAI is likely working on this as we speak.

Los Angeles-based director Paul Trillo has been creating stop-motion animations using DALL-E, and explains how it can be done in this Twitter thread:

Only one of these cars is real. Another experiment pushing the limits of AI stop motion using dall-e in painting within video. Each frame uploaded one by one to generate 150 concept cars.@openaidalle #aiart #dalle #dalle2 #experimental #conceptcar #ai #openai pic.twitter.com/KmWBOVUem2

— Paul Trillo (@paultrillo) August 11, 2022

AI art is in its infancy and is only making fledgling attempts at “temporal coherence,” the ability to make something move as we expect it to in film and video (remembering that film itself is a set of still images replayed 24 times a second).

Deforum is a text-to-motion AI tool based on the AI model Stable Diffusion by Stability AI. AI artist Michael Carychao has used it to show how AI tools can re-create famous actors:

Wind and Wave #aiart #animation #stablediffusion pic.twitter.com/siQRZxroVm

— Michael Carychao (@MichaelCarychao) August 29, 2022

“In a couple of years, we’ll be able to write ‘Brad Pitt dancing like James Brown’ and be able to have a screen-ready coherent result,” Pinar Seyhan Demirdag, co-founder of AI-based content developer Seyhan Lee, predicts in her blog at Medium.

Another example using Deforum is provided by an artist known as Pharmapsychotic. The animated sample posted to Twitter is claimed to be raw output with no post-processing or interpolation:

WIP on "turbo" in #deforum. nice coherence boost and speed up by diffusing every Nth frame and interpolating forward warped prior frames. this is raw output with no post processing or interpolation.@zippy731 @xsteenbrugge pic.twitter.com/kKDW03GtKr

— pharmapsychotic (@pharmapsychotic) September 3, 2022
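The approach pharmapsychotic describes (diffuse only every Nth frame, then fill in the frames between) can be sketched in outline. In this minimal, purely illustrative sketch, `stylize` is a placeholder for the expensive per-frame diffusion pass, and intermediate frames are filled by plain linear blending rather than the forward-warped interpolation the tweet mentions, which would require optical flow:

```python
import numpy as np

def stylize(frame: np.ndarray) -> np.ndarray:
    """Placeholder for an expensive per-frame diffusion pass.
    Here it simply inverts pixel values so the effect is visible."""
    return 1.0 - frame

def stylize_every_nth(frames: list, n: int) -> list:
    """Run the costly pass only on every nth keyframe and linearly
    blend the frames in between, trading some fidelity for speed
    and smoother frame-to-frame coherence."""
    keys = {i: stylize(f) for i, f in enumerate(frames) if i % n == 0}
    key_idx = sorted(keys)
    out = []
    for i in range(len(frames)):
        if i in keys:
            out.append(keys[i])
            continue
        # Find the surrounding keyframes and blend between them.
        prev = max(k for k in key_idx if k < i)
        nxt = min((k for k in key_idx if k > i), default=prev)
        if nxt == prev:
            out.append(keys[prev])
        else:
            t = (i - prev) / (nxt - prev)
            out.append((1 - t) * keys[prev] + t * keys[nxt])
    return out
```

Blending between stylized keyframes is what damps the flicker that per-frame generation produces, since each in-between frame is constrained by its neighbors instead of being sampled independently.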

“Give it a couple of years, and you’ll be able to film a scene of four random people walking down an aisle, and to turn them into Dorothy and the gang in the Wizard of Oz,” Seyhan Demirdag comments. “Arguably, you can do this right now, but it will be wonky, smudgy, and missing 8K details, so not ready for the mass audience.”

There are two ways of transferring a style right now. One way is with a pre-defined style, like a Van Gogh painting, for example. The other is using text-to-image based models such as Disco Diffusion and VQGAN+CLIP, where you guide the style with words, referred to as “prompts.”

“These prompts are your most significant creative assets, and many people who make art with text-to-image tools also call themselves ‘prompt artists,’” she says.

There are even sites suggesting the best prompts to use with a specific AI — like the DALL-E 2 Prompt Book.
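Since prompts are plain text, assembling them programmatically is straightforward. The helper below is purely illustrative (the function name and parameters are invented, not part of any tool’s API) and shows one way to compose a subject, a style, and quality modifiers into a single prompt string:

```python
def build_prompt(subject: str, style: str = "", modifiers=None) -> str:
    """Compose a text-to-image prompt from a subject, an optional style,
    and a list of modifiers. Purely illustrative: real prompt conventions
    vary between DALL-E 2, Midjourney, and other tools."""
    parts = [subject]
    if style:
        parts.append(f"in the style of {style}")
    parts.extend(modifiers or [])
    return ", ".join(parts)

prompt = build_prompt(
    "a mysterious figure on an arid mining planet",
    style="1970s lo-fi sci-fi film still",
    modifiers=["35mm", "high detail"],
)
```

The resulting string would then be pasted into (or sent via the API of) whichever generator is in use; the craft lies in which modifiers each model responds to, which is exactly what resources like the Prompt Book catalog.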

Considerable work is being done to incorporate generative art models into games engines.

Daniel Skaale, who works for Khora VR, has posted a sample on Twitter in which he carried a 2D image he had created with the text-to-image AI Midjourney into the Unity games engine as a 3D scene.

A.I Midjourney to 3D (Unity HDRP) #screenshotsaturday #gamedev #indiedev #unity3d #madewithunity #IndieGameDev pic.twitter.com/UtEHd8ASWA

— Daniel Skaale (@DSkaale) August 27, 2022

Generating real-time imagery in Unity or Unreal Engine remains an unexplored territory with huge potential, Seyhan Demirdag writes.

Face Replacement

Twitter user Todd Spence posted a mashup of Willem Dafoe as Julia Roberts in Pretty Woman. While it was just for fun, examples like this, made using AI apps like Reface, give us a glimpse into how AI will help optimize production in the future.

If PRETTY WOMAN starred Willem Dafoe instead. Good God. pic.twitter.com/GdieGzSKuX

— SPENCE, TODD (@Todd_Spence) September 5, 2021

“Soon, studios will simply need to rent Brad Pitt’s face value rights for him to appear in the upcoming blockbuster film without having to leave the comfort of his couch,” Seyhan Demirdag comments.

Similar models have already been used. For example, Focus Features’ Roadrunner: A Film About Anthony Bourdain used deepfake technology to recreate Bourdain’s voice and have it say things he never actually said. This was controversial mainly because the filmmakers did not acknowledge the AI up front. The Andy Warhol Diaries also used AI to mimic Warhol’s narration, but since the technique was credited in the title sequence, the Netflix documentary received applause for its innovation.

READ MORE: Crossing the Line: How the “Roadrunner” Documentary Created an Ethics Firestorm (NAB Amplify)

READ MORE: Pop Will Eat Itself: The Ultimate Manufactured Warhol (NAB Amplify)

As with any other technology in its infancy, AI art still lacks full temporal coherence: the capacity to make a figure do jumping jacks or walk down the street convincingly.

“Right now, you can produce mind-bending, never-before-seen sequences with AI, but you cannot do everything (yet),” Seyhan Demirdag says.

“In a few years, we’ll be able to generate coherent and screen-ready full features that are entirely generated. If you are a producer, director, studio owner, or VFX artist who wants to stay ahead of the curve, now is the time to invest in this technology; otherwise, your competition will be generating headlines, not you.”





Fabian Stelzer’s “Salt” is the World’s First Fully AI-generated Multiplot “Film”

By Abby Spessard

READ MORE: This guy is using AI to make a movie — and you can help decide what happens next (CNN Business)

Fabian Stelzer is creating a sci-fi movie, Salt, using artificial intelligence coupled with crowdsourced narrative twists. On Twitter, Salt is billed as “the world’s first fully AI-generated multiplot ‘film’ — a Web6 internet adventure where your choices create a 1970s lo-fi sci-fi universe.”

🧂 pic.twitter.com/Hbig80jfdZ

— SALT (@SALT_VERSE) June 14, 2022

While Stelzer may not be a filmmaker by trade, Rachel Metz at CNN Business says his use of AI tools to create a series of short films “points to what could be a new frontier for making movies.”

“Stelzer creates images with image-generation tools such as Stable Diffusion, Midjourney and DALL-E 2. He makes voices mostly using AI voice generation tools such as Synthesia or Murf. And he uses GPT-3, a text-generator, to help with the script writing,” Metz details.

After watching each installment, Salt viewers can vote on story beats to determine what will happen next. Stelzer hopes one day to cut the “play-to-create experiment” into a feature-length film.

𝙸 𝚠𝚊𝚜 𝚋𝚘𝚛𝚗 𝚘𝚗 𝙴𝚊𝚛𝚝𝚑. 𝙱𝚞𝚝 𝚠𝚎 𝚖𝚘𝚟𝚎𝚍 𝚊𝚛𝚘𝚞𝚗𝚍 𝚊 𝚕𝚘𝚝 𝚏𝚘𝚛 𝚍𝚊𝚍'𝚜 𝚠𝚘𝚛𝚔. 𝚆𝚒𝚕𝚕 𝚗𝚎𝚟𝚎𝚛 𝚏𝚘𝚛𝚐𝚎𝚝 𝚜𝚎𝚎𝚒𝚗𝚐 𝚝𝚑𝚎 𝚜𝚑𝚛𝚘𝚘𝚖𝚜 𝚏𝚘𝚛 𝚝𝚑𝚎 𝚏𝚒𝚛𝚜𝚝 𝚝𝚒𝚖𝚎. 𝚄𝙼𝙰𝙼𝙸 𝚜𝚞𝚛𝚎 𝚕𝚘𝚘𝚔𝚎𝚍 𝚘𝚞𝚝 𝚏𝚘𝚛 𝚒𝚝𝚜 𝚘𝚠𝚗. pic.twitter.com/Ag3h6QEcuH

— SALT (@SALT_VERSE) August 15, 2022

“In my little home office studio I can make a ‘70s sci-fi movie if I want to,” he says. “And actually I can do more than a sci-fi movie. I can think about, ‘What’s the movie in this paradigm, where execution is as easy as an idea?’”

Salt’s plot is still fairly vague — at least for now, as Metz notes — but Stelzer continues to release short clips and images on Twitter. “The resulting films are beautiful, mysterious, and ominous,” she writes. “So far, each film is less than two minutes long, in keeping with Twitter’s maximum video length of two minutes and 20 seconds. Occasionally, Stelzer will tweet a still image and a caption that contribute to the series’ strange, otherworldly mythology.”

The genesis for Salt emerged from Stelzer’s experiments in the text-to-image generator Midjourney. Working from his prompts, the system generated images he said “felt like a film world,” depicting “alien vegetation, a mysterious figure lurking in the shadows, and a weird-looking research station on an arid mining planet.”

Stelzer said, “I saw this in front of me and was like, ‘Okay, I don’t know what’s happening in this world, but I know there’s lots of stories, interesting stuff.’ I saw narrative shades and shadows of ideas and story seeds.”

But Stelzer admits that he’s not entirely sure whether his idea for Salt will work, partly because community involvement may drive the project to deviate wildly from what he initially planned. “The charm of the experiment to me, intellectually, is driven by the curiosity to see what I as the creator and the community can come up with together.”




AI ART — I DON’T KNOW WHAT IT IS BUT I KNOW WHEN I LIKE IT:

Even with AI-powered text-to-image tools like DALL-E 2, Midjourney and Craiyon still in their relative infancy, artificial intelligence and machine learning are already transforming the definition of art — including cinema — in ways no one could have predicted. Gain insights into AI’s potential impact on Media & Entertainment in NAB Amplify’s ongoing series of articles examining the latest trends and developments in AI art:
  • What Will DALL-E Mean for the Future of Creativity?
  • Recognizing Ourselves in AI-Generated Art
  • Are AI Art Models for Creativity or Commerce?
  • In an AI-Generated World, How Do We Determine the Value of Art?
  • Watch This: “The Crow” Beautifully Employs Text-to-Video Generation




