TL;DR
- Viral video creator Karen X Cheng talks with a16z about using generative AI tools to create videos and the implications they have for the wider creative community.
- AI art tools will democratize the art world but there will still be room for highly paid and innovative outliers able to combine the tools in unique ways.
- Cheng feels a responsibility to portray the tools in an optimistic light but admits that many people will lose out as AI enters the creative mainstream.
READ MORE: AI and the Creator Economy with Karen X Cheng (a16z)
In 2022, AI was most certainly a gimmick and a headline grabber. In 2023, the story will be different.
“People will start using AI in their workflow because it makes sense,” award-winning director Karen X Cheng says in an interview.
In an a16z podcast, host Steph Smith talks with Cheng about her use of generative AI tools like DALL-E, Midjourney and Stable Diffusion. Cheng has more than a million followers and almost everything she creates goes viral — including a video of her becoming a lawnmower (yes, that’s right), an AI-generated magazine cover for Cosmo, and a DALL-E fashion show.
“It is so much harder to make a video go viral than it was 10 years ago,” she says. “The way to do it now is you have to have a following. It’s not so much about trying to make something viral but about building an established follower base so that the number of people who see your work steadily gets higher.”
For Cheng, the secret ingredient that unlocked viral videos in the age of the algorithm was producing a behind-the-scenes look at how she made a piece of content and posting it alongside each new release.
Recently she has successfully created content for sponsored partners using generative AI.
“I had to find new toys to play with,” she says. “I noticed the insane stuff in AI white papers and what researchers were doing but it’s not their job to explore AI storytelling or cinematic potential.
“So, I started experimenting by taking the research from white papers and turning it into social media-friendly videos.”
Aside from the leading text-to-image tools, Cheng also uses AIs that require more technical knowledge but enable niche techniques. DAIN, for example, applies artificial slow motion to video by synthesizing new in-between frames. She used it on a stop-motion video of herself lying on the lawn to smooth the action so she appeared “as if she were a lawnmower.”
It went viral.
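For the technically curious: DAIN does this with a depth-aware neural network, but the underlying idea of frame interpolation can be sketched with classical tools. The snippet below is a minimal, hypothetical illustration using OpenCV’s Farneback optical flow to synthesize one in-between frame per original frame; the filename and the simple half-flow warp are assumptions for the demo, not a description of how DAIN works.

```python
import cv2
import numpy as np

# Rough sketch of frame interpolation for artificial slow motion.
# DAIN uses a depth-aware neural network; this stand-in estimates classical
# optical flow and warps each frame halfway toward the next one, doubling
# the frame count (2x slow motion when played back at the original rate).
# Assumption: "input.mp4" is any short clip available on disk.

cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
slow_motion_frames = []

while ok:
    ok, nxt = cap.read()
    if not ok:
        slow_motion_frames.append(prev)
        break

    # Per-pixel motion between the two original frames.
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Backward-warp the first frame halfway along the flow to synthesize
    # an approximate in-between frame.
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    midpoint = cv2.remap(prev, map_x, map_y, cv2.INTER_LINEAR)

    slow_motion_frames.extend([prev, midpoint])
    prev = nxt

cap.release()
```

Played back at the original frame rate, the doubled sequence runs at half speed, which is the same effect DAIN achieves with far cleaner, learned interpolation.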
NeRF (neural radiance field) technology lets you scan a scene with any camera and reconstruct it in 3D as a light field, so the lighting changes realistically as you move through the scene. “That’s why it can handle mirrors,” she explains, “whereas traditional photogrammetry cannot.”
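The sketch below is a toy, self-contained illustration of the volume rendering at the core of a radiance field, with a hand-written density/colour function standing in for the trained network a real NeRF would query; everything in it is an assumption for demonstration, not part of Cheng’s pipeline. The colour’s dependence on viewing direction is what lets such representations reproduce reflections that a fixed-texture photogrammetry mesh cannot.

```python
import numpy as np

# Toy illustration of the idea behind a NeRF-style radiance field: density and
# view-dependent colour are sampled along each camera ray and alpha-composited
# into a pixel. A real NeRF queries a trained neural network; a hand-written
# function stands in for it here.

def toy_radiance_field(points, view_dir):
    """Return (density, rgb) at sample points, conditioned on view direction."""
    dist = np.linalg.norm(points, axis=-1)
    density = 5.0 * np.exp(-4.0 * (dist - 1.0) ** 2)        # a soft shell of "stuff"
    red = np.full_like(dist, 0.5 + 0.5 * max(view_dir[0], 0.0))  # view-dependent tint
    rgb = np.stack([red, np.full_like(dist, 0.4), np.full_like(dist, 0.8)], axis=-1)
    return density, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one camera ray by alpha compositing samples along it."""
    t = np.linspace(near, far, n_samples)
    points = origin[None, :] + t[:, None] * direction[None, :]
    density, rgb = toy_radiance_field(points, direction)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)                  # per-sample opacity
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * transmittance                         # light reaching the camera
    return (weights[:, None] * rgb).sum(axis=0)

# The same scene viewed from two directions yields two different colours --
# the view dependence that makes mirrors and highlights possible.
print(render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
print(render_ray(np.array([-3.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```

Running it prints two different pixel colours for the same scene seen from different angles, which is the view dependence Cheng is pointing at.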
Each AI tool has a specific purpose: “Where it becomes more interesting is when you combine them. For example, you can generate an image in DALL-E and then use the CapCut app to turn it into a 3D image. Since image synthesizers don’t do human faces very well, you can run them through Facetune — an app that will fix it.”
Cheng feels that AI art will significantly lower the barrier to entry to becoming an artist. “To be an artist you [historically] have to have a lot of time, a lot of training and sometimes the money to do that. Now, everyone can do it. Image synthesizers take the artistic skill of artists and give it to everyone. There will still be standouts — they will be the ones finding different or creative innovations with the ability to combine things in different ways.”
In the popular conception, AI is all-powerful and destined to replace humans. Cheng says she feels a responsibility to counter that narrative by portraying the tools in an optimistic light.
“I’ve had to unlearn a lot of my bad habits. As a trained viral video creator, I am rewarded for making clickbait headlines. My first instinct was to make a bunch of human versus machine videos. Then I realized that will just freak people out,” she comments.
“There are legitimate reasons to be worried about AI, and it will negatively impact some people more than others, but AI can be used for good or bad. The media will push toward the bad because that gets clicks and views.”
Having amassed a sizeable following, Cheng says she felt less pressure to make clickbait and decided to make videos depicting AI in a positive manner.
“It does feel like a collaboration. You often get results back that you didn’t expect and which prompt you to go down that rabbit hole.”
Read It on Amplify: The Ways AI Is Going to Revolutionize Filmmaking
Read It on Amplify: Now We Have an AI That Mimics Iconic Film Directors
Read It on Amplify: Can AI Imagery and Video Actually Be Good for the Creator Economy?
However, she doesn’t deny that AI will negatively impact many people. The introduction of AI into the creative industries, for example, will put “incredible downward pressure” on prices, leaving the vast majority of people worse off.
“I do worry about that for creators and I don’t know how it will play out. If you hire a human to use AI as a tool then you pay the human.”
Cheng advises, “I would say the best thing to do is not to learn a specific skill, because technology is changing so quickly, but to adopt a specific mindset. You have to accept that the model humans had, which is to choose a career and have it for life, is gone. The sooner you can accept that the world is always changing, the better off you will be.”
AI-powered video tools for creators are on the way — but aren’t here yet. “That’s why I’ve been doing so much hacking of AI tools, because it’s not quite there yet. Once it gets there, though, be careful what you wish for. My wish is for humans to take the ethics of AI very seriously. By which I mean that everyone working on AI should be held to a standard of using AI as a positive force rather than a negative one. I hope society finds a way to seriously penalize those who use AI negatively.
“For example, deception is bad. If you alter things you need to disclose what and how.”
Do we need to label things as AI or human-generated?
“It will be necessary and will be similar to nutrition facts on food packaging,” Cheng suggests. “There will need to be a universal standard that shows this is a video produced in such and such a way, especially if the message in the video is very important (like a political video, rather than a social influencer’s vlog).
“I would love to see a culture develop where, as part of being human, we use technology responsibly and ethically.”
AI ART — I DON’T KNOW WHAT IT IS BUT I KNOW WHEN I LIKE IT:
Even with AI-powered text-to-image tools like DALL-E 2, Midjourney and Craiyon still in their relative infancy, artificial intelligence and machine learning are already transforming the definition of art — including cinema — in ways no one could have predicted. Gain insights into AI’s potential impact on Media & Entertainment in NAB Amplify’s ongoing series of articles examining the latest trends and developments in AI art:
- What Will DALL-E Mean for the Future of Creativity?
- Recognizing Ourselves in AI-Generated Art
- Are AI Art Models for Creativity or Commerce?
- In an AI-Generated World, How Do We Determine the Value of Art?
- Watch This: “The Crow” Beautifully Employs Text-to-Video Generation