TL;DR
- Renard T. Jenkins, president of the Society of Motion Picture and Television Engineers, is concerned that increasingly widespread and sophisticated large language models are being built with bias, unconscious or otherwise.
- That said, bias is not inherently a bad thing, says Jenkins; erasing misogyny from LLMs, for example, would be a form of good bias.
- He calls on companies to ethically source data, to employ a diverse group of decision makers and developers, and to educate, educate, educate.
SMPTE president Renard T. Jenkins has flagged concerns about bias in the development of AI tools and says diversity among decision makers is a practical means to prevent it.
“We should be fighting against bias in everything that we build and the best way I believe for us to do that is through inclusive innovation,” he told the Curious Refuge Podcast.
You can listen to the full episode here or read on for highlights.
“If your team looks like the world that you intend to serve, and to develop these tools for, then they’re going to notice when something is biased towards them or towards others.”
Jenkins expressed concern that increasingly widespread and sophisticated large language models are being built with bias, unconscious or otherwise.
He suggests that bias is not inherently a bad thing because certain forms of bias are there for our protection.
“As a child you learned very, very early not to put your hand on the stove because it’s hot. You develop a bias towards putting your hand on hot things. That’s good.
“That helps protect us as human beings. So there is that innate ‘bias’ that is born in us to protect us. The problem is when that bias is led by fear and inaccurate understanding of individuals or cultures. That’s what leads to the bad side of things.”
That goes for AI as well. Fear or misunderstanding of others can actually make its way into the development of a tool, he said, and once it makes its way in, it’s very hard to get it out.
He advocates a system of “good bias” that is not going to be xenophobic, misogynistic, racist, or homophobic. “I want all systems to be that way,” he said. “But I also believe that it can’t go into hyper overdrive because then it’s going to harm us. That’s why we have to understand bias and we have to remove bad bias from these algorithms.”
Aside from inclusive innovation, removing bias requires “sourcing ethically, cleaning [data] and monitoring your data,” Jenkins said. “That’s how we get to the point where we can hopefully one day not have to worry about the bad bias because it’s sort of been wiped out. That’s my goal.”
The problem is, as moderator Caleb Ward points out, the competing pressure to make money from AI products risks ethically sourced models being relegated behind the drive to monetize. Even new AI regulation in Europe or the US might not be sufficient to stop it.
“It’s an arms race right now,” Jenkins agreed. “There’s a lot of money being thrown around and that sometimes drives products that are not ready for primetime, without being fully vetted for what their impact will be.
“It’s not just about the tool in itself in the sense of helping the creative, it really is about the impact that it has on the user and on our society as a whole. That should be one of the primary things that all of these companies take into account when they’re doing this.”
Jenkins says he tells executives there is a way for them to continue to make “all of the wonderful financial gains that you’re making and for you to continue to create phenomenal tools, but there’s also a way for you to protect users.
“Because in truth, if you’re doing something that’s harming your users, that’s bad business. You know it’s bad business because over time you’re going to run out of users.”
Everybody in media, and in business generally, from the C-suite on down needs to be educated about AI risk and reward.
“The more that you’re educated about it, the more that you’ll understand. When you see something that could actually go in the wrong direction, then you have the responsibility to say ‘let’s slow this down’ and try and make sure that we’re helping,” he says.
“You got to protect the people and truly you shouldn’t be creating anything that’s going to cause harm.”