“If you haven’t had an existential crisis about it, you probably haven’t really experienced AI,” says Ethan Mollick, associate professor of management at the University of Pennsylvania and co-director of the Generative AI Lab at Wharton.
In an episode of Big Think, Mollick argues, “We shouldn’t feel like we don’t have control over how AI is used. As managers and leaders, you get to make these choices about how to deploy these systems to increase human flourishing. As individuals, we get to decide how to be the human who uses these systems well.”

Watch the full episode below, or read on to learn Mollick’s “four scenarios of the future” for AI.
He recently published “Co-Intelligence: Living and Working With AI,” a book that considers how people might leverage AI.
“We have a lot of early evidence that this is going to be a big deal for work,” Mollick says.
In fact, he notes, “There’s now multiple studies across fields ranging from consulting to legal to marketing to programming, suggesting 20-80% performance improvements across a wide range of tasks for people who use AI versus don’t.”
Mollick notes that not only does GPT stand for “generative pre-trained transformer,” it’s also the abbreviation for “general purpose technology,” which he explains as “once-in-a-generation technologies, things like steam power or the internet or electrification, that change everything they touch.”
Scenario One: The World Is Static
In this scenario, AI stops improving: “This is the best AI you’re ever going to use,” he says. But Mollick considers this “the least likely” of the four scenarios. Instead, he predicts, “whatever AI you’re using now is the worst AI you’re ever going to use.”
Why? Mollick explains, “Even if the core large language model development stopped right now, there’s another 10 years of just making it work better with tools and with industry in ways that’ll continue to be disruptive.”
Scenario Two: Linear Growth
“It’s very likely that AIs will continue to improve and get better in the near term,” Mollick says, so he wouldn’t rule this version out.
Scenario Three: Exponential Growth
Mollick sees evidence for something faster than linear growth: “Right now, the doubling time for [AI] capability is about every five to nine months, which is an exceptionally fast doubling time.”
For context, “Moore’s law, which is the rule that’s kind of kept the computer world going, doubles the power of computer processing chips every 24 to 26 months.”
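For a rough sense of what those doubling times imply, here is a minimal Python sketch (not from the episode; the seven-month figure is simply the midpoint of Mollick’s five-to-nine-month range, and 25 months approximates his Moore’s law figure):

```python
# Back-of-the-envelope comparison of the two doubling times Mollick cites.
# Assumed figures: ~7 months per doubling for AI capability (midpoint of
# his 5-9 month range) vs. ~25 months per doubling for Moore's law.

def growth_factor(months_elapsed: float, doubling_time_months: float) -> float:
    """Multiplicative growth after `months_elapsed`, given a fixed doubling time."""
    return 2 ** (months_elapsed / doubling_time_months)

for years in (1, 2, 5):
    months = years * 12
    ai = growth_factor(months, 7)      # AI capability, ~7-month doubling
    chips = growth_factor(months, 25)  # Moore's law, ~25-month doubling
    print(f"{years} yr: AI ~{ai:.1f}x vs. chips ~{chips:.1f}x")
```

Under those assumptions, AI capability roughly triples every year while chip performance grows about 1.4x, which is why Mollick treats this pace as exceptional.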
Scenario Four: AGI
In the final scenario, OpenAI achieves its goal of artificial general intelligence. “This is the idea that a machine will be smarter than a human [at] almost all tasks.”
Mollick points out that this version of AI’s development removes human agency from the equation “because it’s something that happens to us.”
It’s also tied to the concept of p(doom), which Mollick explains as the “probability that we’re all going to die” as a result of this experimentation.
He doesn’t contemplate this sub-scenario much, for two reasons: “I don’t think we can assign a probability to things going wrong, and again, that makes the technology the agent. We get to decide how this thing is used.”