READ MORE: We Need to Talk About How Good A.I. Is Getting (The New York Times)
The best AI systems, from OpenAI’s DALL-E 2 and GPT-3 to DeepMind’s AlphaFold, are now so capable — and improving at such a rapid pace — that the conversation in Silicon Valley is starting to shift.
Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing AI; many now believe that major changes are right around the corner, for better or worse.
“We all need to start adjusting our mental models to make space for the new, incredible machines in our midst,” says Kevin Roose, a technology columnist and the author of Futureproof: 9 Rules for Humans in the Age of Automation.
Take Google’s LaMDA, the AI that hit the headlines when a Google engineer claimed it had become sentient — a claim that ultimately cost him his job.
Google disputed the claims, and many academics have argued against the engineer’s conclusions. But take out the sentience part, and “a weaker version of the argument — that state-of-the-art language models are becoming eerily good at having humanlike text conversations — would not have raised nearly as many eyebrows,” Roose says in an article for The New York Times.
It seems as if AI models targeting all sorts of applications across different industries have suddenly flipped a switch marked “turbocharge.”
“AI systems can go from adorable and useless toys to very powerful products in a surprisingly short period of time,” Ajeya Cotra, a senior analyst with Open Philanthropy, told Roose. “People should take more seriously that AI could change things soon, and that could be really scary.”
EXPLORING ARTIFICIAL INTELLIGENCE:
With nearly half of all media and media tech companies incorporating Artificial Intelligence into their operations or product lines, AI and machine learning tools are rapidly transforming content creation, delivery and consumption. Find out what you need to know with these essential insights curated from the NAB Amplify archives:
- This Will Be Your 2032: Quantum Sensors, AI With Feeling, and Life Beyond Glass
- Learn How Data, AI and Automation Will Shape Your Future
- Where Are We With AI and ML in M&E?
- How Creativity and Data Are a Match Made in Hollywood/Heaven
- How to Process the Difference Between AI and Machine Learning
There are plenty of skeptics who say claims of AI progress are overblown and that we’re still decades away from creating true AGI — artificial general intelligence — that is capable of “thinking” for itself.
“Even if they are right, and AI doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA and DALL-E 2 could become a powerful force in society,” Roose says. “AI gets built into the social media apps we use every day. It makes its way into weapons used by the military and software used by children in their classrooms. Banks use AI to determine who’s eligible for loans, and police departments use it to investigate crimes.”
In a few years, it’s likely that the vast majority of the photos, videos and text we encounter on the internet could be AI-generated. Our online interactions “could become stranger and more fraught,” as we struggle to figure out which of our conversational partners are human and which are convincing bots. And “tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale,” distorting the political process in ways we won’t see coming.
Roose outlines three things that could help divert us from this dystopian future.
First, regulators and politicians need to get up to speed.
“Few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the AI frontier,” he says.
If more politicians and regulators don’t get a grip, “we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance.”
Second, Roose calls on big tech companies — Google, Meta and OpenAI — to do a better job of explaining what they’re working on, “without sugarcoating or soft-pedaling the risks,” he says.
“Right now, many of the biggest AI models are developed behind closed doors, using private data sets and tested only by internal teams. When information about them is made public, it’s often either watered down by corporate PR or buried in inscrutable scientific papers.
“Tech companies won’t survive long term if they’re seen as having a hidden AI agenda that’s at odds with the public interest.”
And if these companies won’t open up voluntarily, AI engineers should go around their bosses and talk directly to policymakers and journalists themselves.
Third, the news media needs to do a better job of explaining AI to the public. Roose isn’t excluding himself from that criticism, either.
Journalists too often use lazy and outmoded sci-fi shorthand (Skynet, HAL 9000) to translate what’s happening in AI for a general audience.
“Occasionally, we betray our ignorance by illustrating articles about software-based AI models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.”
Cotra has estimated that there is a 35% chance of “transformational AI” emerging by 2036. This is the sort of AI that is so advanced it will deliver large-scale economic and societal changes, “such as eliminating most white-collar knowledge jobs.”
Roose says we need to move the discussion away from a narrow focus on AI’s potential to “take my job,” and instead try to understand all of the ways AI is evolving, for good and for bad.
“What’s missing is a shared, value-neutral way of talking about what today’s AI systems are actually capable of doing, and what specific risks and opportunities those capabilities present.”
We need to do this in a hurry.