With the advance of systems like OpenAI’s DALL-E 2, DeepMind’s Gato and large language models like Facebook’s OPT, some experts believe that we are now within striking distance of “artificial general intelligence,” otherwise known as AGI. This is an often-discussed benchmark that refers to powerful, flexible AI that can outperform humans at any cognitive task.
Nonsense, says Rob Toews, a partner at VC firm Radical Ventures; but equally off the mark, he argues, are those who dismiss AI’s breathtaking possibilities.
“There is no such thing as artificial general intelligence,” he writes at Forbes. “AGI is neither possible nor impossible. It is, rather, incoherent as a concept.”
The public discourse needs to be reframed.
His argument echoes that of the influential philosopher Thomas Nagel: AI is, and will remain, fundamentally unlike human intelligence.
Nagel (writing in 1974) claimed that it is impossible to know, in a meaningful way, exactly what it is like to be another organism or species. The more unlike us the other organism or species is, the more inaccessible its internal experience is.
He used bats as an example to illustrate this point. He chose bats because, as mammals, they are highly complex beings, yet they experience life dramatically differently than we do: they fly, they use sonar as their primary means of sensing the world, and so on.
“It is a mistake to analogize AI too directly to human intelligence,” says Toews. “Today’s AI is not simply a ‘less evolved’ form of human intelligence; nor will tomorrow’s hyper-advanced AI be just a more powerful version of human intelligence.”
The problem with the entire discussion is that the presence or absence of sentience is, by definition, unprovable, unfalsifiable, unknowable.
“When we talk about sentience, we are referring to an agent’s subjective inner experiences, not to any outer display of intelligence. No one (not Google’s sacked AI engineer Blake Lemoine, nor the bosses who dismissed both him and his claims) can be fully certain about what a highly complex artificial neural network is or is not experiencing internally.”
AI, he maintains, is best thought of not as an imperfect emulation of human intelligence, but rather as a distinct, alien form of intelligence, whose contours and capabilities differ from our own in basic ways.
To make this more concrete, consider the state of AI today. Today’s AI far exceeds human capabilities in some areas — and woefully underperforms in others.
For example, AI models have produced a solution to the protein folding problem, a fiendishly complicated riddle that requires forms of spatial understanding and high-dimensional reasoning “that simply lie beyond the grasp of the human mind.”
Meanwhile, any healthy human child possesses “embodied intelligence” that, according to Toews, far eclipses the world’s most sophisticated AI.
“From a young age, humans can effortlessly do things like play catch, walk over unfamiliar terrain, or open the kitchen fridge and grab a snack. Physical capabilities like these have proven fiendishly difficult for AI to master.”
“Both the overexcited zealots who believe that super-intelligent AI is around the corner, and the dismissive skeptics who believe that recent developments in AI amount to mere hype, are off the mark in some fundamental ways in their thinking about modern AI.”— Rob Toews
So we need to conceive of intelligence differently. It is not a single, well-defined, generalizable capability, nor even a particular set of capabilities.
To define general AI as an AI that can do what humans do — but better — is shortsighted, Toews asserts. “To think that human intelligence is general intelligence is myopically human-centric,” he says. “If we use human intelligence as the ultimate anchor and yardstick for the development of artificial intelligence, we will miss out on the full range of powerful, profound, unexpected, societally beneficial, utterly non-human abilities that machine intelligence might be capable of.”
The point is that AI’s true potential lies with the development of novel forms of intelligence that are utterly unlike anything that humans are capable of. If AI is able to achieve goals like this, who cares if it is “general” in the sense of matching human capabilities overall?
“Artificial intelligence is not like human intelligence. When and if AI ever becomes sentient — when and if it is ever ‘like something’ to be an AI — it will not be comparable to what it is like to be a human. AI is its own distinct, alien, fascinating, rapidly evolving form of cognition.”
What matters is what artificial intelligence can achieve. Delivering breakthroughs in basic science (like the protein research Alphafold), tackling species-level challenges like climate change, advancing human health and longevity, deepening our understanding of how the universe works — outcomes like these are the true test of AI’s power and sophistication.
To Come to Terms With AI, We Need to Understand Humanity
Call it evolution if you like, but everyone has to get over the hurdle of thinking that there’s something unique about Homo sapiens — creativity, dexterity, empathy perhaps — that differentiates us from machines.
“The key to understanding both the inaccurate embracing of machines and the over-dismissal of AI capacities is to see the limits of the divide between human nature and digital algorithmic control,” says Joanna J. Bryson, a Professor of Computer Science at the University of Bath (in the UK) and Professor of Ethics and Technology at the Hertie School in Berlin.
Her op-ed, published in Wired, is another response to the claim that Google’s LaMDA AI has achieved consciousness.
She wants to “break the mystic hold of seemingly sentient conversations” by exposing how the system works. Bryson doesn’t just mean how AI works – she means by being honest about how we operate as humans.
Human beings, she argues, are algorithmic too. “Much of our culture and intelligence works like a large language model, absorbing and recombining what we’ve heard. Then there’s the fundamental algorithm for humanity and the rest of natural intelligence: evolution.
“Evolution is the algorithm that perpetuates copies of itself. Evolution underlies our motivations. It ensures that things central to our survival — like intelligence, consciousness, and also cooperation, the very capabilities central to this debate — mean a lot to us.”
Understanding the “AI sentience” debate also requires that we talk about how we all construct individual identities. We think identity makes us different when in fact we have more in common with each other than we acknowledge. Our “unique” identity is forged in the company of others.
“Many of the ways we define our identity is through our alignment with various in-groups: our religion, our home town, our gender (or lack of gender), our job, our relative height, our relative strength or skills,” Bryson notes. “So, we are driven both to differentiate, but also to belong.”
Understanding this goes some way toward divining what makes us human, and that will help us differentiate between humans and AI in the future.
“We will still get pleasure out of singing with our friends or winning pub quizzes or local soccer matches, even if we could have done better using web search or robot players,” Bryson suggests. “These activities are how we perpetuate our communities and our interests and our species. This is how we create security, as well as comfort and engagement.”
She also makes the point that the threat of AI is most keenly felt among the cultural elite. “Sure, it is some kind of threat, at least to the global elite used to being at the pinnacle of creativity. The vast majority of humanity, though, had to get used to being less-than-best since we were in first grade.”
Even if no skills or capacities separate us from artificial intelligence, there is still a reason to fight the assessment that machines are people.
“If you attribute the same moral weight to something that can be trivially and easily digitally replicated as you do to an ape that takes decades to grow, you break everything — society, all ethics, all our values,” she argues.
Achieving this understanding without “embracing polarizing, superstitious, or machine-inclusive identities that endanger our societies” isn’t only a concern for academics, she says, but for our politics too.
“Democracy means nothing if you can buy and sell more citizens than there are humans, and if AI programs were citizens, we so easily could.”
One pathway to power may be for politicians to “encourage and prey upon” the insecurities and misconceptions around AI, just as some actors (Donald Trump, the Russian state) presently use disinformation to disrupt democracies and regulation.
“The tech industry in particular needs to prove it is on the side of the transparency and understanding that underpins liberal democracy, not secrecy and autocratic control,” she says. “Ultimately, it isn’t really likely even to be a cost burden to the corporations; systems that are transparent are easier to maintain and extend.”
The new EU AI Act, for example, demands relatively little from the developers of the vast majority of AI systems. But its most basic requirement is that AI is always identified as such: no one should think they are talking to a person when they are really talking to a machine.
How Does Generative AI Work?
By Abby Spessard
READ MORE: How do DALL-E, Midjourney, Stable Diffusion, and other forms of generative AI work? (Big Think)
Generative AI is taking the tech world by storm even as the debate about AI art rages on. “Meaningful pictures are assembled from meaningless noise” is how Tom Hartsfield, writing at Big Think, summarizes the current state of the art.
The generative model programs that power the likes of DALL-E, Midjourney and Stable Diffusion can create images almost “eerily like the work of a real person.” But do AIs truly function like a person, Hartsfield asks, and is it accurate to think of them as intelligent?
“Generative Pre-trained Transformer 3 (GPT-3) is the bleeding edge of AI technology,” he notes. Developed by OpenAI and licensed to Microsoft, GPT-3 was built to produce words. However, OpenAI adapted a version of GPT-3 to create DALL-E and DALL-E 2 through the use of diffusion modeling.
Diffusion modeling is a two-step process where AIs “ruin images, then they try to rebuild them,” as Hartsfield explains. “In the ruining sequence, each step slightly alters the image handed to it by the previous step, adding random noise in the form of scattershot meaningless pixels, then handing it off to the next step. Repeated, over and over, this causes the original image to gradually fade into static and its meaning to disappear.
“When this process is finished, the model runs it in reverse. Starting with the nearly meaningless noise, it pushes the image back through the series of sequential steps, this time attempting to reduce noise and bring back meaning.”
While the destructive part of the process is primarily mechanical, returning the image to lucidity is where training comes in. “Hundreds of billions of parameters,” including associations between images and words, are adjusted during the reverse process.
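A toy version of the “ruining” half of this recipe fits in a few lines. The sketch below is plain Python and not any real model’s code: it repeatedly mixes Gaussian noise into a list of pixel values, and the square-root weighting that keeps the overall variance stable is a simplified stand-in for the variance schedules real diffusion models use. The reverse, denoising half — where the trained network with its billions of parameters comes in — is not shown.

```python
import math
import random

def ruin_image(pixels, num_steps=1000, noise_scale=0.02, seed=0):
    """Forward diffusion, toy version: each step slightly alters the image
    handed to it by the previous step, mixing in random Gaussian noise,
    until the original structure fades into statistical static."""
    rng = random.Random(seed)
    keep = math.sqrt(1 - noise_scale)  # how much of the current image survives a step
    mix = math.sqrt(noise_scale)       # how much fresh noise is blended in
    x = list(pixels)
    for _ in range(num_steps):
        x = [keep * p + mix * rng.gauss(0.0, 1.0) for p in x]
    return x

# A flattened 32x32 "image" with obvious structure: a bright square
# (pixel value 1.0) on a dark background (0.0).
image = [1.0 if (8 <= i // 32 < 24 and 8 <= i % 32 < 24) else 0.0
         for i in range(32 * 32)]

noise = ruin_image(image)

# After 1,000 steps the bright square is gone: the output is roughly
# zero-mean, unit-variance static, whatever the input looked like.
mean = sum(noise) / len(noise)
std = math.sqrt(sum((v - mean) ** 2 for v in noise) / len(noise))
print(f"mean = {mean:.2f}, std = {std:.2f}")
```

The point the sketch makes concrete: run the loop enough times and the surviving signal fraction shrinks toward zero, so the final output no longer depends on the input, which is exactly why the reverse step must be learned rather than mechanically inverted.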
The DALL-E creators trained their model “on a giant swath of pictures, with associated meanings, culled from all over the web.” This enormous collection of data is partially why Hartsfield says DALL-E isn’t actually very much like a person at all. “Humans don’t learn or create in this way. We don’t take in sensory data of the world and then reduce it to random noise; we also don’t create new things by starting with total randomness and then de-noising it.”
So is generative AI intelligent in some other, non-human way? “A better intuitive understanding of current generative model AI programs may be to think of them as extraordinarily capable idiot mimics,” Hartsfield clarifies.
As an analogy, Hartsfield compares DALL-E to an artist, “who lives his whole life in a gray, windowless room. You show him millions of landscape paintings with the names of the colors and subjects attached. Then you give him paint with color labels and ask him to match the colors and to make patterns statistically mimicking the subject labels. He makes millions of random paintings, comparing each one to a real landscape, and then alters his technique until they start to look realistic. However, he could not tell you one thing about what a real landscape is.”
Whatever your stance is on generative AI, we’ve landed in a new era, one in which computers can generate fake images and text that are extremely convincing. “While the machinations are lifeless, the result looks like something more. We’ll see whether DALL-E and other generative models evolve into something with a deeper sort of intelligence, or if they can only be the world’s greatest idiot mimics.”
“Complying with this law [the EU AI Act] may finally get companies like Google to behave as seriously as they always should have been — with great transparency and world-class devops,” Bryson says. “Rather than seeking special exemptions from EU transparency laws, Google and others should be demonstrating — and selling — good practice in intelligent software development.”
Helping us accept who we really are and how we work, without losing engagement with our lives, is, for Bryson, an enormous extended project for humanity.