TL;DR
- To understand the strengths and limitations of artificial intelligence we may need to adopt a new perspective, argues Jaron Lanier, a tech guru who currently works for Microsoft.
- He suggests that even words like “intelligence” and “training” tell us nothing about the technology’s weaknesses.
- Lanier’s essay coincides with Elon Musk’s apparently irony-free bid to sue OpenAI for putting profit before humanity.
READ MORE: How to Picture A.I. (The New Yorker)
“If we can’t understand how a technology works, we risk succumbing to magical thinking,” says Jaron Lanier in his latest contribution to the debate on AI.
“Is there a way to explain AI that isn’t in terms suggesting human obsolescence or replacement? If we can talk about our technology in a different way, maybe a better path to bringing it into society will appear.”
This is the week in which Elon Musk filed a lawsuit against OpenAI, accusing it of putting profit before the good of humanity as it proceeds full steam toward next-level Artificial General Intelligence.
READ MORE: Elon Musk sues OpenAI accusing it of putting profit before humanity (The Guardian)
Since Microsoft has invested heavily in OpenAI’s pursuit of exactly that goal, Lanier’s intervention could be read as an attack on Musk for raising the “threat” levels of AI, but Lanier is too smart and liberal an operator for that.
He also wants all AI leaders to open the black box and show us what is inside.
Lanier attempts to do this in his essay, published in The New Yorker. He is alarmed by the fever-pitched discussion of AI, in which the loudest voices come from the extremes: doomsayers convinced the technology will inevitably bring about a human apocalypse, and optimists who believe humans will always remain masters of their own destiny, evolving alongside AI as a force for overall good.
Some hold both positions.
“I have trouble understanding why some of my colleagues say that what they are doing might lead to human extinction, and yet argue that it is still worth doing,” Lanier writes. “It is hard to comprehend this way of talking without wondering whether AI is becoming a new kind of religion.”
Lanier advocates a third way, a middle way, which he hopes offers an alternative to the view that AI does nothing but regurgitate — while also communicating skepticism about whether AI will become a transcendent, unlimited form of intelligence.
He thinks that we should start by demystifying what AI is.
“We usually prefer to treat AI systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically,” he argues.
He continues, “One problem with the usual anthropomorphic narratives about AI is that they don’t nurture our intuitions about its weaknesses. As a result, our discussions about the technology tend to involve confrontations between extremes: there are enthusiasts who think that we’re building a cosmically big brain that will solve all our problems or wipe us out, and skeptics who don’t see much value in AI.”
He takes issue with the term “artificial intelligence,” suggesting it perpetuates the idea that we are making new creatures instead of new tools. “This notion is furthered by biological terms like ‘neurons’ and ‘neural networks,’ and by anthropomorphizing ones like ‘learning’ or ‘training,’ which computer scientists use all the time.”
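To see how little biology sits behind that vocabulary, consider a minimal sketch, not drawn from Lanier’s essay, of a single artificial “neuron.” All the values here are illustrative assumptions rather than any real system’s internals, but the point stands: the “neuron” is a weighted sum passed through a simple function, and “training” just means nudging the numbers.

```python
# A single artificial "neuron" is plain arithmetic: multiply inputs by
# weights, add a bias, squash the result. No biology involved.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs (the whole "neural" metaphor).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # "Activation": clamp negatives to zero (the ReLU function).
    return max(0.0, total)

# Illustrative numbers only; "training" means adjusting weights and bias
# until outputs like this one match the desired answers.
print(neuron([0.5, -1.0, 2.0], weights=[0.1, 0.4, 0.2], bias=0.05))
```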
It’s also a problem that “AI” has no fixed definition.
“It’s always possible to dismiss any specific commentary about AI for not addressing some other potential definition of it,” he says.
The lack of mooring for the term coincides with a “metaphysical sensibility” according to which the human framework will soon be transcended.
In an earlier essay he discussed reconsidering AI as a form of human collaboration. Here he deconstructs how AI works for the layman.
READ MORE: Jaron Lanier: Is Data Dignity the Answer for Regaining “Control” of AI? (NAB Amplify)
“Most non-technical people can comprehend a thorny abstraction better once it’s been broken into concrete pieces you can tell stories about, but that can be a hard sell in the computer-science world,” he says.
The science-fiction writer Arthur C. Clarke famously stated that any sufficiently advanced technology is indistinguishable from magic. Lanier says this is only true when the technology is not explained well enough.
He adds, “It is the responsibility of technologists to make sure their offerings are not taken as magic.”