Artificial intelligence is increasingly all around us, but the inventor of one of our most ubiquitous household robots has a message for fellow humans: We’re still needed for these systems to get things right.
Rodney Brooks notes that when he developed the Roomba in 2002, the worst-case scenario was that the self-guided vacuum might miss some dust balls. A few years later, robots were detecting and disabling IEDs in Iraq and Afghanistan with lives at stake.
“Regardless of what you might think about AI, the reality is that just about every successful deployment has either one of two expedients: It has a person in the loop, or the cost of failure, should the system blunder, is very low,” Brooks says in the October 2021 issue of IEEE Spectrum, the journal of the Institute of Electrical and Electronics Engineers.
The evidence of AI trying, and failing, to get it right is all around us, from bots writing Netflix shows (see below) to false arrests based on faulty facial-recognition matches.
Even self-driving cars with years of development still need a hand on the wheel, Brooks argues.
He makes the points in “An Inconvenient Truth About AI – AI won’t surpass human intelligence anytime soon,” part of IEEE Spectrum’s special report, “The Great AI Reckoning.”
AI is in its third wave of major investment, Brooks says. Big promises have been coming since the 1960s, along with periodic forecasts that machines would soon overtake humans at whatever task was in fashion.
The first published discussions of AI came around 1950. The ELIZA chatbot, which simulated human conversation by matching patterns in the words users typed, hit the streets in 1966.
Brooks takes a look at this human-machine interaction — our sound systems and car controls learning to understand us.
READ MORE: The Great AI Reckoning (IEEE Spectrum)
“We, the consumers, soon adapt our language to each such AI agent, quickly learning what they can and can’t understand, in much the same way as we might with our children and elderly parents,” he says. “The AI agents are cleverly designed to give us just enough feedback on what they’ve heard us say without getting too tedious, while letting us know about anything important that may need to be corrected. Here, we, the users, are the people in the loop. The ghost in the machine, if you will.
“Ask not what your AI system can do for you, but instead what it has tricked you into doing for it,” Brooks says.
Head over to IEEE Spectrum to read the full story.