Call it evolution if you like, but everyone has to get over the hurdle of thinking that there is something unique about Homo sapiens — creativity, dexterity, empathy perhaps — that differentiates us from machines.
“The key to understanding both the inaccurate embracing of machines and the over-dismissal of AI capacities is to see the limits of the divide between human nature and digital algorithmic control,” says Joanna J. Bryson, a Professor of Computer Science at the University of Bath (in the UK) and Professor of Ethics and Technology at the Hertie School in Berlin.
Her op-ed, published in Wired, is another response to the claim that Google’s LaMDA AI has achieved consciousness.
She wants to “break the mystic hold of seemingly sentient conversations” by exposing how the system works. By that, Bryson doesn’t just mean explaining how AI works; she means being honest about how we operate as humans.
Human beings, she argues, are algorithmic too. “Much of our culture and intelligence works like a large language model, absorbing and recombining what we’ve heard. Then there’s the fundamental algorithm for humanity and the rest of natural intelligence: evolution.
“Evolution is the algorithm that perpetuates copies of itself. Evolution underlies our motivations. It ensures that things central to our survival — like intelligence, consciousness, and also cooperation, the very capabilities central to this debate — mean a lot to us.”
Understanding the “AI sentience” debate also requires that we talk about how we all construct individual identities. We think identity makes us different, when in fact we have more in common with each other than we acknowledge. Our “unique” ID is forged in the company of others.
“Many of the ways we define our identity is through our alignment with various in-groups: our religion, our home town, our gender (or lack of gender), our job, our relative height, our relative strength or skills,” Bryson notes. “So, we are driven both to differentiate, but also to belong.”
Understanding this goes some way toward divining what makes us human, and that will help us differentiate between human and AI in the future.
“We will still get pleasure out of singing with our friends or winning pub quizzes or local soccer matches, even if we could have done better using web search or robot players,” Bryson suggests. “These activities are how we perpetuate our communities and our interests and our species. This is how we create security, as well as comfort and engagement.”
She also makes the point that the threat of AI is most keenly felt among the cultural elite. “Sure, it is some kind of threat, at least to the global elite used to being at the pinnacle of creativity. The vast majority of humanity, though, had to get used to being less-than-best since we were in first grade.”
Even if no skills or capacities separate us from artificial intelligence, there is still a reason to fight the assessment that machines are people.
“If you attribute the same moral weight to something that can be trivially and easily digitally replicated as you do to an ape that takes decades to grow, you break everything — society, all ethics, all our values,” she argues.
Achieving this understanding without “embracing polarizing, superstitious, or machine-inclusive identities that endanger our societies” isn’t only a concern for academics, she says, but for our politics too.
“Democracy means nothing if you can buy and sell more citizens than there are humans, and if AI programs were citizens, we so easily could.”
One pathway to power may be for politicians to “encourage and prey upon” the insecurities and misconceptions around AI, just as some actors — Trump, the Russian state — presently use disinformation to disrupt democracies and regulation.
“The tech industry in particular needs to prove it is on the side of the transparency and understanding that underpins liberal democracy, not secrecy and autocratic control,” she says. “Ultimately, it isn’t really likely even to be a cost burden to the corporations; systems that are transparent are easier to maintain and extend.”
The new EU AI Act, for example, demands relatively little from the developers of the vast majority of AI systems. But its most basic requirement is that AI is always identified as such, so that no one thinks they are talking to a person when they are really talking to a machine.
“Complying with this law may finally get companies like Google to behave as seriously as they always should have been — with great transparency and world-class devops,” Bryson says. “Rather than seeking special exemptions from EU transparency laws, Google and others should be demonstrating — and selling — good practice in intelligent software development.”
Helping us accept who we really are and how we work, without losing engagement with our lives, is, for Bryson, an enormous extended project for humanity.