As artificial intelligence gains sophistication and penetrates further into our daily lives, questions need to be asked about controlling its power. These questions aren't new and in many ways are an extension of the classic Three Laws of Robotics devised by science fiction author Isaac Asimov eighty years ago. You don't have to be a Skynet technophobe to join the conversation.
Consider the following ethical challenges around AI as neatly outlined by futurist and “influencer” Bernard Marr.
AIs are trained on data, and we need to be aware of the potential for bias in that raw input.
“When we train our AI algorithms to recognize facial features using a database that doesn’t include the right balance of faces, the algorithm won’t work as well on non-white faces, creating a built-in bias that can have a huge impact,” Marr says.
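A first step toward catching the kind of imbalance Marr describes is simply auditing how a training set's demographic labels are distributed before any model is trained. The sketch below is a minimal illustration, not any specific product's pipeline; the label names and the example dataset are hypothetical.

```python
from collections import Counter

def group_balance(labels):
    """Return each group's share of a dataset's labels.

    A heavily skewed distribution is an early warning that a model
    trained on this data may underperform on under-represented groups.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical demographic tags for a small face dataset
labels = ["white"] * 80 + ["black"] * 10 + ["asian"] * 10
shares = group_balance(labels)
print(shares)  # {'white': 0.8, 'black': 0.1, 'asian': 0.1}
```

An 80/10/10 split like this one would flag exactly the problem Marr warns about: the algorithm sees far more of one group than the others.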
Control and the Morality of AI
The increasing use of AIs to make split-second decisions should be a cause for concern. Automatically generating a goal highlight to churn out to social media is one thing; having your car react when a child runs out in front of it at 40 mph is another.
“It’s important that the AI is in control of the situation,” Marr writes, adding, “This creates interesting ethical challenges around AI and control.”
We need data to train AIs, but where does this data come from, and how do we use it? Marr cites Mattel, whose Barbie now comes in an AI-enabled version that children can speak to. "What does this mean in terms of ethics? There is an algorithm that is collecting data from your child's conversations with this toy. Where is this data going, and how is it being used?"
This clearly speaks to a need to check the power of big corporations with stricter rules around data collection, transparency of use, and legal protection.
Marr extends this idea of power balance, and the dangerous lack of it, to governments (and therefore the military-industrial complex, so yes, Skynet).
“How do we make sure the monopolies we’re generating are distributing wealth equally and that we don’t have a few countries that race ahead of the rest of the world?” he asks. “Balancing that power is a serious challenge in the world of AI.”
Of immediate concern to anyone in media should be due protection of intellectual property. If an AI is trained on data, that data will likely originate from a human source, so to what extent should those creators' rights be protected, and how should they be recompensed?
Blockchain is the likely solution here as a means of tracking an IP asset as it is parsed at lightspeed across the internet. But this field is nascent.
Marr suggests that training an AI can create 17 times more carbon emissions than the average American generates in a year. That's a startling stat, and of course it extends to our daily use of the internet. Every single email and internet search turns the gears (power, water, heat) in a data farm somewhere. It's not in some ethereal cloud; the impact is real.
“How can we use this energy for the highest good and use AI to solve some of the world’s biggest and most pressing problems? If we are only using artificial intelligence because we can, we might have to reconsider our choices.”
Marr's final challenge is "How does AI make us feel as humans?" As AI automates more of our jobs, what will our contribution be, as human beings? Even if AI augments jobs more than it replaces them, Marr says, "We need to get better at working alongside smart machines so we can manage the transition with dignity and respect for people and technology."
It's clear that the discussion around the ethics of AI is actually about the morality and ethics of us as a species. The challenge is not only how we impose or insert that morality and ethics into a machine, but whether we can.