TL;DR
- Russell Wald, director of policy for Stanford’s Institute for Human-Centered AI, argues for regulation that recognizes both the unique benefits AI offers humanity and the very serious dangers it poses.
- Beyond regulation, Wald thinks the US needs a national AI strategy: policymakers educated about the issues, greater transparency around foundation models, a seat at the policy table for academics and civil society leaders, and an emphasis on STEM skills to build the workforce.
- He looks to lawmakers in the EU and the UK for guidance on how the US should police AI at home.
It is urgent that we regulate synthetic media and deepfakes before they undermine our faith in the truth, says Russell Wald, the director of policy for Stanford’s Institute for Human-Centered Artificial Intelligence.
“I’m concerned about synthetic media because of what will ultimately happen to society if no one has any confidence in the veracity of what they’re seeing,” he says in an interview with Eliza Strickland at IEEE Spectrum about creating regulations that can cope with the rapidly evolving technology.
“You’re not going to be able to necessarily stop the creation of a lot of synthetic media, but at a minimum you can stop the amplification of it, or at least put on some level of disclosure: something that signals that it may not in reality be what it says it is,” he says.
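Wald doesn’t name a specific mechanism, but the amplification gate he alludes to can be pictured as a simple platform-side check: if a piece of media carries a machine-readable synthetic-content disclosure, surface a label; if it carries none but is suspected to be synthetic, limit its spread. The Python sketch below is purely illustrative; the `MediaItem` fields and the policy outcomes are assumptions, not any real platform’s API or any standard’s schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical media record; these fields are illustrative assumptions,
# not any real platform's or standard's schema.
@dataclass
class MediaItem:
    url: str
    disclosed_synthetic: Optional[bool]  # None = no machine-readable disclosure present
    suspected_synthetic: bool            # e.g., the verdict of an upstream detector

def amplification_policy(item: MediaItem) -> str:
    """Sketch of the two remedies Wald mentions: disclosure, or limits on spread."""
    if item.disclosed_synthetic:
        return "allow, with a visible 'synthetic media' label"
    if item.disclosed_synthetic is None and item.suspected_synthetic:
        return "limit amplification pending review"
    return "allow"

print(amplification_policy(MediaItem("https://example.com/clip", True, False)))
print(amplification_policy(MediaItem("https://example.com/clip2", None, True)))
```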
The other area Wald thinks would help overall regulation is greater transparency around foundation models and the data behind them.
“There’s just so much data that’s been hoovered up into these models, [but] what’s going into them? What’s the architecture of the compute? Because at least if you are seeing harms come out at the back end, by having a degree of transparency, you’re going to be able to [identify the cause].”
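Wald doesn’t specify a format, but the kind of transparency he describes could be pictured as a structured disclosure published alongside a foundation model, so that harms seen at the back end can be traced to a likely cause. The following sketch is hypothetical; the `FoundationModelDisclosure` class and its fields are illustrative assumptions, not an existing standard or model-card specification.

```python
from dataclasses import dataclass, field

# Purely illustrative "disclosure card" of the kind Wald's transparency
# argument implies; the structure and field names are assumptions, not an
# existing standard.
@dataclass
class FoundationModelDisclosure:
    model_name: str
    data_sources: list[str]    # what data was "hoovered up" into the model
    compute_architecture: str  # "the architecture of the compute"
    known_limitations: list[str] = field(default_factory=list)

    def summary(self) -> str:
        return (
            f"{self.model_name}: {len(self.data_sources)} disclosed data sources; "
            f"compute: {self.compute_architecture}; "
            f"{len(self.known_limitations)} documented limitations."
        )

# Example with made-up values:
card = FoundationModelDisclosure(
    model_name="example-lm",
    data_sources=["filtered web crawl", "licensed news archive"],
    compute_architecture="1,024-GPU cluster, 70B-parameter transformer",
    known_limitations=["may reproduce biases present in web text"],
)
print(card.summary())
```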
Of calls for regulation coming from AI developers themselves, Wald is scathing: “For them, it really comes down to whether they’d rather work now to help create some of those regulations versus facing reactive regulation later. It’s an easier pill to swallow if they can try to shape this at this point.”
What he would really like to see is greater diversity of viewpoints in the discussions and decision-making process, not just from those in the tech industry but also from academics like himself and from lawmakers.
“Others need to have a seat at the table. Academia, civil society, people who are really taking the time to study what is the most effective regulation that still will hold industry’s feet to the fire but allow them to innovate?”
This would mitigate the risk of inherent bias in the algorithms on which decisions in judicial, legal, or medical contexts might be based.
Like many academics with knowledge of the subject, Wald calls for a balanced approach. AI does have significant upside for humans as a species, he says, pointing to its unprecedented ability to sift through and test data to find solutions for diseases.
“At the same time, there’s the negative that I am truly concerned about in terms of existential risk. And that is where the human comes into play with this technology. Synthetic biology, for instance, could create agents that we cannot control. And there can be a lab leak or something that could be really terrible.”
Having given a précis of what is wrong, Wald turns to potential solutions by which we might regulate our way out of disaster. His approach is multi-pronged.
“First, I think we need more of a national strategy, part of which is ensuring that we have policymakers as informed as possible. I spend a lot of time in briefings with policymakers and you can tell the interest is growing, but we need more formalized ways of making sure that they understand all of the nuances here,” he says.
“The second part is we need infrastructure. We absolutely need a degree of infrastructure that ensures we have a wider degree of people at the table. The third part of this is talent. We’ve got to recruit talent, and that means we need to really look at STEM immigration and see what we can do, because within the US the visa hurdles are just too terrible for students who want to stay here. They pick up and go, for example, to Canada. We need to expand programs like the Intergovernmental Personnel Act that can allow people who are in academia or other nonprofit research to go in and out of government and inform governments so that they’re more clear on this.”
The final piece of Wald’s argument is to adopt regulation in a systematic way. For this, he looks to the European Union, one of the most advanced jurisdictions in formulating an AI Act, though the act is not expected to be ratified for at least another year.
“Sometimes I think that Europe can be that good side of our conscience and force the rest of the world to think about these things. This is the Brussels effect: the concept that Europe has such a large market share that it’s able to force through its rules and regulations, which are among the most stringent, and they become the model for the rest of the world.”
He identifies the UK’s approach to AI regulation as a potential model to follow because it seems to be more balanced in favor of innovation.
“The Brits have a proposal for an exascale computing system [to] double down on the innovation side and, where possible, do a regulatory side, because they really want to see themselves as the leader. I think Europe might need to look into, as much as possible, fostering an environment that will allow for that same level of innovation.”
Wald’s concern that regulation will stifle innovation is not about protecting the larger companies, who can look after themselves, he says, but the smaller players, who might not be able to continue if the law is too stringent.
“The general public should be aware that what we’re starting to see is the tip of the iceberg,” he warns. “There’s been a lot of things that have been in labs, and I think there’s going to be just a whole lot more coming.
“I think we need to have a neutral view of saying there are some unique benefits of AI for humanity, but at the same time there are some very serious dangers. So the question is: how can we police that process?”