2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.
We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.
Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder
2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.
One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so — in ways that often do more harm than good.
However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations — and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.
So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
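Weizenbaum’s point is easy to demonstrate. What follows is a minimal sketch, in Python, of the kind of pattern matching and substitution ELIZA relied on; the rules and word list here are illustrative stand-ins, not Weizenbaum’s original script.

```python
import re

# A handful of ELIZA-style rules: match a pattern, echo part of the input back.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all when nothing else matches
]

# Swap first-person words for second-person ones so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about AI."))  # How long have you been worried about ai?
```

A few dozen lines like these can sustain a surprisingly convincing conversation, and seeing them is usually enough to make the magic crumble away. There is no comparably short walkthrough of the billions of learned parameters inside a modern language model.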
I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.
Kentaro Toyama, Professor of Community Information, University of Michigan
In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” The singularity, the moment artificial intelligence matches and begins to exceed human intelligence, is not quite here yet, so it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.
Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.
The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning — what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.
Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire — comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.
Speaking of problems, the very people sounding the loudest alarms about AI — like Elon Musk and Sam Altman — can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 — though it seems slow in coming — is stronger AI regulation, at national and international levels.
Anjana Susarla, Professor of Information Systems, Michigan State University
In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year ago, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.
Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents — a world that society is not necessarily prepared for.
These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.
The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.
The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy and Protection Act.
A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come not to focus on algorithms as pieces of technology but to consider the contexts they operate in: people, processes and society.
Anjana Susarla, Professor of Information Systems, Michigan State University; Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder, and Kentaro Toyama, Professor of Community Information, University of Michigan
This article is republished from The Conversation under a Creative Commons license. Read the original article.
A senior scientist at IBM has likened the breakthrough year of generative AI to the appearance of the first web browsers, but, just as with the internet, power and wealth will accrue to only a few companies and people.
Darío Gil, SVP and director of Research at IBM, said that at the birth of the Web browser “people imagined experiencing the internet. All of a sudden there was this network of machines, and content can be distributed, and everybody can self-publish.”
Speaking in a special live episode of Smart Talks with IBM, Gil said society finds itself at a similar “significant inflection point.”
AI’s Tipping Point
Though perhaps it should be termed a “tipping point,” a phrase popularized by Gil’s onstage interlocutor, the psychologist and cultural commentator Malcolm Gladwell.
“Previously, AI systems were behind the scenes, [computing] your search results, or translation systems,” Gil said. “Now people can touch it. Fundamentally, this moment is where the number of people who can build and use AI has skyrocketed.”
Gladwell asked the computer scientist to place AI on a list of the biggest technological achievements of the last 75 years.
Gil said he would put it first.
“Since World War II, undoubtedly, computing has reshaped our world. And within computing, I would say, the role that semiconductors have had has been incredibly defining. I would say AI is the second example of that as a core architecture that is going to have an equivalent level of impact.”
The third leg will be quantum information, he said. “I like to summarize that the future of computing is bits, neurons, and qubits. It’s that idea of high-precision computation — the world of neural networks and AI and quantum. The combination of those things is going to be the defining force of the next 100 years in computing.”
Leverage AI — and Add Value
However, the distribution of power when it comes to AI will not be equal, Gil predicted.
“It will likely be true that the use of AI will be highly democratized, meaning the number of people who have access to its power to make improvements in terms of efficiency and so on will be fairly universal, and also that the ones who are able to create AI may be quite concentrated.”
Gil elaborated, “If you look at it from the lens of who creates wealth and value over sustained periods of time, then just being a user of AI technology is an insufficient strategy.
“And the reason for that is you will get the immediate productivity boost of AI, which will be a new baseline for everybody. But you’re not accruing value in terms of representing your data inside the AI in a way that gives you a sustainable competitive advantage.
“So what I always try to tell people is, don’t just be an AI user; be an AI value creator.”
As a business, he warned, “It would be kind of a mistake to develop strategies that are just about usage. There will be a lot of consequences in terms of the haves and have-nots that will apply to institutions and regions and countries.”
At the same time, predictions about how, where and to what extent AI will have an impact are tricky because of the way AI draws its information from across systems and networks.
“In this very high-dimensional representation of information that is present in neural networks, we may find amazing adjacencies or connections of themes and topics in ways that the individual practitioners cannot describe.”
We are going to suffer from not knowing the root cause of something impacted by AI, he argued.
“One of the unsatisfying aspects [of AI] is that it may give you answers but not give you good reasons for where the answers came from.”
Gladwell drew implications from this for how educational and medical institutions would have to change their learning methodology. He said he would encourage institutions not to teach how to use individual AI tools but to embed AI across the curriculum.
“Understanding how we do problem solving in the age of data and data representation means it needs to be embedded in the curriculum of everybody and certainly in the context of medicine and scientists.”
Students entering college are going to know more about AI than the academics do, he said. “That alone is a Herculean people problem.”
What Hollywood Studios Don’t Understand
Institutions of all sorts will have to be at the forefront of integration in order to unlock the full power of AI thoughtfully and responsibly, Gladwell said. “Even Hollywood is being forced to figure this out.”
The writer popularized the idea that if you practice at something for 10,000 hours you will achieve world-class expertise in any skill. But with AI on tap to automate and radically speed up the process of achieving goals, where does this leave the creative process?
It is the studios that should be concerned for their future, not so much the writers and actors, Gladwell said.
“In the strikes, the frightened ones were the writers and not the studios. Wasn’t that backwards? It should be the studios who are worried about the economic impact of AI. Doesn’t AI, in the long run, put the studios out of business long before it puts the writers out of business?
“I only need the studio because the costs of production are as high as the sky and the costs of production are overwhelming.
“Whereas if I have a tool which introduces massive technological efficiencies to the production of movies, then why do I need a studio?”