TL;DR
- We’re not going to fully understand the potential and the risks of AI without having individual users play around with it en masse.
- The MIT Technology Review asks six unresolved questions to bear in mind as we watch the generative AI revolution unfold in 2024.
- In 2024, new forms of misuse will continue to surface. There will be a few standout examples, possibly involving electoral manipulation.
READ MORE: Generative AI took the world by storm in 2023. Its future—and ours—will be shaped by what we do next. (MIT Technology Review)
“People are reluctant to imagine what could be the future in 10 years, because no one wants to look foolish,” says Alison Smith, head of generative AI at tech consultancy Booz Allen Hamilton. “But I think it’s going to be something wildly beyond our expectations.”
She was talking to the MIT Technology Review, where Will Douglas Heaven provides a status report on GenAI and predictions for its evolution in 2024 by way of six pertinent questions.
Read the highlights of these six as-yet unresolved questions below:
1. Will we ever mitigate the bias problem?
Bias has become a byword for AI-related harms, for good reason. Real-world data, especially text and images scraped from the internet, is riddled with it, from gender stereotypes to racial discrimination. Models trained on that data encode those biases and then reinforce them wherever they are used.
Developers of AI tools, such as Stability AI and OpenAI, are trying to fix the problem in newer versions of their models. Critics say this won’t solve deep-seated issues with the source data.
The MIT Technology Review predicts that bias will remain an inherent feature of most generative AI models, but workarounds and rising awareness could help policymakers address the most obvious examples.
2. How will AI change the way we apply copyright?
There are dozens of class action lawsuits against OpenAI, Microsoft, and others, claiming copyright infringement. Getty, for example, is suing Stability AI, the firm behind the image maker Stable Diffusion. Celebrity claimants such as Sarah Silverman and George R.R. Martin have drawn media attention.
But don’t hold your breath. It will be years before the courts make their final decisions, says Katie Gardner, a partner specializing in intellectual-property licensing at Gunderson Dettmer.
By that point, she says, “the technology will be so entrenched in the economy that it’s not going to be undone.”
In the meantime, the tech industry is building on these alleged infringements at breakneck pace. “I don’t expect companies will wait and see,” says Gardner. “There may be some legal risks, but there are so many other risks with not keeping up.”
Some companies have taken steps to limit the possibility of infringement. Google, Microsoft, and OpenAI now offer to protect users of their models from potential legal action.
“We’ll take that burden on so the users of our products don’t have to worry about it,” Microsoft CEO Satya Nadella told the MIT Technology Review.
Also, new kinds of licensing deals are popping up. For example, Shutterstock has signed a six-year deal with OpenAI for the use of its images.
Douglas Heaven says that high-profile lawsuits won’t stop companies from building on generative models. “New marketplaces will spring up around ethical data sets, and a cat-and-mouse game between companies and creators will develop.”
3. How will it change our jobs?
The fear of AI taking our jobs still seems a little distant, but that didn’t stop Hollywood’s writers from striking for safeguards from their employers this year.
Many researchers deny that the performance of large language models is evidence of true intelligence, and they point out that there’s a lot more to most professional roles than the tasks those models can currently do.
Yet many businesses are already using large language models for research. Handing over grunt work to machines lets people focus on more fulfilling parts of their jobs. The tech also seems to level out skills across a workforce: early studies suggest that less experienced people get a bigger boost from using AI.
“But change is always painful, and net gains can hide individual losses. Technological upheaval also tends to concentrate wealth and power, fueling inequality,” says Douglas Heaven.
Fears of mass job losses will prove exaggerated, says the MIT Technology Review, but generative tools will continue to proliferate in the workplace. Roles may change and new skills may need to be learned.
4. What misinformation will it make possible?
The Biden administration made labeling and detection of AI-generated content a focus of its executive order on AI in October. But the order fell short of legally requiring tool makers to label text or images as the creations of an AI.
The European Union’s AI Act, agreed upon in December, goes further. Part of the sweeping legislation requires companies to watermark AI-generated text, images, or video, and to make it clear to people when they are interacting with a chatbot. And the AI Act has teeth: the rules will be binding and come with steep fines for noncompliance.
The US has also said it will audit any AI that might pose threats to national security, including election interference.
But here’s the catch: it’s impossible to know all the ways a technology will be misused until it happens.
The MIT Technology Review predicts: “New forms of misuse will continue to surface as use ramps up. There will be a few standout examples, possibly involving electoral manipulation.”
5. Will we come to grips with its costs?
The development costs of GenAI, both human and environmental, are also to be reckoned with. According to the MIT Technology Review, the “invisible-worker” problem is an open secret: “We are spared the worst of what generative models can produce thanks in part to crowds of hidden (often poorly paid) laborers who tag training data and weed out toxic, sometimes traumatic, output during testing. These are the sweatshops of the data age.”
It’s to be hoped that as the human costs come into sharper focus, developers will be pressured to address the issue.
The other major cost, the amount of energy required to train large generative models, is set to climb before the situation gets better. In August, NVIDIA reported Q2 2024 revenue of more than $13.5 billion, twice as much as in the same period the year before. The bulk of that revenue ($10.3 billion) came from data centers — in other words, other firms using NVIDIA’s hardware to train AI models.
“The demand is pretty extraordinary,” says NVIDIA CEO Jensen Huang. He acknowledges the energy problem and predicts that the boom could even drive a change in the type of computing hardware deployed. “The vast majority of the world’s computing infrastructure will have to be energy efficient,” he says.
But don’t expect significant improvement on either front soon, says the MIT Technology Review.
6. Will doomerism continue to dominate policymaking?
Doomerism — the fear that the creation of smart machines could have disastrous, even apocalyptic consequences — has long been an undercurrent in AI.
OpenAI CEO Sam Altman, among others, has suggested that AI systems should have safeguards similar to those used for nuclear weapons.
Others call this fearmongering that risks stifling innovation. The debate also channels resources and researchers away from more immediate risks, such as bias, job upheavals, and misinformation.
“Some people push existential risk because they think it will benefit their own company,” François Chollet, an AI researcher at Google, tells the MIT Technology Review. “Talking about existential risk both highlights how ethically aware and responsible you are and distracts from more realistic and pressing issues.”
The MIT Technology Review predicts that the fearmongering will die down, but the influence on policymakers’ agendas may be felt for some time.
Indeed, as Douglas Heaven notes, some of the same people ringing the alarm are also raising millions of dollars in investment and making heaps of money for themselves.
Is doomerism in fact a fundraising strategy?