The Future of AI Isn’t What You Think
What can science fiction and history tell us about where AI is headed?
The hype cycle around artificial intelligence — at least the large language model version of AI — has moved remarkably fast. There was an initial burst of optimism after the release of ChatGPT, when the tool seemed amazing and destined to change everything forever. Now we've settled into a period in which, under closer scrutiny, the tool seems not to be very good at what it purports to do.
We seem to be in kind of a weird place with AI. On the one hand, it feels like the Next Big Thing. Tons of money is pouring into AI, and everybody from Snapchat to Microsoft is scrambling to incorporate AI chatbots into their platforms. On the other hand, even carefully planned corporate demonstrations of new AI technology have revealed it to be far from perfect — both Microsoft’s and Google’s AI platforms made embarrassing factual errors in their big, flashy demos.
Thanks for reading George Dillard! Subscribe for free to receive new posts and support my work.
So where is this headed? I can see four possible futures, from apocalyptic to utopian. I’ll describe them in order, from what I think is least likely to most likely.
The apocalypse
It's reasonable to worry about the negative impacts of AI. It's kind of fun to worry about it, too — some of my favorite science fiction is about artificial intelligence becoming self-aware and doing terrible things to the humans who created it.
There are all sorts of ways that AI could go wrong. The classic sci-fi version is that it becomes sentient and, out of self-preservation or a misguided adherence to its programmed mission, decides to start doing terrible things to human beings. In this scenario, we’re all headed for the world of 2001, or the Matrix and Terminator films.
There are other, subtler versions of this, as well. What if we become dependent on AI for national defense? After all, an AI would be able to react much more quickly than humans in case of an attack, especially if the opposition was also using AI to command its forces. It’s not too hard to imagine an AI making a catastrophic mistake and causing World War III. After all, there are several stories from the Cold War in which officials had to make difficult snap judgments based on gut instinct. The humans got them right and avoided nuclear war. Would an AI?
But how likely is this? Various AI futurologists have tried to put precise numbers on it — one guy says the chances of AI apocalypse are a very precise .0144%! — but it might be worth looking at history for some context.
When was the last time a new technology we feared would ruin everything actually did? The development of nuclear weapons and the industrial revolution both inspired apocalyptic worries (and their own sci-fi freakouts: Godzilla, Frankenstein, and so on) that never came to fruition. I wouldn't be surprised if AI turns out similarly. The worst-case scenarios are worth taking seriously, but I don't see them as particularly likely.
Utopia
The economist John Maynard Keynes famously described a wonderful future in his 1930 essay "Economic Possibilities for Our Grandchildren." He predicted that machines would soon become advanced enough to take over much of human work. The machines would provide us with abundance and leisure. We would be able to meet our needs with a minimum of labor — he predicted 15 hours a week — leaving plenty of time for family and recreation.
Keynes was a smart guy, but he was very wrong about this one. It turns out that humans aren’t satisfied with earning enough; we’re ravenously acquisitive. And we’ve created cultures around work that make us feel ashamed unless we’re spending much of our time doing things that seem economically productive.
Now, we see some similar predictions about AI. Proponents say it will replace millions, maybe billions of jobs. It will allow humans to enjoy abundance without having to work 40 hours a week. It will finally create the reality that Keynes dreamed of (or, if you prefer science fiction, the world of Star Trek, where people work for personal satisfaction but their physical needs are met by advanced technology).
That would certainly be nice, but don’t hold your breath. Though new technologies have transformed the global economy many times, they’ve never really created lives of leisure and freedom for most people. I wouldn’t bet on it this time, either.
A dud
Now that we've dealt with the negative and positive extremes, let's dig into the scenarios I think are actually more likely. The first is that AI will simply suck as a technology.
There are two ways to interpret the rough start that these early AIs have had — you know, the fact that search chatbots are wrong all the time, the fact that reporters can get them to reveal their dark fantasies, and the fact that much of their output is bullshit. The more hopeful frame is that these are growing pains. They’re solvable problems, and the industry will soon have those issues all patched up.
But there’s another possibility: that AI, at least the large language model version of it (which is simply a very powerful assimilator and predictor of content) might never be very good.
Maybe AI will be like Google Glass or jetpacks or self-driving cars: a technology that seems like a world-changer but can never really be made to work well in the real world.
Years ago, I hoped that self-driving cars would be here before my kids reached driving age. I fantasized about being able to put my kids in a safe, automated car and send them off to a friend’s house. I wouldn’t have to worry about them being shaky teen drivers piloting thousands of pounds of metal on dangerous roads.
It hasn't quite worked out that way; my youngest just got her license, and truly autonomous self-driving cars seem just as far away as they did a decade ago. It turns out that, while self-driving vehicles can work in theory or under very controlled conditions, the real world is too complex and unpredictable a place for them. Car companies are now backing off their once-rosy predictions of a self-driving future. As Darrell Etherington writes,
"in terms of the daily lived experience of most people reading this, truly autonomous vehicles just aren't going to happen. The evidence pointing to this has been mounting for years now, if not decades, but it's now tipped the balance to where it's hard to ignore for a reasoned observer — even one like myself who has previously been very optimistic about self-driving prospects."
What if AI is like the self-driving car? Full of theoretical promise but unlikely to be able to handle real-world scenarios well enough for us to actually trust it? What if we soon find that the best large-language models can do is vomit out well-structured but factually unreliable text? That they’re fun toys to play with but can’t be trusted with anything that actually needs to be done well?
I wouldn’t be surprised if, after a few hype cycles, we all decide that AI just sort of sucks, and we put it on the shelf with a million other ideas that were supposed to Change Everything.
A mixed bag
Remember when social media seemed like a good idea?
Remember when Facebook and Twitter were going to connect the world, create a global village, allow us to express ourselves, and generally produce a brotherhood of man?
Well, it didn’t quite turn out that way, did it? Though social networks had the potential to be useful and liberating, they’re better known these days for being toxic swamps of misinformation, places where bad actors manipulate us, and drivers of mental illness for young people. They harvest our data and strive to addict us. Not exactly the beautiful global village we were promised.
AI has the potential to be a transformative technology for the human race. But we have to remember that it will still be guided by humans. Perhaps AI will become a widespread and useful tool. But if it does, it will also be a tool for people who want to use it selfishly.
What will authoritarian governments and other political bad actors do with the ability to churn out limitless amounts of deepfake misinformation? What will governments and corporations do with tools that can mine our data so precisely that they can predict and maybe even guide our every action? What if, rather than freeing us, AI finds ways to entrap, manipulate, and addict us?
And what about the scenarios that we can’t even really envision yet, but will seem painfully obvious once they happen?
Remember, AI is being created by corporations, and the job of those corporations is to make as much money for their investors as possible.
Social media companies had a choice: make more money by addicting and manipulating us or make less money by turning social media into a force for good in the world. We know what they chose.
Is there any reason that AI companies won’t make the same choice?
In the end, I think the most likely scenario is this: AI will become somewhat useful, but won’t catapult us into some sci-fi future. It will have some serious drawbacks, most of which will only become apparent after we’ve become quite dependent on it. Bad actors will find ways to use AI to advance their interests. It will make a small number of already-wealthy people very rich, but most people won’t feel like AI has made their lives all that much better.
Hardly any big change in history has matched either the apocalyptic visions of its opponents or the utopian predictions of its proponents. The world is a messy place, and new technology often makes it messier, in unintended and unexpected ways.
The same is probably true of AI.