
A friend and I were talking, and they asked if I felt like I could still make projections out into the future. I’m not sure of the full reason they asked, but the question seemed to combine a sense of science-fictional wonder with a more pragmatic, everyday concern that things are moving so fast it’s hard to see straight. I said yes, I felt I could still make projections, and went on to list several areas I spend a lot of my time on. Because that’s what I do with my “free time.” I read, listen, and watch material that lets me absorb multiple perspectives on what’s going on, then try to think about the future.
The list specifically included Artificial Intelligence.
When I said AI, they asked what I thought was going to happen.
I immediately, and too quickly, said I didn’t think artificial intelligence was going to make a major difference in day-to-day life over the next five years.
That’s malarkey, of course. I knew it the minute I said it. The introduction of AI is already influencing our daily lives, and whether we like it or not, that influence is only going to get stronger. So, of course, AI is going to impact us, even in just a year. However, that effect may not be what we’re expecting. Especially if you’re focused on some specific capability. Say, humanoid robots, or fully automated cars, or an agent that, with a single push of a button, can complete your taxes with perfect accuracy.
All of those will push forward, and they will be better than they are today. But I don’t see a short-term future where they all work immaculately. That’s the wrong bar, though. AI does not have to be perfect to be better than what we have. AI merely needs to make fewer errors than people. Which is why AI is already making a direct impact on our lives. AI is now more capable than the average human being in a LOT of areas. It makes mistakes, but (in general) far fewer mistakes than a person of moderately reasonable competence. So, of course, these things are happening.
But that’s not really what I think we mean when we talk about projecting out into the future.
Golden-age SF writer Fred Pohl said that a science fiction writer’s job was not to predict the car, but to predict the traffic jam. This is an idea I particularly like. It doesn’t take any grand intelligence to predict that most workers will someday (very soon) be AI-driven robots. Or that most cars and trucks will drive themselves. Or that solar power (and maybe forms of nuclear) will be the prevalent power source. Or that medical advances are going to be astounding. There are, for example, already people walking the planet who are likely to live 150 years or more. This is not particularly fanciful thinking anymore. Even I have projected a future (in my stand-alone novel Wakers) where money does not exist and people are allowed (or required, depending on your viewpoint) to figure out their purposes all on their own.
As a creator, it also does not take much intelligence anymore to predict that using AI for creative endeavors will be completely acceptable in the eyes of the law. The cases working their way through the courts still have appeals ahead of them, and there’s a big issue around the fact that various companies used pirated material, but unless those appeals succeed, the answer on fair use has been provided: tech companies (again, pending appeal) can use whatever they legally acquire to train their LLMs, simply because the output is so dramatically transformative.
Creators still complain. And they are gung ho on the pirating case, for good cause. But that is the current ruling.
The question in projecting the future then becomes: what is the next traffic jam?
In the case of medical advances, maybe the question is: who will get them, and when? In the case of humanoid robots, maybe the question is: how will we treat them? Or what rights will they have? Are they sentient? What the heck even is sentience?
Do you tip a self-driving car?
If an AI comes to power, what will its currency be? Will the world run on compute power rather than dollars or Euros or whatever?
For me, the more interesting projections are not about the tech. They are about us as people. Who are we, and how do we decide to live our lives when suddenly the need to pursue monetary currency is removed? Mo Gawdat, one of many futurists I pay attention to (and a guy with past ties to Google’s AI efforts), has said that we’re three years away from living in a world where literally everything is free (*), but that the process of getting there is essentially terrifying.
(*) Aside: If that statement made you laugh out loud, you really owe it to yourself to take an afternoon and listen to him. I don’t think his projection is going to happen, at least not in three years, but it’s clearly possible if we could just have our politicians focus on people rather than resources.
And that’s the thing.
The use of AI is going to create many traffic jams. One of them is the field of international politics, which is based on fear of the other and the need to control scarce and valuable resources. Remove the need to control the resources, and you’re left with fear of the other.
Where does that leave us?
I propose that this is the source of all the major traffic jams of our time: fear of the other.
Or maybe just the fear of being disrespected by the other. Fear of losing what we think is our due, and our place, which in this terrifying moment of transition gets tangled up with the fears that come from losing our livelihood. Perhaps there will come a day when we don’t need money, but neither you nor I live in that future world just yet, and today money is the magic fluid that makes everything else possible. We creatives say that an AI cannot replace us, to use the example nearest to my heart, but our actions betray our feelings. Or at least our anger, and the actions we take when others use AI, say that we are feeling disrespected. And, between the cracks, our behaviors say that we fear that a world in which we can be replaced (a world where an AI can do what we do) is real.
Or perhaps the alternative fear is not that a push-button AI will be able to replace us, but that a person we see as an “inferior writer” will be able to use AI to create work that is more popular (better?) than ours. To my eyes, this psychology clearly runs deep in our community. We draw hard lines that have very little to do with true copyright questions and everything to do with our thoughts on who might be a real artist, and who most definitely is not. But we have these conversations without stating the obvious (and terrifying) rule that, as writers, it will always be the reader who decides what they like and what they do not.
This fact is, to me, at the root of the raw anger you hear in creative voices as they vilify the existence of AI in a creative field. If we truly trusted the reader to ignore the dross and love true “quality,” whatever that is, we would simply shrug off the existence of AI. But we do not shrug it off at all. In that fact, we show that somewhere deep inside, we do not trust the reader.
Our heads say we do, but in our hearts lives a fearful contrarian that is, perhaps, the source of our impostor syndrome.
So, there we have it.
My projection, or at least the thinking behind all my projections.
The traffic jams that will come with AI will be fully focused on the idea that, suddenly, the 8+ billion people who live on planet Earth will not need to fight over resources. So, being human, we will find ways to fight over other things; our collective histories say as much. That is what we are. We like hierarchies. We don’t like being low on the ranking table, but we will accept it as long as there’s someone else lower.
So, what exactly is going to happen in the next five years?
I have no idea, though my thoughts will probably fuel several stories going forward.
But all I’ll say for now is that the way forward will be tumultuous, and it’s going to be intensely painful for some people, a group we can maybe predict, but probably cannot.
All I’ll say for certain is that the impact AI is already having on our daily lives will continue to accelerate into the foreseeable future, and the skills that will be most needed are flexibility, nimbleness, and (I hope) compassion for others.
I keep a Patreon page where I talk about writing and being a writer (among other things), and share occasional work in progress. If you’d like to support me–or just this blog–you can do so there.
