(or, 2026 — The Year of the Reader)

Note: I released this on Patreon almost on time…then messed up and forgot to push it here. Sorry for the delay, but it stands as proof that it’s just me in here. [grin] Regardless, given how fast the world is moving, maybe it’s even more interesting to see how this stands up three months later.
The new year is still very early as I write this, and outside the Las Vegas skyline is cold, gloomy, and cloaked in clouds. I hope that’s not a portent.
Anyway, as I sit here typing (or, since I’ve set my desk to its fuller height, I should say while I stand here typing), I’m thinking about making a prediction. I should note that, while I enjoy watching the world around me do their annual projections, I am not usually a prediction kind of guy myself. That said, I do have a projection for the one huge, bloated monstrosity that’s been crashing like a deranged bull through the china shop that is the writers’ community. That monstrosity is, of course, Artificial Intelligence.
The animosity the mere existence of AI has brought to the writing community has been building for the past three years, and this year is almost certainly going to pass the point of no return. All right. I just lied a little bit. I don’t think that LLM-driven involvement in the publishing world is almost certainly going to cross the point of no return this year. I think that line has already been crossed.
That crossing came the day the Anthropic opinion said it was fair use for tech companies who obtain books legally to use them to train their Large Language Models. Sure, Anthropic is going to have to drop $1.5B in breakroom couch chump change to make up for having taken some of their training fodder from pirating sites (places that distribute unlicensed copies), but in all seriousness, companies like Anthropic don’t care about that. They do, however, care about the first part—they care that, otherwise, their actions are considered fair use.
This year, though, I think we’re going to start to see what happens next.
For writers, that means this is the year readers will come to the forefront.
More cases are running, of course, and appeals are working their way through the system.
But without a reversal of the core of the Anthropic decision, AI companies will be free to use anything they buy.
Not license, buy.
That’s an important distinction. If Anthropic wants to use my work, they will not need to contact me and work out a deal. Instead, they can simply buy a copy for five bucks (with me getting maybe $3), and off they go.
I am not saying that is right. Personally, I think it kind of sucks.
But it is what it is. I can complain and argue, but, I mean, I didn’t fall off the juneberry bush yesterday. I’m not holding my breath until the courts reverse that ruling.
Maybe I’ll be wrong.
Maybe the courts will reverse the precedent.
If so, great.
To be open, though, I think it’s fine either way, but more on that as I go along here.
Assuming courts do not reverse the decision, though, I think 2026 is the year the social construct around the artistic community will “finish” (ha!) the process of ripping itself into shreds, and then somehow move on. Regardless of how this works out, I expect 2026 will be the year where we begin to see if my read of the world regarding readers’ opinions is on point or not.
I expect readers will, on the whole, reject true slop.
I expect readers will, on the whole, accept works that use AI, but only when enough human shaping exists that the work is “good” (whatever that means to them) and that a true fandom can thrive around it. This is already proving to be true in small doses, so I see no reason to think otherwise.
I expect writers to find that it doesn’t matter to them—that their readers will follow them because of who they are and what they make, not because of what tool they use to make it.
Will readers’ reaction to AI-based writing be what I expect?
If so, will that be enough to help writers come together or not?
Of course, that last bit could take a decade or two, I suppose, because the vitriol around the idea of AI in the arts—which has already been poisonous and virulent—feels like it’s growing stronger every day. It’s been caustic from the moment ChatGPT came onto the scene, of course—which I think is interesting because the writing world reacted with what can be categorized as a disdainful thumbing of the nose back in 2016, which is the first time I could find where I talked about generative text on my blog (though I could swear I did something a couple years earlier when I saw the first proto-LLMs struggling along).
Time moves only forward, however, and now it’s the tech bros who are sitting with the stronger hand.
By “stronger hand” I don’t mean that AI makes better work or work of higher quality than professional writers. It most definitely does not, especially not by itself.
By “stronger hand” I mean they are now more politically and economically powerful.
If the tech companies want to make something, they are going to make it, and the tech companies have decided that we’re getting AI no matter what. The wave that started over a decade ago is here. So it’s not surprising that we’re starting to see the world adjust to this fact. When I first wrote that last sentence, I used the word “accept,” but that’s clearly not true.
The entire world—not just us creators—is adjusting because it really has no other choice.
Pretty much every company is at least fiddling with it. Most every group that employs white-collar workers is trying to figure out what, if anything, it’s going to mean for them. The impact could wind up being scary—something that tech companies are not attempting to hide.
Sure, parts of the world are fighting against the tide.
In addition to the hardcore social vitriol being thrown against AI progression from certain elements of the creative community, there is also a whole flotilla of lawsuits still flying. But those lawsuits don’t seem to be doing anything to slow the process down.
And still the clock ticks.
Hence, adjustments are being tried on for size.
Whether any individual likes those adjustments or not depends on a million different variables, but the final reaction is not hard to separate into a couple buckets: those who embrace, those who still ponder, and those who despise.
In my little world of science fiction, for example, I’ve seen SFWA have a public kerfuffle or two over the recent past. In one case, they put out guidance on their Nebula awards and then had a prominent author get caught up in a tiny maelstrom because that author (properly and openly) declared a book ineligible because two years ago they had toyed with AI for a passage in the book. Another author wrote a letter under SFWA letterhead that (simplifying dramatically) presented the idea that secondary uses of AI are mostly outside of a dependently published author’s control, and proposed that those authors should not be penalized for having AI foisted upon them, then went further and proposed that writers should not be penalized for using AI for any work not directly related to creativity (which is an interesting slope).
Then I came across Chuck Wendig’s reply, which is well done and entertaining in its own way. Wendig is a fun follow. He’s very much himself, and he’s a very skilled writer; that’s something you can say about everything Chuck Wendig does. He is not shy about putting forth his opinions, and he puts them forward in such spectacularly precise ways that you can’t possibly walk away feeling ambiguous about where he stands.
Chuck Wendig makes no bones about despising AI in pretty much all its forms.
And to be clear, I mostly agree with his frame of mind.
Or at least I understand it and understand why it exists.
I really, really, really wish that we were not in a world where AI was in the conversation. If I let myself dig in on that side, I can get pissed off, too. But my personality is tempered by years in a different environment. I come out of the political “reality” that is corporate America. Like many of us, I’d guess, I have a sense of where the world is going, but I can and do often also hold many emotion-laden ideas in my head at the same time without having it explode.
And in this case, I disagree with one very important idea: the underpinning that fuels Wendig’s vitriolic reaction.
Wendig’s anger is fueled by the idea that a world shaped by AI is not inevitable.
In fact, it’s a point he repeats several times for emphasis.
Counter to his argument, though, my read on the world today is that the AI wave is not only inevitable, but that it has already crashed down upon us.
If I agree with him, it’s because the world I see has now passed through the “inevitable” stage. AI is no longer inevitable because it is now fully here.
The right question, though, is “Here For What?”
Which leads to the more personal question: “What am I going to do?”
The bottom line is that, even though AI has arrived as a tool that can touch any part of our lives, writers don’t need to do anything at all. We can simply keep on keeping on.
Sure, other writers may end up writing faster or doing their work cheaper, but that’s always been the case. And that doesn’t matter. My success depends on the relationship between me and my readers, and my ability to continue to find new readers.
If you, too, are a writer, your success depends on the relationship between you and your readers, and your ability to find new readers.
If you are a reader, I’d guess your connection with a book results in a connection between you and the writer.
If you, the reader, like what you get, that connection will be strong and the writer can grow. If not, the writer will fade. This has always been the case. Though Wendig buries that idea underneath a thick layer of loathing, he believes it, too.
My favorite part of his response lies here:
“I think people actually hate it. I think they naturally resist it because we can smell the existential threat coming off it like the stench of the aforementioned bad shrimp.
“I think we intuitively can detect how it was made by rich fucks who want to be richer fucks, and how we’re just chum in the bucket for their digital sharks.
“And I think it sucks.”
He is more abrasive than I am, and as I noted, I think there are some readers who will accept their writers using AI to help them. But his viewpoint is at the heart of what I mean when I say it’s the reader who will make the call. So, really, I wish he and the rest of the crew that are so heated in their push against AI would lead with this. I think it’s a stronger position than the argument of “don’t do that because you suck and then everyone is out of a job.”
I mean, when you get down to the brass capitalistic facts of why corporations exist, “you suck, and then everyone is out of a job” is the fundamental motto of every company. Corporations gonna corporation.
And traditional publishing companies are gonna traditional publish.
We are at the point where the only way a writer can possibly claim 100% human generation is if they go full Indie, and even then, it’s hard to be certain. I mean, how sure are you that the stock art you bought for your cover isn’t touched by AI? Really? Are you 100% certain?
Gnashing one’s teeth about that is like getting angry that a cheetah kills an antelope.
“It’s Up to the Readers” Has a Problem, Though.
Obviously, I, too, think the collective (that is, Wendig’s people above) will reject writing that is heavily infused with AI. This is why I don’t feel the need to get overly spun up about the existence of AI. I am of the opinion that readers will almost certainly select things they view as being created by humans over things created by computers.
So, yes, I agree with Chuck Wendig.
But to say that is to leave space for the idea that there will exist a pool of readers, a market per se, for whom humans working with AI can make something “good.” And unless you are very, very comfortable with your relationship with your readers, leaving that door open means accepting the possibility of that competition.
If that market is large, then one can then try to argue that we’re screwed.
If both Chuck Wendig and I are wrong, and it is true that the vast majority of readers will accept and gravitate to work written with AI assistance, then the arrival of the AI wave is potentially a death knell for those who don’t use it. In that sense, just like for all the corporate white-collar workers who have to be fearing for their jobs, this becomes an existential question. At that point, we writers would have to look in the mirror and ask ourselves if we want to be involved with creating stories so fervently that we, too, are willing to use AI to assist us.
As I’ve said in my own manifesto about AI transparency, at present, I can’t see that happening for me. I like doing my fiction myself, thank you. I am beginning to think about how to use AI to make my business stuff better. I’m still not sure what that will look like in the end, but I’m walking into that with eyes, ears, and heart open.
I’m queasy about it fiddling with my words, though.
A Related Aside – for the first time, a few days ago I decided to actually try my hand at using AI to do something I’ll call “fiction-adjacent.” I used it to form an idea, and then draft a piece for a strange computer baseball game environment that I play in. Sometime maybe I’ll write a detailed entry about how that process went (feel free to let me know if you’d like to read about the experience, or expressly NOT like to read about it), but for the moment suffice it to say: (1) once I baked in the time it took me to rewrite and reshape the draft into something I liked, it didn’t save me any time at all, and maybe even took longer; (2) at the end of the day I left the process feeling like the piece was “mine” (regardless of legal/copyright issues, I spent the time crafting it, and it certainly carried a lot of me in it); (3) several people who read it liked it [and several others revolted at my transparency note specifying it had AI in it]; and (4) for me it wasn’t as satisfying or interesting to work on. It felt more like assembling a jigsaw puzzle than writing a story. Like I was putting myself into AI rather than using AI to bring myself out better. It was fun, but not fulfilling in any way that made me hyped up to go do it again.
So, no. Maybe future me will change his mind, but even if readers will eventually accept AI in their fiction, I still don’t see using it to create stories in my future.
Anyway, today, as I’m standing here at my desk and looking out over an overcast Las Vegas, I’m expressly trying hard not to make any predictions about the future. Except that I do think 2026 is the year the throttle opens full bore, and we find out just how open readers are to the idea of getting AI swirled into their fiction.
So, hang on tight.
The ride might get a little (more) bumpy.
