
(or, maybe we should all just go back to pencils)
I’m settling back into things after my epic cross-country journey with my brother, and that means trying to catch up on all the news. Just before I left Las Vegas, word of the Anthropic settlement over their acquisition of pirated books dropped, a deal that was subsequently reported as $3K per book, or an estimated total of $1.5B that Anthropic would pay to authors. The judge later put that settlement on hold, so things are still in limbo. That doesn’t keep me from thinking that, if this goes through, it is a pretty big win for Anthropic.
Sure, $1.5B isn’t fun to lose, but for a company with a valuation of roughly $190B, and aspirations in the trillions, it’s more of an annoyance than a roadblock. A year or two back, OpenAI’s Sam Altman went on record saying it didn’t matter how much money the development of AI cost, it would be worth it: “Whether we burn $500 million a year or $5 billion—or $50 billion a year—I don’t care, I genuinely don’t.” Set that against the fact that a podcast I listened to this morning noted that Anthropic’s monthly spend right now is $5B.
In other words, if this settlement lands as reported, it costs Anthropic roughly nine days of spending at that rate, not even two weeks of burn.
More important to me as a writer (and maybe to all of us as readers?) is that we need to remind ourselves that this settlement is not about Anthropic using books to train their LLMs. Unless it is overturned on appeal, that practice has already been ruled fair use. This settlement is simply about the fact that Anthropic (and other AI modelers) grabbed books from pirate sites. If that fair use ruling stands, AI companies are completely within their rights to buy a book and scan it, paying the author only whatever their cut of that single sale is. Say $2-$4. That’s a lot less than $3K. They can also acquire (buy, borrow, check out, or whatever) a paperback or DRM-free ebook and scan that. Or scrape the web for free books that are out there, or free stories we’ve posted on our sites, or perhaps even those “Look Inside” previews that Amazon and other retailers use.
I got to thinking about this again last night because I read Kris Rusch’s latest Patreon post, which was about how different Big, Big Business is from us tiny tadpoles running our own little businesses, but how we can still learn a lot from watching them operate. (As an aside: if you’re interested in the publishing business from the perspective of someone who moved through traditional, dependent publishing and into independent publishing, and you don’t follow Kris, you probably should.)
If, upon hearing of the $1.5B settlement, your first thought was to cheer because the writers really Stuck It To the Man, you should probably recalibrate. Yes, that number sounds good if you think of it in terms of a single business deal. I mean, if someone wants to pay me $3K (and royalties) to reprint Starflight, assuming all the other rights are properly delineated in the contract, I expect I’d make that deal. But that is not what’s happening here. This is not a licensing case. This is a copyright infringement case. Which means that $3K is a small fraction of the penalty a person or company could have been ordered to pay if the case had run to the end and statutory damages had been applied. Statutory damages for willful infringement can run as high as $150,000 per work, and the pirated library involved reportedly ran to millions of books. Those numbers could have been in the multi-trillions, which I think we can safely assume would have bankrupted Anthropic.
So, yes, assuming the settlement agreement stands, this is a big win for Anthropic simply because, for a payment that amounts to a small sliver of their valuation, they get to stay in business and make even more cash going forward. One assumes this was on the strategy board as an eventuality back in the days when the company’s leadership decided to grab pirated work in the first place.
As Kris notes, Big, Big Business is just different, and Anthropic is Very, Very, Big, Big Business.
Anyway…Here We Are
I’ve said it before, and I’ll say it again: I wish we were not in a world where AI is infringing on our space. Things were simpler before it arrived. I liked it. Alas, I do not get to pick the world I live in. Artificial Intelligence (or, as I’m beginning to think of it: Augmented Intelligence) is here, and only the staunchest of traditionalists can still pretend it’s not going to change what it means to be creative.
So, what should we tadpoles in independent publishing do?
Or the tadpoles in dependent publishing, for that matter.
Should we just give up? Bow to the AI overlords of the novelistic production line, and go eat bonbons all day while bitching about the audacity of the Techbros?
No. I think not.
While I expect a market for true, fully AI-created work will exist, I think that market will be small. I say that because I think the large majority of readers want to read authors, not computers. When we find an author we like, we read a bunch of their work, right? Maybe I’ll be wrong, but I can’t see that changing. So, whatever tools a writer uses to put themselves on the page, I think it’s going to be imperative that we remain human storytellers. Important that we have a vision of our story, and that we keep to that vision. The story is the thing. Really, it is. The story (and our voice and vision) is all we have. The story—our story—is what our readers want.
That is good news. To me, anyway.
Human readers want stories that are about human vision.
This means I believe that spammers who simply push a button and try to sell whatever comes out will eventually fail miserably in the marketplace.
Writers who put themselves into the work will succeed. And by that I include people who use AI in their process. Because, to be clear, there already exist markets of readers who will accept a human using AI as long as that use results in a work that hits the writer’s vision (and, stealing my own thunder a bit, as long as those writers are transparent about their use). But I don’t think there will ever be a day when books without a human in the loop are viable on a mass scale.
Readers buy authors, remember? Mostly, anyway.
This means authors succeed by attracting readers. Marketing and packaging aside, I think the most attractive authors are those who tell the most compelling, most human stories. If an AI can help an author hit their vision, I suppose that’s good on them. There are still copyright and licensing issues to deal with, but that’s a business thing…and, pulling oneself completely out of the artsy world of creatives, there can be perfectly fine business reasons to use AI.
The key there is that authors really do need to be aware of the pitfalls they are exposing themselves to and make their decisions accordingly. Copyright in a generative AI world is going to get even more complicated than it already is.
In that light, however, I think it’s important that we keep showing our readers we are human, and I think it’s important that our stories retain the vision that only we can bring to them. Our books comprise our brand. And since, as writers, we are our brand (and our brand is us), it’s important to keep ourselves in the story.
Given this, the thing that is going to be required—at least for a while as the world continues its transition into AI augmentation—is that the author (or musician, or graphic artist, or…) be transparent. As I noted earlier, I am aware of several writers today who are actively using AI and who have readers happily reading their work. Joanna Penn (J. F. Penn) is probably the most obvious example. These authors are open about what they are doing, so there is no reasonable case where their readers should be caught off guard by any “revelation” that what they’ve read includes elements of AI assistance.
This makes sense to me, of course, because as I’ve noted before, I am of the mind that the reader is the final arbiter.
As they always have been.
Restating my usage…
After all this, let me state for the record: I do not use AI in making my fiction, though two years back I did make “Five Seven Five,” a project that used my own SF haiku as prompts for one of the early AI art generators. It was a fascinating exercise. You can find it here if you are curious enough.
I like writing my work all by myself, thank you very much.
If, for whatever reason, I do ever decide to use AI, I promise I will disclose it up front.
No tomfoolery.
Or, um, no Ronfoolery? Whatever comes with the use of this technology in creative pursuits, I think transparency is the key to the future. If you are one of my readers, I shall not mess with you.
Next week marks the launch of my new collection: 1101 Digital Stories in an Analog World! It goes live for pledges on Tuesday, 9/23. This is the second—and for now, final—gathering of my stories from Analog Science Fiction and Fact, pairing with its sibling 1100 Digital Stories in an Analog World to complete the set. That’s pretty cool if you ask me. If you’d like to follow the project (and get notified when it launches), you can check it out here! (*)

(*) For irony’s sake, and to show you what I mean, I put my initial little blurb for this Kickstarter into ChatGPT and asked it to make it more concise. What I got back was pretty okay, but not quite what I wanted, so I took half its suggestions and then mushed it around again myself to make it say what I wanted it to say. It was fine. I’m not sure the result is any better or worse than what I had to begin with, and it probably took me longer than just doing it myself would have. Whatever, I suppose. Maybe I should add: “I’m Ron Collins, and I approve this message”?
Regardless, I can promise that nothing in these books has been touched by AI, including the artwork or layout. Or the Kickstarter stuff…or…well, anything else!
