How Authors Are Thinking About AI

lizmonster

Your Friendly Neighborhood Spider-Mom
Moderator
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
20,946
Reaction score
49,187
Location
Massachusetts
Website
elizabethbonesteel.com
This survey was done through BookBub, and sampled ~1200 authors. (Pretty sure I was one of them.)

A few highlights:

Among survey respondents, about 45% are currently using generative AI to assist with their work while 48% are not and do not plan to in the future. Another 7% of respondents are not currently using AI but might use it in the future.

Another common use for AI is assisting with marketing, and many authors appreciate being able to outsource tasks they don’t enjoy doing themselves.

"It’s very helpful for marketing copy, especially if there is a word count. For example, I have a long blurb I wrote myself but I might use AI to generate 150, 250 or 300–word blurbs as requested by various platforms."

In the interests of RYFW, I shall only note that I hope Marketing Copy Person is double-checking those word counts, because GenAI can't do math.
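(For anyone who does want to double-check those counts without trusting the model, a couple of lines of Python will do it — the blurb here is made up for illustration:)

```python
# Sanity-check the length of an AI-generated blurb before submitting it.
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

blurb = "An epic tale of love and betrayal among the stars."
print(word_count(blurb))  # -> 10
```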
 

Friendly Frog

Snarkenfaugister
Kind Benefactor
Super Member
Registered
Joined
Sep 23, 2011
Messages
6,395
Reaction score
11,463
Location
Belgium
On the whole it's a little misleading (if unintentionally so). It doesn't split art and writing in those 45%. And more seem to use it for marketing than for plagiarising wholesale stories via AI.

I wonder whether the 70% of self-publishers skew the results, but then a painful number of publishers have gone to the AI dark side too, so maybe it's not significant at all.

Interesting, though, that some 75% don't want to share with their readers that they use AI. So I daresay some realise the optics on AI are not entirely positive.
 

lizmonster

Your Friendly Neighborhood Spider-Mom
Moderator
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
20,946
Reaction score
49,187
Location
Massachusetts
Website
elizabethbonesteel.com
On the whole it's a little misleading (if unintentionally so). It doesn't split art and writing in those 45%. And more seem to use it for marketing than for plagiarising wholesale stories via AI.

Yeah, I noticed that. I don't think genAI images are any ethically better, of course.

I wonder whether the 70% of self-publishers skew the results, but then a painful number of publishers have gone to the AI dark side too, so maybe it's not significant at all.

It would be interesting to see that data breakdown. Because self-publishers bear the cost of things themselves, it wouldn't surprise me if more of them are vulnerable to the allure of genAI.

Interesting, though, that some 75% don't want to share with their readers that they use AI. So I daresay some realise the optics on AI are not entirely positive.

Yeah. And places like Amazon want us to believe they're dealing with the issue by having people check a box if they've used AI. It's about as effective as those "check this box if you are over 18" dialogs on some web sites.
 

Friendly Frog

Snarkenfaugister
Kind Benefactor
Super Member
Registered
Joined
Sep 23, 2011
Messages
6,395
Reaction score
11,463
Location
Belgium
Yeah, I noticed that. I don't think genAI images are any ethically better, of course.
Absolutely. 100% agreed.

But with a survey of writers, one automatically thinks of writing, and so it makes the generative use of AI look more widespread and accepted than it really is when you look at the breakdown.

'Research' comes up most. (I don't know whether to be relieved by that or not...)
 

lizmonster

Your Friendly Neighborhood Spider-Mom
Moderator
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
20,946
Reaction score
49,187
Location
Massachusetts
Website
elizabethbonesteel.com
'Research' comes up most. (I don't know whether to be relieved by that or not...)

Honestly, there's nothing that relieves me about any of this. I was speaking with someone the other day who has a layperson's knowledge of genAI, and she said "they're making too much money from it." I had to correct her. It's not profitable for them yet, but they've bet too much on it, and they keep pushing it on us hoping it'll become something we can't do without. Part of how they're trying to get the public to buy into it is by lying about what it can do.

It's not a search engine. Anybody doing research with genAI is at best wasting their time.
 

RichardGarfinkle

Second Edition and Second Laughter
Super Moderator
Moderator
Kind Benefactor
Super Member
Registered
Joined
Jan 2, 2012
Messages
12,565
Reaction score
6,995
Location
Swimming in the Shallows
Website
www.richardgarfinkle.com
Yeah.

Bit of a pyramid scheme, of sorts, when you think of it. They will make money when they convince enough marks to buy into their promises, and in the end everybody else loses.
Same as their last scam: NFTs. The techbros don’t understand the tech, but they only talk to each other. They speak the language of grandiose promises and sullen threats. Like most con artists they are first and foremost marks.
 

Jazz Club

It's not wrong, it's dialect
Super Member
Registered
Joined
Dec 18, 2021
Messages
4,389
Reaction score
7,215
Location
Northern Ireland
Does ProWritingAid count? Lots of self-publishers use it as an editor, which might help explain the high number self-reporting using AI.
 

Unimportant

doggone
Self-Ban
Super Member
Registered
Joined
May 8, 2005
Messages
29,129
Reaction score
40,710
Does ProWritingAid count? Lots of self-publishers use it as an editor, which might help explain the high number self-reporting using AI.
Based on a very animated discussion (yes, that's a euphemism for screaming match) I had with a friend today, people think everything is AI. The machines that run blood tests in diagnostic labs. The GPS systems in cars that beep at you when you go two miles over the speed limit and tell you to turn left and cross a nonexistent bridge over a river. The recommendations on Amazon for "If you liked this book, you'll also like..."
 

jappolack

Super Member
Registered
Joined
Apr 30, 2025
Messages
86
Reaction score
88
I am a computer scientist, more than familiar with how this is all programmed. I can tell you that from the programming aspect, it looks like:

If you say A
    print out B, C, and D
else if you say B
    find more information about it
else
    do something else

Meaning that it has no thought of its own. It looks for patterns, and based on the history of other patterns, it will give you the most likely answer. It is still very new and will never know whether the answers are right or wrong. Every time a person clicks "like," the answer goes into one bin ("I am right"), reinforcing that it is correct.

AI can be fooled. If enough people have marked the gathered data as something they like, that answer will keep propagating. It is the programmers behind the scenes who would have to go into the code and say:

If (this thing people keep trying to push comes up)
    mark as false

Self-learning means that if we let it go wild with no intervention from a programmer, the results may well be bullshit. It is up to the human to correct it; this is where the problems begin. Someone like Elon Musk is a shit person to work for, and he would prefer machines. He is actually not a great or even a good programmer; he created one thing that many people could have built, but he had the idea. He would rather have a computer do his work, even if it is wrong. That is what he is doing with DOGE. The kids he has in the federal buildings run AI all day to see what it thinks it can cut. I know; I live around there and have friends who tell me what those kids do in the office all day.
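(For the curious, that "give the most likely answer based on patterns" idea can be sketched as a toy next-word predictor. This is a deliberately simplified illustration with a made-up corpus — nothing like the scale or architecture of a real LLM:)

```python
from collections import Counter, defaultdict

# Toy "training data" — a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish the cat sat down".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("cat"))  # -> "sat" (seen twice after "cat", vs. "ate" once)
```

Note it has no idea whether "sat" is *true* — it's just the most frequent continuation in the data it was fed, which is the point being made above.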
 

lizmonster

Your Friendly Neighborhood Spider-Mom
Moderator
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
20,946
Reaction score
49,187
Location
Massachusetts
Website
elizabethbonesteel.com
Agreed with the rest, but:

AI can be fooled.

Just gonna say, because I always do, that this kind of phrasing gives AI agency that it doesn't have. It can't be "fooled" because it doesn't make decisions based on meaning. It's the same reason they should stop using the term "hallucinations" and start saying "works as designed."

The uses wouldn't be so horrific if the people who made it weren't lying about what it does. Accuracy doesn't play into it at all, and any application requiring facts is de facto one genAI shouldn't be given.

It's a Word Cloud of complete sentences.
 

RichardGarfinkle

Second Edition and Second Laughter
Super Moderator
Moderator
Kind Benefactor
Super Member
Registered
Joined
Jan 2, 2012
Messages
12,565
Reaction score
6,995
Location
Swimming in the Shallows
Website
www.richardgarfinkle.com
I am a computer scientist, more than familiar with how this is all programmed. I can tell you that from the programming aspect, it looks like:

If you say A
    print out B, C, and D
else if you say B
    find more information about it
else
    do something else

Meaning that it has no thought of its own. It looks for patterns, and based on the history of other patterns, it will give you the most likely answer. It is still very new and will never know whether the answers are right or wrong. Every time a person clicks "like," the answer goes into one bin ("I am right"), reinforcing that it is correct.

AI can be fooled. If enough people have marked the gathered data as something they like, that answer will keep propagating. It is the programmers behind the scenes who would have to go into the code and say:

If (this thing people keep trying to push comes up)
    mark as false

Self-learning means that if we let it go wild with no intervention from a programmer, the results may well be bullshit. It is up to the human to correct it; this is where the problems begin. Someone like Elon Musk is a shit person to work for, and he would prefer machines. He is actually not a great or even a good programmer; he created one thing that many people could have built, but he had the idea. He would rather have a computer do his work, even if it is wrong. That is what he is doing with DOGE. The kids he has in the federal buildings run AI all day to see what it thinks it can cut. I know; I live around there and have friends who tell me what those kids do in the office all day.
One correction: it does not give the most likely answer, it gives what amounts to the most popular answer. So if it's doing writing, it will conform to simple tropes. If it's asked a question that requires deep knowledge even to understand, it will likely give you the currently popular misunderstanding.

Some of the writing I do is math and science popularization. A great deal of that involves clearing away popular misconceptions. My book sales can't come anywhere near the endless repetition of errors made about these subjects.

This gets even worse when the questions asked are about cultural prejudices. I'm Jewish, I know that the most likely answer to what a Pharisee is will be a Christian pop culture answer not an answer from people who actually know who they were and are.

One more thing. If you look at visual art, the popular answer will always be a current fashion. Fashions disappear. The fashionable is almost never the enduring. And fashions (as our own Alessandra Kelley deduced) usually have a ten-year life cycle, and when one looks back on them they mostly look ridiculous. So an AI image from a recent time period will be a weird mash-up of fashionable art, and an AI image from a long-past time period will be a weird mash-up of what happens to have survived.

None of this will look anything like what a competent human can produce.
 

buz

can't stop hemorrhaging emojis
Kind Benefactor
Super Member
Registered
Joined
Nov 11, 2011
Messages
5,807
Reaction score
3,611
Does ProWritingAid count? Lots of self-publishers use it as an editor, which might help explain the high number self-reporting using AI.
It looks like some features of ProWritingAid use non-generative AI and some use generative AI: Link


Based on a very animated discussion (yes, that's a euphemism for screaming match) I had with a friend today, people think everything is AI. The machines that run blood tests in diagnostic labs. The GPS systems in cars that beep at you when you go two miles over the speed limit and tell you to turn left and cross a nonexistent bridge over a river. The recommendations on Amazon for "If you liked this book, you'll also like..."
Some of those things do use a form of AI… diagnostic testing machines can use convolutional neural hoopajoops to analyze images from samples and such and whatnot; ultrasound can use AI to enhance images and so on … but those are different from generative AI or LLMs, which is why conversations about AI can be so confusing. 🫠
 

Unimportant

doggone
Self-Ban
Super Member
Registered
Joined
May 8, 2005
Messages
29,129
Reaction score
40,710
Some of those things do use a form of AI… diagnostic testing machines can use convolutional neural hoopajoops to analyze images from samples and such and whatnot; ultrasound can use AI to enhance images and so on … but those are different from generative AI or LLMs, which is why conversations about AI can be so confusing. 🫠
They run tests using automated machinery that's a glorified conveyor belt. They flag results above/below the reference ranges developed and selected by the scientists who use them. They analyse images based on input selected by the scientists who use them, and the image analysis is then visually checked and verified by the scientist before the results are released. It's not a form of AI: there's no intelligence other than the scientist's brain, and there's nothing artificial about the input that created the database.
 

lizmonster

Your Friendly Neighborhood Spider-Mom
Moderator
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
20,946
Reaction score
49,187
Location
Massachusetts
Website
elizabethbonesteel.com
They run tests using automated machinery that's a glorified conveyor belt. They flag results above/below the reference ranges developed and selected by the scientists who use them. They analyse images based on input selected by the scientists who use them, and the image analysis is then visually checked and verified by the scientist before the results are released. It's not a form of AI: there's no intelligence other than the scientist's brain, and there's nothing artificial about the input that created the database.
@buz is right, though - this is a conflation of marketing terms. Machine learning has been around for a long time, and does indeed have useful applications. The term "AI" has been used there as well. These days the genAI grifters are happy enough to have their environment-destroying plagiarism machines conflated with genuinely useful tech, but you're right: they don't do the same thing by any measure.

I'm sure Microsoft would be perfectly happy to have people assume Word's eight-thousand-year-old somewhat-clunky grammar checker is the same kind of tech as Copilot, but it is not.
 

buz

can't stop hemorrhaging emojis
Kind Benefactor
Super Member
Registered
Joined
Nov 11, 2011
Messages
5,807
Reaction score
3,611
They run tests using automated machinery that's a glorified conveyor belt. They flag results above/below the reference ranges developed and selected by the scientists who use them. They analyse images based on input selected by the scientists who use them, and the image analysis is then visually checked and verified by the scientist before the results are released. It's not a form of AI: there's no intelligence other than the scientist's brain, and there's nothing artificial about the input that created the database.
Well, yes: no form of AI is actually intelligent in the ways most people think of intelligence, or independently thinking, and all of it requires a person to interpret/do something with the info it churns out… but in that sense, nothing is AI, right?

But so many things are called AI, which can encompass all sorts of machine learning I think…and when you talk about an “artificial neural network” the “artificial” refers to nodes of “fake neurons” …so in a sense people could say it’s “artificial” pattern recognition…

which is all another way the term “AI” is super fluffy and difficult to pin down in conversation 😄

The term AI can apply to so many things it’s hard to say what is or is not AI…

“Generative AI” is a little more concrete I think, but, though I could be wrong, I don’t think it’s common knowledge what it is and what its costs are vs. actually useful things called “AI” like the pattern recognition algorithms in (some) diagnostic machines, etcetera 🙂
 

Lundgren

Super Member
Registered
Joined
Oct 10, 2021
Messages
587
Reaction score
988
Pattern recognition software (images, voice-to-text, handwriting recognition like using a phone with a stylus) and generative AI have a lot in common. Both use artificial neural networks (though they can use different technologies before and after the network). If the phrase "training the software" is used, an ANN is most likely involved, though the phrase can be misused for other things as well. There are a lot of other AI technologies besides ANNs, but those are not relevant to generative AI.

So, yes, there is a lot of confusion about what people mean when they say AI.
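(As an aside, a single "fake neuron" of the kind an ANN stacks by the millions can be sketched in a few lines. The weights here are arbitrary hand-picked numbers for illustration; in a real network they'd be set by training:)

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, squashed to (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# 2*1.0 + (-1)*0.0 - 1.0 = 1.0, and sigmoid(1.0) ≈ 0.731
print(round(neuron([1.0, 0.0], [2.0, -1.0], -1.0), 3))  # -> 0.731
```

"Training" just means nudging those weight numbers until the outputs look right — there's no little mind in there, whether the network recognises tumours or autocompletes sentences.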

As a ProWritingAid user, I've never used the "rephrase" function. As someone with English as a second language, I find it does catch quite a few of my grammar mistakes, but it also flags things I don't agree should be changed. There are also some nice functions, like flagging when I've started three sentences in a row with the same word (which might not be great if it was unintentional), or checking the distribution of my sentence lengths.

The one generative AI function I have tried in it, for funsies, is the AI critique, and I consider it useless beyond getting some unearned praise. It tells me I have an interesting premise and realistic dialogue, plus some other positive comments depending on which story I've tried it on. As a suggestion for improvement, I tend to get that I should expand on the characters or the setting. And it only checks the first 4,000 words put into it, while my WIPs are in genres where around 100k words is the norm... kind of what the other 96k words are for. Not very useful for me, no matter its correctness or ethicality.
 

RichardGarfinkle

Second Edition and Second Laughter
Super Moderator
Moderator
Kind Benefactor
Super Member
Registered
Joined
Jan 2, 2012
Messages
12,565
Reaction score
6,995
Location
Swimming in the Shallows
Website
www.richardgarfinkle.com
Even before AI got involved, I found style and grammar checking software intrusive, because it disrupts the actual human art of writing by assuming that no one would have a use for nonstandard prose or poetic use of language. Something marking three sentences in a row starting with the same word is disruptive to poetic construction, and as I point out in this thread in Poetry, poetic usage enhances prose.
 

Lundgren

Super Member
Registered
Joined
Oct 10, 2021
Messages
587
Reaction score
988
Even before AI got involved, I found style and grammar checking software intrusive, because it disrupts the actual human art of writing by assuming that no one would have a use for nonstandard prose or poetic use of language. Something marking three sentences in a row starting with the same word is disruptive to poetic construction, and as I point out in this thread in Poetry, poetic usage enhances prose.
That's why I said "if it was unintentional" :) If that was my intention, I just click that it should ignore that specific instance. If the software doesn't have that option, then I would probably not bother with it at all, even if I can ignore the different color codes myself.
 

RichardGarfinkle

Second Edition and Second Laughter
Super Moderator
Moderator
Kind Benefactor
Super Member
Registered
Joined
Jan 2, 2012
Messages
12,565
Reaction score
6,995
Location
Swimming in the Shallows
Website
www.richardgarfinkle.com
That's why I said "if it was unintentional" :) If that was my intention, I just click that it should ignore that specific instance. If the software doesn't have that option, then I would probably not bother with it at all, even if I can ignore the different color codes myself.
I find it disruptive. When I'm writing, I would like my software to make that easier, rather than insisting I click on whatever it was coded to alert me about because someone read a manual of style and thought they knew what writers needed.
 

lizmonster

Your Friendly Neighborhood Spider-Mom
Moderator
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
20,946
Reaction score
49,187
Location
Massachusetts
Website
elizabethbonesteel.com
I find it disruptive. If I'm writing I would like my software to make that easier rather than insist that I click on what it was coded to alert me about because someone read a manual of style and thought they knew what writers needed.

I run a grammar check, but mostly because it sometimes finds typos that spell check doesn't. :)
 

Lundgren

Super Member
Registered
Joined
Oct 10, 2021
Messages
587
Reaction score
988
I run a grammar check, but mostly because it sometimes finds typos that spell check doesn't. :)
I don't know if I have a mild case of dyslexia or something, or if it is just human, but I end up with a lot of spelling errors that happen to be another word. A grammar check tends to catch a bunch of those, as the "alternative word" tends to be grammatically way off. 😅
🙄
 

Jazz Club

It's not wrong, it's dialect
Super Member
Registered
Joined
Dec 18, 2021
Messages
4,389
Reaction score
7,215
Location
Northern Ireland
I don't know if I have a mild case of dyslexia or something, or if it is just human, but I end up with a lot of spelling errors that happen to be another word. A grammar check tends to catch a bunch of those, as the "alternative word" tends to be grammatically way off. 😅
🙄
Yeah that's true, it's useful for that at least. I don't like how it tries to standardise my sentences though.
 
  • Like
Reactions: Lundgren

VeryVerity

Aliens ate my homework
Super Member
Registered
Joined
Aug 5, 2011
Messages
417
Reaction score
148
Location
UK
Yeah that's true, it's useful for that at least. I don't like how it tries to standardise my sentences though.
Grammar checkers are frustrating like that.

I've got to the point where I don't even see the red lines in Scrivener for all my British-spelled words.
ProWritingAid at least picks up that I'm using British English, except I leave it turned off most of the time because it annoys me by highlighting things that aren't errors, or are a deliberate choice.
I only use PWA free, but I like it for picking up grammar things I hadn't spotted, and for things like repeated sentence starts etc. I don't bother with the rewriting function (tried it a few times before I realised what it was, and didn't like it).
 
  • Like
Reactions: Lundgren