Helping People Understand What's Really Going On With "AI"

RichardGarfinkle

Rereading and Rewriting
Mod Note:

This is a thread for brainstorming how to help people who may be caught up in the hype and over-promise of LLMs and related software. The goal is to put our writing skills to work to create a bunch of different manners of illuminating and discussing the reality underlying the image of the software.

Hopefully, we'll be able to clear away what won't work and synthesize what can work.

This is not a thread for complaining and while expression of frustration is inevitable and often cathartic, don't drag it out. One person's catharsis is another person's PTSD trigger.

We're not after a single, unified approach. There is no one method that works for everyone. And we should work to make sure that we speak honestly and don't join the over-claimers and the AI evangelists in dishonest methodologies.

Each of us has areas and methods of writing that we do well. As the FAQ for AIs on this board shows, I tend toward verbose explanations, which is fine for most of my writing since I do novels, science and math popularizations, and textbooks. But my short stuff tends to be poetry and humor.

But I've seen a lot of you do very well at quick explanations and illuminations. If enough of us work at this, I think we'll be able to make ways to get through to the people who care about the reality of this situation.
 

worrdz

Type typity typing in flyover country
Assume that anything you share with an LLM will become part of the material used for its continued training and will not remain confidential in any way.

Therefore, never share anything secret, sensitive, or for which you have signed an NDA.
 

lizmonster

Possibly A Mermaid Queen
The pithiest expression I've seen to describe AI is "spicy autocomplete."

It is, essentially, autocomplete - extremely sophisticated and impressive, but with as much understanding as your phone when it suggests completing your sentence with "tomorrow" when you want to say "together."

It can't fact-check, because it doesn't know what a fact is. It doesn't answer questions, because it doesn't know what an interrogative is. It's a big Word Salad database.
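To make the "spicy autocomplete" point concrete, here's a toy sketch (the mini-corpus is invented for illustration): a bigram model that "predicts" the next word purely by counting which word followed which in its training text. That counting, scaled up enormously, is all the "understanding" there is.

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which,
# then "predict" the most frequent follower. No meanings, no
# facts -- just frequencies over training text.
corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict(word):
    """Return the most frequent next word seen in training, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # "cat" -- it followed "the" most often
print(predict("fish"))  # None -- never seen mid-sentence, so no guess
```

An LLM is this with billions of parameters and far longer context, but the same principle: pattern continuation, not comprehension.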
 

Brigid Barry

Taking a break.
Not 100% sure I understand the purpose (aka, what is the question being answered) but I would like to try.

My suggestion would be to look carefully at the many examples of LLM failure (the classic "strawberry only has two r's" example) and understand the consequences of using an LLM, such as failing an assignment in an academic setting, privacy issues (as mentioned above) in a professional setting, or copyright issues in a publishing setting, before committing to using one. An example of several of these concerns that I found amusing was the attorney who used AI to compose a brief but didn't fact-check it before submitting it to the judge.

And I cackled at "spicy autocomplete".
 

worrdz

Type typity typing in flyover country
I asked ChatGPT to describe my house. I gave it the address and everything. It described features that aren't there.
 

Unimportant

Pushing buttons. Usually other people's.
LLMs are like they stole all the lego and meccano and log-cabin-kit parts from kids, and then burgled the local lumberyard and hardware store, and then when you ask them to make the pieces into a swiss chalet, they put them together using random sections of blueprints they stole from architects. The building is unlikely to actually have a roof and a door and retaining walls, let alone a functional toilet, but what the hell, it'll be a kind-of building-shaped house-curious chalet-adjacent structure. (Assuming that, yanno, you're happy with a non-functional non-chalet made from stuff stolen from kids and neighborhood shops and professionals alike.)
 

be frank

not a bloke, not named frank
It depressed me how many people I know (including usually conscientious writers) jumped straight into using LLMs without a second's pause. "Oh, I'm just using it to compose boring emails for me, not to write my novels or anything." Or, "I'm just using it to help me brainstorm; there's no harm in that!"

I've found that in most cases, the moral arguments against "AI" have no effect, even when the person knows better (convenience wins out). BUT what has seemingly had a tangible impact is explaining the huge environmental cost. I've found most people are unaware of the water and electricity usage involved in LLMs, and they're horrified when they hear about it.
 

JoeySL

Writing in Circles
Safety concerns are huge for me. When people in my vicinity ask ChatGPT for recipes or tips on healthy foods, measurements for ingredients, and the like, I always try to raise awareness that LLMs are not search engines; they don't know things, they just autocomplete sentences so that they're grammatically correct. It doesn't seem to click.
 

Naja Noir

I just wanted to say that it is a breath of fresh air to see threads like this. I'm new here, and the reason I began shopping for a new writing home is that my main forum has begun to allow AI writing and even critiques.

At first it was simply a subforum dedicated to speculating about it. Then more and more threads popped up about how it cures writer's block. We have competitions there, and the rules for one of them were left open on purpose to allow, but discourage, AI writing.

A few people have begun leaving AI critiques, and they refuse to say anything about it, because, "who's it really hurting?"

The whole thing makes me sick and so very sad. I'm glad to find that not every forum is like that.

I'd love to know the right way to appeal to people on this issue, what can actually be said to get people to care?
 

Silenia

One thing I've found that also helps when it comes to "AI" and folks who do know a bit about it and/or use at least one form of it, is being specific. LLM is one subset of AI. Image generation is another. Image detection/recognition is a third. There is overlap, of course, but some of the issues are specific to one type or another. Trying to convince someone who jumped on the AI art bandwagon with arguments against LLMs is just going to get you (unfairly) put in the "oh look, another AI doomsayer, ignore 'm" camp, and so on.

Another thing that helps a bit against being treated as "AI doomsayer" is acknowledging that yes, there are a couple of actually genuinely beneficial AI uses (pretty much all on the image recognition side--some forms of cancer screening, for example), but those are currently very much the exceptions to the rule and even then they need to be used alongside human judgement.

Combined with the moral, legal and environmental issues with AI, most of it just...is not worth it for personal use yet. Who knows, one day we might reach a point where there are LLMs and image generating AIs trained exclusively on donated and no-longer-copyrighted materials, with a seriously reduced environmental cost, and with actually built-in safeties to ensure AI-produced materials can be easily detected as AI-generated and clear communication about its limits. At that point, the equation might be different--for some, but probably never all, uses.

But the current generation of it, trained on heaps of work without the rights-holders' consent while slurping up massive amounts of water and energy and merrily pretending to be the equivalent of human judgement and output? Definitely not.
 

worrdz

Type typity typing in flyover country
Unimportant said: "LLMs are like they stole all the lego and meccano and log-cabin-kit parts from kids […] it'll be a kind-of building-shaped house-curious chalet-adjacent structure."
And the people who live there have too many fingers…
 

Alessandra Kelley

Sophipygian
Here's a tip for searching on Google if you want to avoid AI content:

Include before:2022 (without quote marks, and note there is no space) in your search string. It will limit results to things posted before 2022.

This is also useful if you should want to demonstrate to someone how very different search results have become now that AI slop is overflowing the internet.
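For anyone curious what that looks like as an actual query, here's a minimal sketch (the search terms are just an example I made up):

```python
from urllib.parse import quote_plus

# Sketch: build a Google search URL using the before: operator
# to filter out largely post-2022, AI-flooded results.
def pre_ai_query(terms, year=2022):
    return "https://www.google.com/search?q=" + quote_plus(f"{terms} before:{year}")

url = pre_ai_query("how to write a query letter")
print(url)
```

Typing the same string straight into the search box works just as well; the operator is part of the query text itself.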
 

BeautifulRoses

When people compare AI to human art or writing, one needs to remember that they are talking about machine code versus a product of an imagination, the former being something that has been nothing but machine code since its formation.
 

Silenia

In general, there's a fairly big difference between generative AI (LLMs, image creation, etc.) and non-generative AI of which the variety of AI-based detection programs are part. They're all based on pattern detection. However, there's a huge gap between "detecting whether a pattern is present/which pattern is present" (non-generative AI) and "predicting how the pattern should continue/calculating a response to a pattern" (generative AI).

Non-generative AI has some genuinely beneficial uses, is somewhat less riddled with ethical problems (though still not quite free of them either) and can, if properly trained and used within its appropriate context^1, work quite well. Because what it's doing is what computers are good at: high-speed, high-volume analysis of whether specific input data does or does not fall within specific ranges.
It still requires human supervision, and can be used in a number of incredibly harmful ways alongside some beneficial uses, but it's not exclusively problematic the way generative AI thus far is.

^1 If you've got, say, a hypothetical AI that's exclusively trained to determine whether a picture most likely contains a bird or a cat, and you instead feed it images of a desk chair, a school of fish, Jupiter, or other nonsense, obviously the output will be utter garbage, and full of false positives. That one's obvious. Less obvious are inherited bias from the datasets it's trained on (which doesn't need to be deliberate) and finding-and-mislabeling patterns.

E.g. if you train an AI on a lot of pictures of sheep in pastures and train it that those should be labeled "sheep", and of fluffy dogs in buildings or urban terrain and those should be labeled "dog"? It's likely it will 1. start labeling empty pastures as "sheep"; 2. label fluffy white dogs in a pasture as "sheep" and 3. label sheep in buildings/urban settings as "dogs". Because AI doesn't actually know what a sheep or a dog is as a concept. It just knows that certain arrangements of pixels should be labeled one way or the other, and that the pixel pattern formed by the thing we call a "pasture" is present in all those pictures it was told to label "sheep".

Some of those failures (poodles in a pasture being labeled sheep; sheep in a shearing building being called dogs; construction cranes being labeled giraffes because the dataset had a more-than-representative number of giraffes in there) are pretty easy to spot. Others, less so.
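That sheep/dog shortcut can be shown with a deliberately silly toy classifier (all features and numbers invented for illustration): a nearest-neighbour model trained on (background, fluffiness) pairs, where the background ends up deciding everything.

```python
# Toy illustration of "shortcut learning": a 1-nearest-neighbour
# classifier on (background, fluffiness) features, mirroring the
# sheep-in-pastures / dogs-in-cities example.
# background: 0.0 = pasture, 1.0 = urban; fluffiness: 0.0-1.0
train = [
    ((0.0, 0.9), "sheep"),  # fluffy sheep in a pasture
    ((0.0, 0.8), "sheep"),
    ((1.0, 0.9), "dog"),    # fluffy dog in the city
    ((1.0, 0.7), "dog"),
]

def classify(features):
    """Label by the nearest training example (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], features))[1]

# A fluffy white dog photographed in a pasture: the background wins.
print(classify((0.0, 0.9)))   # "sheep"
# A sheep wandering into town gets called a dog for the same reason.
print(classify((1.0, 0.85)))  # "dog"
```

The model never had a "what is a sheep" concept to fall back on; the pasture was simply the strongest pattern shared by everything labeled "sheep".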

In a medical context, for example, there is a risk it will further pre-existing biases. Already, most textbooks and example images of various skin rashes default to what that rash looks like on white skin, and already it's causing under-detection of a number of health conditions in folks with non-white skin.

But if you were to train an AI on that pre-existing dataset without considering said pre-existing issue? You'll end up training a detection AI that gets very good at recognizing specific rashes... on white skin.

So can it be useful? Yes, if used in addition to human eyeballs, and if there's sufficient understanding of what AI *actually* does to ensure the datasets it's trained on are not giraffed; unintended patterns it picks up on are corrected; and so on.

And the more generative AI gets praised as intelligent, the fewer folks that seem to recognize what AI of any kind actually is or does.

(Cross-posted from here on @RichardGarfinkle's request)