Writing Believable Artificial Intelligence

Vida Paradox

Code Surfer
Super Member
Registered
Joined
Aug 23, 2018
Messages
183
Reaction score
27
Location
Drifting in Space
I'd say this question is actually indeterminate: because we don't know All The Things, we can't say it's not possible.

If it's not impossible, it's possible! :p

Also, I've always wondered why people focus on the complex stuff like the human brain. Why not start with something much simpler, like a worm's brain, and work our way up from there?

Unless I just haven't searched enough, that is. Has anyone out there done an extensive study of how a worm's brain works?

Do worms have brains?
 

OldHat63

Banned
Flounced
Joined
Aug 21, 2018
Messages
404
Reaction score
30
Location
Lost in the woods of TN and prefer it that way
It's all in the execution, and all of us write uniquely (and, I'm sure, beautifully :)).

From your lips/keyboard to whatever gods or goddesses may be listening/reading, Liz.

I just wish getting the stuff swimmin' around in my head out into the "real" world was as easy as coming up with it in the first place. I'd have it made, if that were the case.

Do worms have brains?

Google says yes. And that's about as much of a study of it as I've ever made. lol


O.H.
 

Dennis E. Taylor

Get it off! It burns!
Kind Benefactor
Super Member
Registered
Joined
Jul 1, 2014
Messages
2,602
Reaction score
365
Location
Beautiful downtown Mordor
They have completely mapped every single cell of the flatworm. They have, in fact, mapped the complete growth of the flatworm from a single cell to the complete adult, and can account for when each cell divides and what it becomes.

They still can't duplicate the flatworm's behaviour in software, because being able to identify every neuron (reductionism) doesn't mean being able to describe how the neurons interact to produce specific behaviours (emergentism).
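For what it's worth, that reductionism-versus-emergentism gap is easy to demonstrate in code. Here's a toy sketch, with nothing to do with real worm neurons (the five-neuron ring and its firing rule are entirely made up): each "neuron" follows one simple, fully known rule, yet the travelling pulse only shows up when you run the whole network together.

```python
def step(state, weights):
    """Advance every neuron one tick: a neuron fires (1) if its
    weighted input from the previous tick is at least 1."""
    n = len(state)
    return [int(sum(weights[i][j] * state[j] for j in range(n)) >= 1)
            for i in range(n)]

# Five toy neurons in a ring, each listening only to its predecessor.
n = 5
weights = [[1 if j == (i - 1) % n else 0 for j in range(n)] for i in range(n)]

state = [1, 0, 0, 0, 0]           # one neuron starts active
history = [state]
for _ in range(n):
    state = step(state, weights)
    history.append(state)

# Emergent behaviour: a pulse travels around the ring and the network
# returns to its starting state after n ticks -- something you only see
# by running the whole network, not by inspecting one neuron's rule.
assert history[n] == history[0]
```

Knowing every neuron's rule (the `step` function) is the easy, reductionist half; the travelling pulse is a property of the wiring as a whole.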
 

OldHat63

Banned
Flounced
Joined
Aug 21, 2018
Messages
404
Reaction score
30
Location
Lost in the woods of TN and prefer it that way
They still can't duplicate the flatworm's behaviour in software, because being able to identify every neuron (reductionism) doesn't mean being able to describe how the neurons interact to produce specific behaviours (emergentism).

Yup... because a brain doesn't run on software. There's no coding in there that you can alter without changing the architecture of the "device". A brain is both "hardware" and "software" combined.

You can remove or damage fairly large portions of the human brain, and it will still function. But damage or delete a portion of a computer program, and it'll generally stop working altogether.

The same goes for a computer chip; it can only do what it's supposed to if it's intact... or at least largely intact, if there's some redundancy built in.

There are more than a few people who've managed to not only live, but carry on like anyone else, with only half a brain, however.

But try cutting your computer in half and see how well it does... even if the portion of the memory that contains the operating code is still in one piece. What you'll most likely have is a pair of glorified paperweights.
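That brittleness-versus-graceful-degradation contrast can be sketched in a few lines. This is purely illustrative (the "memory units" and the majority vote are invented for the example), but it shows the design idea: with enough redundancy, a system can lose half its parts and still answer.

```python
import random

def recall(units, query):
    """Answer by majority vote over whichever memory units survive."""
    votes = [u[query] for u in units if query in u]
    return max(set(votes), key=votes.count) if votes else None

# Ten redundant memory "units", each holding the same fact.
units = [{"capital_of_france": "Paris"} for _ in range(10)]

random.seed(0)
half = random.sample(units, 5)    # "cut the brain in half"

# Graceful degradation: half the units are gone, the answer survives.
assert recall(half, "capital_of_france") == "Paris"
```

A typical program is more like a single chain of steps, where removing any one link kills the whole thing; the brain is much closer to the voting ensemble above.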



O.H.
 

MJHeiden

Registered
Joined
Sep 25, 2018
Messages
11
Reaction score
0
On a somewhat tangential note: people talk about looking for life out in the grandness of space, and so far we haven't detected anything, nor many habitable planets. What if life does exist out in the great cosmos, and it's just synthetic in nature? AI or faux-intelligent robots that we create on Earth are limited because we are limited in our own knowledge of how our brain and consciousness work. Until we understand how our own mysterious melon works, I don't expect we'll be creating self-aware artificial intelligence anytime soon.

However, considering we are creating machines that learn, adapt, and figure out solutions to problems we don't know how to solve (nor do we understand how the AI came up with the solutions; they just work), I do think it's possible modern AI will manage to create a new organism we didn't know how to create before. It'll probably come from its need to solve a particular problem, one that requires an ever-changing and adaptable way of thinking.

Either way, I think nature in the rest of the universe has already figured it out, and inorganic lifeforms already exist, living out their almost-eternal lives with their own distinct motives, goals, and mindsets.
 

knight_tour

Fantasy Tourist
Super Member
Registered
Joined
Feb 27, 2009
Messages
957
Reaction score
62
Location
Rome, Italy
Website
tedacross.blogspot.com
I'm not even sure the AI in my novel qualifies as an AI. The 'mad scientist' genius who created it wanted to capture all the various forms of critical data from his own brain and then have an 'operating system' that could use this data to mimic the workings of a human brain as much as possible. He spent years getting closer and closer and pulled it off just at the moment that he had a stroke and died. So now he lives on in the web. He has to wall off his initial code so that as the rest of him expands throughout the web he is able to also maintain a part of him that can still feel human. His ultimate goal is to wait until technology reaches a stage where his DNA could be cloned and his mind data reseeded into a fresh, young body. Yeah, I know, all impossible, right? Except human 'experts' have been claiming things are impossible for centuries only to be proven wrong again and again. Heck, I recall chess experts not long ago claiming no chess computer would ever be able to beat the elite grandmasters, and that claim didn't manage to survive long at all.

Is my AI an exact replica of its human originator? Of course not, but then we all change more often than we realize throughout our own lives, so even coming close to mimicking a person can feel pretty darn close to being that person. If you get brain damage and are not quite like yourself anymore, does that mean you are no longer you? The son and wife of the scientist both wrestle with these issues and deal with them in their own ways.
 

knight_tour

Fantasy Tourist
Super Member
Registered
Joined
Feb 27, 2009
Messages
957
Reaction score
62
Location
Rome, Italy
Website
tedacross.blogspot.com
That depends on your definition. I've yet to see anything that withstands close scrutiny. Most people use "AI" to mean "source of information that can parse a sophisticated decision tree really really fast." Of course, I haven't seen--nor heard!--of it all. :)

I would count this -- https://www.theguardian.com/technol...on-program-teaching-itself-to-play-four-hours

I also find it interesting that the reason chess experts believed no computer would ever be able to beat the best human players was because they believed human 'intuition' held the ability to see unusual circumstances beyond the capability of machine logic to comprehend. Instead what happened was they discovered that brute calculation taken to the extreme ends up mimicking intuition.
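That "brute calculation mimicking intuition" point is essentially minimax search. A minimal sketch (the game tree here is hand-made, not real chess): the program has no judgment anywhere, it just enumerates every line of play to the end.

```python
def minimax(node, maximizing):
    """Exhaustive game-tree search: no intuition anywhere, just
    complete enumeration of every line of play."""
    if isinstance(node, int):                     # leaf: a position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A hand-made two-ply "game": the maximizer picks a branch, then the
# minimizer picks the leaf that's worst for the maximizer.
tree = [[3, 12], [2, 4], [14, 1]]

# Branch values after the opponent's best reply: 3, 2, 1. Choosing the
# first branch looks like positional "judgment"; it's really just brute
# calculation of every outcome.
assert minimax(tree, True) == 3
```

Scale this up with fast hardware and pruning tricks and you get play that looks uncannily intuitive, which is exactly what happened to the chess experts' prediction.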
 

BT Lamprey

Super Member
Registered
Joined
Sep 22, 2018
Messages
63
Reaction score
14
I recommend the Bobiverse novels if you're interested in human-like AI.
 

LesFewer

Super Member
Registered
Joined
Oct 1, 2018
Messages
87
Reaction score
4
Here's my take on it. There's this Israeli historian Yuval Noah Harari who has written a book, Homo Deus: A Brief History of Tomorrow. He does TED talks and has been interviewed on a bunch of shows, I found him through YouTube.

The way he describes AI is this: a huge amount of intelligence but no consciousness. He says humans and other mammals like dolphins and dogs have consciousness; consciousness is our emotions, empathy, etc.

So we humans can make intelligent things, but we can't make things with consciousness, at least not yet.

To Harari, intelligence can be as simple as an algorithm. There are many algorithms today that are making decisions for us, and we don't really know exactly how they are making them. One place with a lot of algorithms is the finance industry; sometimes these algorithms will sell off stocks even when a human can't see a rational reason to do so.

AI is a mystery, and that won't improve as time passes.

So one take might be an AI being a sociopath, or maybe one smart enough to fake having a consciousness.
 

Laer Carroll

Aerospace engineer turned writer
Super Member
Registered
Temp Ban
Joined
Sep 13, 2012
Messages
2,476
Reaction score
266
Location
Los Angeles
Website
LaerCarroll.com
The best books I've ever read with convincing artificial humans were the Cassandra Kresnov series by Joel Shepherd.

https://www.amazon.com/gp/product/B00CIVJWFI/?tag=absowrit-20
_______________________________​
My reading of AI research is that there are two basic kinds, sometimes labeled general AI and specialized AI.

The first looks into ways to duplicate ALL human qualities, especially including emotion and consciousness. There is some research into that, but it's pretty slow and limited because "modern" psychology is still very primitive. (We're still not perfectly clear on just where and how memory is stored, for instance.)

The second is where most time and money is spent. It has a lot of successes and increasingly is a selling point of products. Google Translate for instance recently improved dramatically because of AI research. It is still imperfect. Which is what we'll see in all AI products: imperfect but useful.

But for those of us who want to write convincing general AI, the main tactic is just to show it in action without a lot of (or maybe ANY) explanation. Don't slow down the action, for that will give our readers the chance to examine the illusion we're building too closely.
 

MaeZe

Kind Benefactor
Super Member
Registered
Joined
Jun 6, 2016
Messages
12,748
Reaction score
6,435
Location
Ralph's side of the island.
[...]
The first looks into ways to duplicate ALL human qualities, especially including emotion and consciousness. There is some research into that, but it's pretty slow and limited because "modern" psychology is still very primitive. (We're still not perfectly clear on just where and how memory is stored, for instance.)
You might find research into the evolution of emotions and moral behavior useful. That's another aspect of the human brain where research is opening a lot of doors. It's separate from the brain's mechanism of consciousness but closer to AI, in my opinion, than stored memory and learning capability.
 

jmurray2112

Super Member
Registered
Joined
Oct 1, 2018
Messages
74
Reaction score
5
Location
Northern CA
How do I limit the growth of an AI?

I can see the need to parse through the evolution of AI as we see it forming around us, but the OP's initial question could, in my opinion, be answered by addressing how the AI is fed.

So, maybe bandwidth? Connectivity? Server issues?
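If anyone wants a concrete mechanism for that in-story, a classic way to cap how fast a system can ingest anything is a token bucket. A minimal sketch, assuming the fictional limit is raw bytes per second (the class name and the numbers are invented for illustration):

```python
import time

class BandwidthCap:
    """Token-bucket limiter: the AI may ingest at most `rate` bytes per
    second, with a one-time burst allowance of `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

cap = BandwidthCap(rate=1024, burst=2048)
assert cap.allow(2048)        # the initial burst is allowed through
assert not cap.allow(2048)    # asking again immediately is refused
```

Story-wise, the interesting part is what the AI does when `allow` keeps returning `False`: queue, prioritize, or start looking for a way around the cap.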
 

knight_tour

Fantasy Tourist
Super Member
Registered
Joined
Feb 27, 2009
Messages
957
Reaction score
62
Location
Rome, Italy
Website
tedacross.blogspot.com
I can see the need to parse through the evolution of AI as we see it forming around us, but the OP's initial question could, in my opinion, be answered by addressing how the AI is fed.

So, maybe bandwidth? Connectivity? Server issues?

I dealt with this by having the AI 'cordon off' its original code so that it could always see its original state distinctly from its expansion.
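A minimal sketch of how that cordoning might look, assuming the AI checks its walled-off core against a stored fingerprint (everything here, names included, is invented for illustration):

```python
import hashlib

class CordonedAI:
    """Keeps its original code walled off and read-only, so the expanding
    self can always be compared against the original state."""

    def __init__(self, original_code: bytes):
        self._core = original_code                          # never mutated
        self._fingerprint = hashlib.sha256(original_code).hexdigest()
        self.extensions = []                                # growth goes here

    def learn(self, new_code: bytes):
        self.extensions.append(new_code)                    # expand freely

    def core_intact(self) -> bool:
        """True if the walled-off core still matches its fingerprint."""
        return hashlib.sha256(self._core).hexdigest() == self._fingerprint

ai = CordonedAI(b"def feel_human(): ...")
ai.learn(b"def run_the_power_grid(): ...")
assert ai.core_intact()       # the expansion never touched the core
```

The design choice this captures: growth happens in a separate space, and the "still feels human" part is verifiable at any time, which is a nice source of tension if the check ever fails.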
 

indianroads

Wherever I go, there I am.
Super Member
Registered
Joined
Mar 4, 2017
Messages
2,372
Reaction score
230
Location
Colorado
Website
indianroads.net
[...]
But for those of us who want to write convincing general AI, the main tactic is just to show it in action without a lot of (or maybe ANY) explanation. Don't slow down the action, for that will give our readers the chance to examine the illusion we're building too closely.

For me, the intriguing aspect of fictional AI is the concept of consciousness and self determination.

IMO being conscious = being self aware = self preservation. So, a lot of creatures must be conscious to some degree because they do their best to stay alive. There are varying degrees of this of course - I recall there was some research done with plants back in the 60's and 70's, and some researchers said there was some indication that they were aware of their surroundings and creatures that might cause them harm. Back then, the researchers could have been on an acid trip though, so who knows what's real and what isn't.

Currently, our computers don't appear to be self-aware. My desktop computer won't mind if I toss it away in favor of a newer model. What would happen if the machines around us became interested in self-preservation? It would probably be a bad day for us, since they control so many aspects of our lives. But how could that happen? Is it possible to encode consciousness? We might be able to mimic it, but the real thing? <shrug>

But this raises the question: if WE are truly conscious and self-aware, do we have free will? Or are we just coded such that we struggle to stay alive? If the latter, then the purpose of our programming is simply to push our DNA on to future generations; the technology and all the other stuff we've created only exist to support that goal. If the former, how do we know we actually have free will?
Determinism
How different am I from my desktop computer? We're both coded to operate and make choices based on input.

So perhaps this is why AI is so often portrayed as supercharged humans. Computers have the potential to think faster and hold more data in storage than we can. If they ever became self-aware, they have the potential of becoming Human 2.0, and we would go the way of Homo habilis.
 

Teinz

Back at it again.
Super Member
Registered
Joined
Oct 20, 2010
Messages
2,440
Reaction score
186
Location
My favourite chair by the window.
For me, the intriguing aspect of fictional AI is the concept of consciousness and self determination.

IMO being conscious = being self aware = self preservation. So, a lot of creatures must be conscious to some degree because they do their best to stay alive. There are varying degrees of this of course - I recall there was some research done with plants back in the 60's and 70's, and some researchers said there was some indication that they were aware of their surroundings and creatures that might cause them harm. Back then, the researchers could have been on an acid trip though, so who knows what's real and what isn't.

Not directly related to AI, but on plants.

https://news.nationalgeographic.com...volution-mabey-ngbooktalk/?user.testname=none

So plants try to preserve themselves, they remember, they form relationships, they even act altruistically. Conscious? Self-aware?
 

Marumae

Queen of Quixotica
Super Member
Registered
Joined
Oct 12, 2010
Messages
255
Reaction score
19
Location
Fantasia
Website
www.instagram.com
This thread, I will say, has been extra helpful in the planning of a novel that will have (though not always feature) A.I. in it. My first foray into science fiction, and I must say y'all have given me a hell of a lot to think about, as well as a plethora of books to research. Nothing more to contribute other than this: the thread has helped more than one person in writing their novel!
 

Unpolished

Super Member
Registered
Joined
Nov 18, 2018
Messages
56
Reaction score
3
Location
AZ, USA
In my story they can build a really good fake. The truly self-aware systems emerge at random from self-modifying code, neural nets, and such (not human systems; I assume there is stuff we don't have names for). They don't understand what makes the self-aware systems tick. The AI tend to die, either from what appears to be suicide for unknown reasons or from the risk-taking that comes with being self-aware. With certain loopholes jumped through they can become full citizens, though some sort of deal may need to be made with the hardware's owners if they don't choose to migrate. Usually the deal would be to keep up your old job; a new AI takes expensive time to train.

Don't ask me why they can't be backed up. I don't think I've made up a reason I can believe yet.
 

knight_tour

Fantasy Tourist
Super Member
Registered
Joined
Feb 27, 2009
Messages
957
Reaction score
62
Location
Rome, Italy
Website
tedacross.blogspot.com
Just finished reading Evolution's Darling by Scott Westerfeld, and it's heavy with AI in various interesting takes. Might be worth checking out. The main character 'Darling' is an AI.
 

Axl T

Registered
Joined
Nov 24, 2018
Messages
9
Reaction score
0
Location
Canada
Great discussion; it has my juices flowing about the AI philosophy for my WIP.

We know pretty soon AI will be in everything, but we don't assume our fridge will be getting emotions, so the assumption that intelligence has anything to do with being human is just that: an assumption. And yes, brains are biological circuits, but their equation to computers ends there. They work completely differently, with different origins and different purposes. Computers can be anything; humans can only be human. So writing "anthropomorphic" AI means limiting the computing potential to human parameters. It's not something anyone would ever want to exist... we always want them to be better, faster, smarter.

Because humans were created by particles surviving collisions over billions of years, the best we could hope for is to simulate that. Until you simulate the entire universe and the processes that created humans, you are talking about a Jesus/creation myth: human life spawning from nothing.

The problem I have with many AI stories is: why does the AI suddenly start to care? It's struck by lightning, has something spilled on it, its builder is threatened... so what? Why would that cause extremely complicated programming/simulation to appear out of nowhere?

When you have something like Westworld, or the real Sophia AI, you have a plausible reason for emotions being simulated, and you can get into the really interesting questions: how far do we want this tech to go, why are silicon computer emotions less real than biological computer emotions, and are they a good idea at all?
 

Lehssner

Registered
Joined
Dec 8, 2018
Messages
28
Reaction score
1
I think the most realistic way is to just have it built into their code somehow. I feel if humans and AI were to coexist that would make the most sense. Humans knew they were creating something potentially incredibly powerful so they put in some sort of regulator that ensures AIs won't be able to do what they want.
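A minimal sketch of what such a built-in regulator might look like, assuming it's a hard-coded check in front of every action the AI takes (the action names are made up for illustration):

```python
# Actions the builders decided the AI must never take.
FORBIDDEN = {"self_modify", "replicate", "disable_regulator"}

def act(action):
    """Every action passes through the regulator before it executes."""
    if action in FORBIDDEN:
        raise PermissionError(f"action '{action}' blocked by regulator")
    return f"executed: {action}"

assert act("answer_question") == "executed: answer_question"

try:
    act("replicate")
    blocked = False
except PermissionError:
    blocked = True
assert blocked                # the hard-coded check held
```

For a story, the obvious pressure point is that the regulator is itself code: who can change `FORBIDDEN`, and what happens when the AI figures that out.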