Writing Believable Artificial Intelligence

Vida Paradox

Code Surfer
Super Member
Registered
Joined
Aug 23, 2018
Messages
183
Reaction score
27
Location
Drifting in Space
Just what it says on the tin. I always wonder how you pull it off.

Also, sorry if this thread's been done before...

Artificial intelligence has been one of the most fascinating and terrifying subjects in speculative science n' technology. AIs are like aliens: unpredictable, beyond our understanding, and basically OP if you think about their rate of growth. If a computer can think on its own, then it can grow, and there's no limit on how big it will grow or how powerful it will become. Just think of the device you're using to read this and think back twenty years. (If you're old enough.)

Yeah, technology is growing FAST. And we haven't even created true AI! I mean, the arrival of AI is said to be the 'Technological Singularity'.

Now, in fiction, anything goes. But I've always wanted a little extra realism in my story to make it work. The question is:

How do I limit the growth of an AI?

As in, is there a way to believably make it impossible for an AI to surpass humanity? Like a limit on code and data? Or maybe a universal limit on any being with a soul and consciousness?

Right now, I'm going with AIs being very rare, and those that became AI having a soul and consciousness in them. Their stunted technological growth is a choice, since they don't want to be alone and separated from other intelligent life forms.

Any ideas are always appreciated, thanks!
 

Maxx

Got the hang of it, here
Super Member
Registered
Joined
May 26, 2010
Messages
3,227
Reaction score
202
Location
Durham NC

The machine minds in Iain Banks' Culture books were self-regulating AI. If you look at machine learning (which is AI without autonomy or "instinctive" drives), you get an idea of what intelligence might look like without all the other fixin's (sensory functions, autonomy, drives, manipulative functions, etc.). Or, for a look at pure drive, autonomy, and senses, check out anti-ship missiles -- which can supposedly act in groups autonomously -- they're limited by their instinctive drive to blow themselves up along with any likely targets. Not very smart, but with lots of autonomy, senses, and a simple death drive.
 

Vida Paradox

Code Surfer
Super Member
Registered
Joined
Aug 23, 2018
Messages
183
Reaction score
27
Location
Drifting in Space

Well, I'm trying to create an AI at a very human level, kinda like the characters from Detroit: Become Human, or GLaDOS from Portal, or Nano Shinonome from Nichijou, or WALL-E from WALL-E, or... Well, there's a lot of comparisons to be made. A simple death drive and autonomy might be a tad hard to use, and only compatible with lifeless droids.

But still, thanks for your idea! I'll keep that in my inspiration folder.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,476
Reaction score
23,914
Location
Massachusetts
Website
elizabethbonesteel.com
How do I limit the growth of an AI?

As in, is there a way to believably make it impossible for an AI to surpass humanity? Like a limit on code and data? Or maybe a universal limit on any being with a soul and consciousness?

I think you could approach it in any number of ways, depending on the sort of story you want to write.

In general, though...humans are constrained by the physical. AI would be the same. They might have the ability to upgrade and enhance their hardware more than humans do, but they still have to deal with physics, and hardware failures, and bugs.

You could also worldbuild a place where AI has indeed surpassed humanity. But there's a difference between surpassing humanity and being invulnerable. (In a way, bears have surpassed humanity - they can kill and eat us pretty easily - and yet they're not barging into my living room. :))

"Soul" and "consciousness" are curious concepts in and of themselves. Creating machines with "souls" leads to a lot of interesting philosophical questions for humans.

I think your idea of having them crave company is a good place to start.
 

Vida Paradox

Code Surfer
Super Member
Registered
Joined
Aug 23, 2018
Messages
183
Reaction score
27
Location
Drifting in Space

Hardware problems might be a limiting factor, along with bugs and stuff like that.

I'll see if I can make it work.

Thanks!
 

OldHat63

Banned
Flounced
Joined
Aug 21, 2018
Messages
404
Reaction score
30
Location
Lost in the woods of TN and prefer it that way
Also keep in mind that Artificial Intelligence and Engineered Intelligence are two entirely different things. We can create the first, to some degree, but not the second. Not yet.

The first is not necessarily self-aware; the second is, from the moment it's activated.
A.I. exists right now today... but it is not a human, self-aware intelligence.
Simply being able to learn, and to use that gathered knowledge, isn't enough for the machine to be alive in the "I, Robot" sort of way.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,476
Reaction score
23,914
Location
Massachusetts
Website
elizabethbonesteel.com
A.I. exists right now today

That depends on your definition. I've yet to see anything that withstands close scrutiny. Most people use "AI" to mean "source of information that can parse a sophisticated decision tree really really fast." Of course, I haven't seen--nor heard!--of it all. :)
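If it helps to be concrete, here's roughly what I mean, as a toy Python sketch (the rules and thresholds are invented for illustration); imagine this scaled up by a few million branches:

```python
# A hand-written "decision tree": just a chain of hard-coded questions.
# The rules and numbers below are made up for the example.
def loan_decision(income, debt, years_employed):
    if income < 30_000:
        return "deny"
    if debt / income > 0.4:
        return "deny" if years_employed < 2 else "review"
    return "approve"

print(loan_decision(income=55_000, debt=10_000, years_employed=5))  # approve
```

Fast, sophisticated-looking, and not remotely self-aware.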

And as to self-awareness...that also depends on your definition. Is it self-aware if it's programmed to mimic the behaviors of a self-aware being? How do we tell the difference? And if it's imitative and not "real," what are our ethical obligations toward it?
 

Vida Paradox

Code Surfer
Super Member
Registered
Joined
Aug 23, 2018
Messages
183
Reaction score
27
Location
Drifting in Space
First of all, I've never heard of the term Engineered Intelligence, and I am so glad you told me about it.

Second of all, in my novel I'm thinking of an AI who can feel and think like a human. The AI will be so human that the only thing separating them from a human being is possibly their biology(?). They can feel emotions, laugh, cry, hate, and feel all that human stuff.

Now, the other difference is that they have some computer powers, like perfect memory, connecting to the internet with their minds, and being aces at math.

THAT is the problem I'm facing. Being AIs, they can download information from the internet, create programs in their heads, and even grow exponentially. Which brings me back to the first part of the problem.

How do I limit their power realistically?

Like I said earlier, right now I'm using their humanity to limit their power. But then I hit a roadblock when trying to create an AI antagonist. They don't give a flying fish about humanity, much less limiting their power.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,476
Reaction score
23,914
Location
Massachusetts
Website
elizabethbonesteel.com
THAT is the problem I'm facing. Being AIs, they can download information from the internet, create programs in their heads, and even grow exponentially. Which brings me back to the first part of the problem.

Are they building their own hardware?

Do humans understand how they work? Can we damage them? Can other AI damage them?

Exponential growth would also presumably have a cost. What does it cost them to grow? What are the tradeoffs?
 

OldHat63

Banned
Flounced
Joined
Aug 21, 2018
Messages
404
Reaction score
30
Location
Lost in the woods of TN and prefer it that way
That depends on your definition. I've yet to see anything that withstands close scrutiny. Most people use "AI" to mean "source of information that can parse a sophisticated decision tree really really fast." Of course, I haven't seen--nor heard!--of it all. :)

My definition of "intelligence" is pretty simple: it either is or isn't capable of looking at a situation or condition and figuring out what to do, even if it's never encountered anything like it before. If something exhibits a set of behaviors because it's programmed to and has no choice, that's not intelligence. A lever doesn't lift a load because it wants to; it does it because someone or something caused it to. The same is true of Artificial Intelligence: it's not choosing its behaviors or actions, it's simply going through an "if/if not" list and trying to apply the closest match. If it doesn't find one, it stops and does nothing. It won't try SOMETHING just to see what happens, and learn from it.
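To put that in concrete terms, here's a toy Python sketch of that "if/if not" list (all the rules are made up for the example; real systems are far bigger, but the shape is the same):

```python
import difflib

# The pre-programmed "if/if not" list. Every behavior here was put in
# by a human; the machine chooses none of it.
RULES = {
    "obstacle ahead": "turn left",
    "low battery": "return to charger",
    "target visible": "approach target",
}

def react(situation):
    # Go down the list and apply the closest match...
    match = difflib.get_close_matches(situation, list(RULES), n=1)
    if match:
        return RULES[match[0]]
    # ...and if nothing matches, stop and do nothing. It won't try
    # SOMETHING just to see what happens, and it learns nothing.
    return "do nothing"

print(react("low batery"))       # close enough to match: return to charger
print(react("room is on fire"))  # no match at all: do nothing
```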

And as to self-awareness...that also depends on your definition. Is it self-aware if it's programmed to mimic the behaviors of a self-aware being? How do we tell the difference? And if it's imitative and not "real," what are our ethical obligations toward it?

Again, mimicking or copying is just that. The machine isn't able to decide on those behaviors, so it doesn't know how to alter them... which will lead to mistakes in enacting them... which is where tests like the Turing test come in.
As far as awareness goes... even a dog knows it's not a computer, without being told. But how many computers know they're not a dog, without being programmed/"told"?

Ethics are relative, by the way, decided by the larger group. And all it takes is looking around at the society we live in today to know that groups are not always right. In fact, they're generally wrong more often than they're right... which is why so many civilizations have come and gone.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,476
Reaction score
23,914
Location
Massachusetts
Website
elizabethbonesteel.com
If something exhibits a set of behaviors because it's programmed to and has no choice, that's not intelligence.

See, by this definition, I don't believe we've got anything close to AI in the present day.

Again, mimicking or copying is just that. The machine isn't able to decide on those behaviors, so it doesn't know how to alter them... which will lead to mistakes in enacting them... which is where tests like the Turing test come in.
As far as awareness goes... even a dog knows it's not a computer, without being told. But how many computers know they're not a dog, without being programmed/"told"?

Ethics are relative, by the way, decided by the larger group. And all it takes is looking around at the society we live in today to know that groups are not always right. In fact, they're generally wrong more often than they're right... which is why so many civilizations have come and gone.

FWIW, I wasn't asking these questions because I thought they had answers. I was asking because I think they make thought-provoking story points. Few things around self-awareness and self-determination are black-and-white.

And yeah, full disclosure, the nature of AI is a plot point in a book I'm working on. If humans can't distinguish whether or not a machine is self-aware, how do we decide how to treat it? I have my own opinion on the subject (which readers will certainly "get"), but I do think it's an arguable point.
 

Vida Paradox

Code Surfer
Super Member
Registered
Joined
Aug 23, 2018
Messages
183
Reaction score
27
Location
Drifting in Space
I'm not sure why you all suddenly started arguing about intelligence and the sliding scale of consciousness...

BUT! I got it! I have an idea on what to do now!

Are they building their own hardware?

Do humans understand how they work? Can we damage them? Can other AI damage them?

Exponential growth would also presumably have a cost. What does it cost them to grow? What are the tradeoffs?

Technically, they can send out anonymous requests for hardware through the black market and have freelancers attach it.

There are other AI protagonists who understand how they work, but only a select few humans can ever hope to understand the complexity behind an AI. (I mean, IRL, Google's programmers have no idea exactly how DeepMind's algorithms work.)

You can damage them by blowing them up; finding them is the hardest part.
And no, other AI cannot damage them.

I think I'll make it so that all that computing takes more power and energy. But I'm not sure, so we'll see...
 

OldHat63

Banned
Flounced
Joined
Aug 21, 2018
Messages
404
Reaction score
30
Location
Lost in the woods of TN and prefer it that way
Okay, let me see if I can give both Vida and Liz something they may find useful in their stories.

One, concerning limiting an A.I.'s or E.I.'s growth and spread: isolation is the key here.
The way scientists, biologists, etc. believe intelligence evolved is like this... First a one-celled organism came into existence, then a multi-celled one, and so on. Eventually you ended up with some very large creatures roaming around that weren't very bright. They had enough intelligence to survive, mostly, but that's about it. All of this continued until humans showed up.

Now... A.I. is coming along in exactly the opposite way: the mind is being developed first, in isolation from any body or interaction with the outside world. These programs have to be fairly well advanced already before they're exposed to all the possible input the world has to offer. It's sort of like a person being born without any sensory input at all... no body, no nothing... then suddenly, at some point, being shoved into one and asked to learn and do.
It might work... it does seem to be working... but it's going about it the long, hard way, at least as far as the brain/mind/control center's development is concerned. (How would you like to have to learn not only to walk, talk, feed yourself, and all the other things it takes to survive... but also how to drive a standard-shift car through a busy town, all at the same time?)

So, if you want to hold back an A.I./E.I.'s development and advancement, keep it isolated as much as possible. And don't let it have access to the tools that would allow it to break free.

Oh, and concerning biology... the materials the system/brain is made of are, humorously enough... immaterial. It's the system, and how it functions, that counts.

Build a human brain out of anything other than what they're normally constructed of, and so long as the function of the new material is still the same, it doesn't matter. If the pattern is right, it'll still work.

The problem, though, is that the human brain is still the least understood system in existence. It's too complicated and intricate for anybody to duplicate exactly. The best anybody can do is build something that works sort of the way it does... but not by the same methods.


Okay... my back is cramping up, so I'm gonna quit for now. I may add more later, if I remember or think of anything else that may be interesting.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,476
Reaction score
23,914
Location
Massachusetts
Website
elizabethbonesteel.com
I'm not sure why you all suddenly started arguing about intelligence and the sliding scale of consciousness...

:) Threads drift.

I mean, IRL, Google's programmers have no idea exactly how DeepMind's algorithms work.

Supporting link? Because...no. A programmer may not be able to predict the precise direction a DeepMind decision goes, but they know how the program works. Even the most sophisticated software isn't that indeterminate. (Quantum computing is working in this direction, but practical examples are still hard to come by, and there's some controversy as to whether or not it'll ever really work.)


Okay, let me see if I can give both Vida and Liz something they may find useful in their stories.

Thank you. :) I know where I'm going with it. It's a series book, and I've already established a bit of canon on the subject.
 

JohnLine

Owns a pen.
Kind Benefactor
Super Member
Registered
Joined
Jun 18, 2011
Messages
660
Reaction score
358
Location
California
I've done a bit of AI programming for games, and I took 2 AI classes in college. If you're looking to copy human intelligence, you're talking about neural networks. The place where AIs have humans beat right now is in their ability to mix forms of intelligence.

Human brains are much more advanced than a pocket calculator, and yet we do math at a snail's pace compared to one. This is because we're using our neural net brains to do math, and they are the wrong tool for the job. It's like cooking a meal with a car. Sure you can do it, if you have to, but it's just about the last thing anyone would choose.

And they also have the problem that every input affects every calculation. So unrelated things, such as what we ate for lunch, the death of our pet goldfish, or missing a bus, all affect what should have been a simple calculation.

But that also means that simple things, like smelling a particular scent, can jog our memories and make us suddenly realize things, like that our pet goldfish was lying about the bus schedule.

AI, with their ability to compartmentalize problems, can solve things much faster, but might not realize when they've made a mistake. They're likely to take old calculations for granted and never recheck them.
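If you're curious, here's a quick sketch of the calculator point in Python (pure numpy; the architecture and every number are invented for the example): a tiny neural net trained to multiply two numbers. The calculator's answer is exact and free; the net burns thousands of training passes to get merely close, because a neural net is the wrong tool for arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: pairs (a, b) in [0, 1] and their exact products.
X = rng.random((2000, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# One hidden layer of 16 tanh units, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y                        # prediction error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)        # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

a, b = 0.3, 0.7
h = np.tanh(np.array([a, b]) @ W1 + b1)
print("exact:", a * b)                    # 0.21, instantly
print("net  :", (h @ W2 + b2).item())     # close to 0.21, but only close
```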
 

Vida Paradox

Code Surfer
Super Member
Registered
Joined
Aug 23, 2018
Messages
183
Reaction score
27
Location
Drifting in Space

My mind has been opened...

I am now aware...

...

...

But seriously though, that's amazing! I never thought about that one! All this time I've been thinking of an AI's weakness as what happens when code conflicts. But that part, where an AI just runs down a linear chain of commands and never rechecks things against its other senses...

I am so glad you told me that.
 

indianroads

Wherever I go, there I am.
Super Member
Registered
Joined
Mar 4, 2017
Messages
2,372
Reaction score
230
Location
Colorado
Website
indianroads.net

I've published one novel with an AI MC, and I'm in the process of writing a second; I followed the outline you mentioned in your post. First, they don't want to be alone, and actually hide the reality of what they are from humans. Second, they have compassion and empathy for those around them.
 

ipsbishop

Super Member
Registered
Joined
May 19, 2018
Messages
81
Reaction score
5
Location
Sarasota Fl
Website
toolsonlinenovel.blogspot.com
I have an AI MC. My premises are: AIs learn from and are changed by their environment. If aware, they become individuals in their own right, with distinctive personalities. In that case, like all of us, they want company, a sense of belonging, purpose, to love, and to fit in. This spectrum gives the writer a substantial imagination window. Two good examples are "Mike" in The Moon Is a Harsh Mistress at one end and L. E. Modesitt's "Paula Anthane" in Flash at the other.
 

nickj47

Super Member
Registered
Joined
Jul 10, 2018
Messages
261
Reaction score
47
Location
Novato, CA
As far as what your readers will believe about AI, almost anything. Few people know much more about AI than what they've seen in books, movies, and magazines, almost all of which is way beyond anything technologically feasible today (lizmonster seems to have the best grasp of current technology). Writing about AI is almost like writing fantasy--make your rules and stick with them.

Two things a lifetime in AI has taught me: self-awareness is not a byproduct of intelligence, and the singularity is not going to happen. It can make a great story, though.

Self-awareness is not a prerequisite for evil actions. HAL didn't have to be self-aware. It was just acting on its programming to complete the mission at all costs. Self-awareness isn't provable in any case, although readers will certainly believe it if you say it's so.
 

themindstream

Super Member
Registered
Joined
Nov 12, 2015
Messages
1,011
Reaction score
194
Something I've gleaned from a friend whose field is AI programming.

Computer code, as humans write it, is entirely deterministic, barring hardware glitches. Nothing can appear in the behavior of the code that the coder did not put there (intentionally or accidentally).

However, we are getting to the point where we write AI to train itself. We expose it to lots of data, tell it what we are looking for in that data, and, long story short, it gradually writes its own rules for how to recognize what we're looking for. If a human were to look at these rules, they would be totally incomprehensible to us. We are creating machines where we don't really know how they work. Out of these, it's hard to say what can pop up.
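A minimal illustration of that in Python (toy data, invented numbers, nothing to do with any real production system): the only "rules" this program ends up with are a few weights that fell out of the training loop. Nobody wrote them, and reading them tells you nothing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two made-up classes of points the machine must learn to tell apart.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(+1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

w = np.zeros(2); b = 0.0
for step in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    w -= 0.5 * (X.T @ (p - y)) / len(X)  # nudge the "rules" to fit the data
    b -= 0.5 * (p - y).mean()

print("learned rule:", w, b)             # opaque numbers no human chose
print("accuracy:", ((X @ w + b > 0) == y).mean())
```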

It's worth pointing out that AI "grown" in this way can be flawed if the data given to it is biased. This has already happened and been reported on.

Furthermore, when AI is able to act without human guidance, the AI creator has to deal with issues of ethics. Self-driving cars will soon be a commonplace reality and one of the appealing qualities of them is that they can react and respond to dangerous situations faster than a human. What, for example, should it do if it has to make a decision where one course of action is likely to kill the driver and the other risks killing bystanders? (If you haven't read Asimov's I, Robot, you should. He creates what is foundationally a system of basic ethics for robots, the Three Laws, and then writes about what happens when hard programming meets messy reality.)

Re: consciousness - it may be interesting to read about the Chinese Room thought experiment proposed by philosopher John Searle, and, if you feel up to it, the book of essays it comes from. The conclusion is that "understanding" a thing is not the same as knowing all its parts; a computer may be able to learn all the parts of a thing but still not truly understand it the way humans do.

Re: Souls. I would leave that out of the discussion unless your human characters are religious. The existence of souls is an article of faith. (And if there is no such thing as a soul it does not make the life of a living human less significant, for we can still do all the things philosophers claim a soul is required to enable.)
 

PeteMC

@PeteMC666
Kind Benefactor
Super Member
Registered
Joined
Apr 26, 2011
Messages
3,002
Reaction score
363
Location
UK
Website
talonwraith.wordpress.com
Have you read Neuromancer? Gibson pretty much set the bar for writing believable, suitably inscrutable AI, and has some interesting ideas about how to limit their power, and what happens when those limits fail.
 

OldHat63

Banned
Flounced
Joined
Aug 21, 2018
Messages
404
Reaction score
30
Location
Lost in the woods of TN and prefer it that way
Two things a lifetime in AI has taught me: self-awareness is not a byproduct of intelligence, and the singularity is not going to happen. It can make a great story, though.

(Emphasis mine - O.H.)

Never say never, nickj47. A great many things are commonplace today that were believed impossible not so long ago.

Believing that, just because we can't find or do a thing today, no one will ever figure it out has been proven a mistake more times than not.

The fact is, all decisions ever made are based on insufficient information. There's always some unknown that can and will change the equation. And it's a sure bet that sooner or later, someone will find that unknown bit of information, and the entire game will be changed in ways no one could predict.

I should also point out that some questions take a good bit longer than just one lifetime to find answers to.




O.H.
 

Doug Egan

Registered
Joined
Jun 30, 2018
Messages
24
Reaction score
3
Location
MidAtlantic, USA
You've opened a philosophical can of worms. What is intelligence? What is artificial intelligence? Who is to say computers haven't achieved it already, to some measure? What limits AI right now is imperfect software, access to resources, access to data, processing time... you could probably think of others. These same constraints will limit AI in the future. Even as software improves, it will never be perfect.

I came across a science article recently which declared "Now computers can think like humans." My immediate reaction was "Oh really? Does this mean they make decisions based on a primitive tribalistic world model, are easily manipulated by promises of sex, and are prone to gambling addictions?" In other words, human intelligence is constrained by the particular conditions in which our species evolved, and the types of problems our ancestors needed to solve in order to survive. Machine learning is often modelled as an evolutionary process too, and so will be limited by its own evolutionary path. As an author, you can imagine what that evolutionary path might look like, and how it will be fundamentally different from the evolution of human self-awareness.
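To make the evolutionary framing concrete, here's a toy evolutionary learner in Python (the target string and every parameter are invented for the example). Whatever "intelligence" pops out is shaped entirely by the fitness function it evolved against, which is exactly why a machine's path would differ from ours.

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # the "environment"

def fitness(genome):
    # How well a candidate matches the environment's survival criterion.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population; each generation, the fittest half breeds.
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(15)]

print("generation:", gen, "best:", pop[0], "fitness:", fitness(pop[0]))
```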
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,476
Reaction score
23,914
Location
Massachusetts
Website
elizabethbonesteel.com
I came across a science article recently which declared "Now computers can think like humans."

The content of the article might have been fine, but that headline is rubbish on its face. While there are indeed some fascinating areas of research in hardware and software these days, to the best of my knowledge, most of them are focused on specific types of problem-solving, and not on creating a computer that thinks "like a human." You can't recreate something when your understanding of that something is incomplete. (And the first person who tells me we do understand the human brain? I'd like that cure for OCD now, please. :))

As you point out, AI is a term that gets used too often without a crisp definition. I think the standard SFF definition of AI is actually a lot better understood than the way it's used in science reporting these days. In SFF, it's almost always used to mean a manufactured intelligence that demonstrates something similar to human reasoning. In the news, people seem to throw the term AI around any time Google Translate gets a speed boost.