• This forum is specifically for the discussion of factual science and technology. When a topic moves into speculation, it also needs to move to the parent forum, Science Fiction and Fantasy (SF/F).

    If the topic of a discussion becomes political, even remotely so, then it immediately no longer belongs here. Failure to comply with these simple and reasonable guidelines will result in one of the following:
    1. The thread will be moved to the appropriate forum.
    2. The thread will be closed to further posts.
    3. The thread will remain, but the posts that deviate from the topic will be relocated or deleted.
    Thank you for understanding.

AI: Can robots (really: software) become self-aware?

Introversion

Pie aren't squared, pie are round!
Kind Benefactor
Super Member
Registered
Joined
Apr 17, 2013
Messages
10,726
Reaction score
15,139
Location
Massachusetts
Consciousness is a famously hard problem, so Hod Lipson is starting from the basics: with self-aware robots that can help us understand how we think.

Quanta Magazine said:
“I want to meet, in my lifetime, an alien species,” said Hod Lipson, a roboticist who runs the Creative Machines Lab at Columbia University. “I want to meet something that is intelligent and not human.” But instead of waiting for such beings to arrive, Lipson wants to build them himself — in the form of self-aware machines.

To that end, Lipson openly confronts a slippery concept — consciousness — that often feels verboten among his colleagues. “We used to refer to consciousness as ‘the C-word’ in robotics and AI circles, because we’re not allowed to touch that topic,” he said. “It’s too fluffy, nobody knows what it means, and we’re serious people so we’re not going to do that. But as far as I’m concerned, it’s almost one of the big unanswered questions, on par with origin of life and origin of the universe. What is sentience, creativity? What are emotions? We want to understand what it means to be human, but we also want to understand what it takes to create these things artificially. It’s time to address these questions head-on and not be shy about it.”

One of the basic building blocks of sentience or self-awareness, according to Lipson, is “self-simulation”: building up an internal representation of one’s body and how it moves in physical space, and then using that model to guide behavior. Lipson investigated artificial self-simulation as early as 2006, with a starfish-shaped robot that used evolutionary algorithms (and a few pre-loaded “hints about physics”) to teach itself how to flop forward on a tabletop. But the rise of modern artificial intelligence technology in 2012 (including convolutional neural networks and deep learning) “brought new wind into this whole research area,” he said.

In early 2019, Lipson’s lab revealed a robot arm that uses deep learning to generate its own internal self-model completely from scratch — in a process that Lipson describes as “not unlike a babbling baby observing its hands.” The robot’s self-model lets it accurately execute two different tasks — picking up and placing small balls into a cup, and writing letters with a marker — without requiring specific training for either one. Furthermore, when the researchers simulated damage to the robot’s body by adding a deformed component, the robot detected the change, updated its self-model accordingly, and was able to resume its tasks.

It’s a far cry from robots that think deep thoughts. But Lipson asserts that the difference is merely one of degree. “When you talk about self-awareness, people think the robot is going to suddenly wake up and say, ‘Hello, why am I here?’” Lipson said. “But self-awareness is not a black-and-white thing. It starts from very trivial things like, ‘Where is my hand going to move?’ It’s the same question, just on a shorter time horizon.”

Quanta spoke with Lipson about how to define self-awareness in robots, why it matters, and where it could lead. The interview has been condensed and edited for clarity.

...
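The self-modeling loop the article describes (babble, fit a body model, plan with it, then re-learn when predictions stop matching reality) is easy to caricature in a few lines of code. Below is a minimal toy sketch, not Lipson's actual deep-learning setup: a planar two-link arm whose true link lengths are hidden from the learner, a nearest-neighbor lookup standing in for the learned network, and an arbitrary error threshold for noticing "damage". All of those details are invented for illustration.

```python
import math
import random

# Ground truth "body": a planar two-link arm. The link lengths are the
# part of reality the self-model has to discover through babbling.
TRUE_L1, TRUE_L2 = 1.0, 0.7

def body(t1, t2, l1=TRUE_L1, l2=TRUE_L2):
    """True forward kinematics: joint angles (radians) -> hand position."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def babble(n, l1=TRUE_L1, l2=TRUE_L2):
    """Motor babbling: try random poses, record (pose, outcome) pairs."""
    data = []
    for _ in range(n):
        t1 = random.uniform(-math.pi, math.pi)
        t2 = random.uniform(-math.pi, math.pi)
        data.append(((t1, t2), body(t1, t2, l1, l2)))
    return data

def predict(model, t1, t2):
    """Self-model: the nearest remembered pose stands in for a learned net."""
    _, pos = min(model, key=lambda d: (d[0][0] - t1) ** 2 + (d[0][1] - t2) ** 2)
    return pos

def reach(model, target):
    """Plan with the self-model: pick the pose whose remembered outcome
    lies closest to the target."""
    angles, _ = min(model, key=lambda d: (d[1][0] - target[0]) ** 2
                                         + (d[1][1] - target[1]) ** 2)
    return angles

model = babble(5000)
target = (0.8, 0.9)
t1, t2 = reach(model, target)
print("reached:", body(t1, t2))  # lands near (0.8, 0.9)

# Simulate damage: one link is suddenly shorter. The self-model's
# predictions now disagree with reality, which is how the change is noticed.
damaged_l1 = 0.6
px, py = predict(model, t1, t2)
ax, ay = body(t1, t2, damaged_l1, TRUE_L2)
error = math.hypot(px - ax, py - ay)
print("self-model error after damage:", round(error, 3))

if error > 0.1:  # surprise exceeds threshold: re-babble, update, retry
    model = babble(5000, damaged_l1, TRUE_L2)
    t1, t2 = reach(model, target)
    print("re-reached:", body(t1, t2, damaged_l1, TRUE_L2))
```

The arm reaches the target from babbled data alone, and after the simulated damage the old self-model's prediction error jumps, which is the cue to re-babble and update. That's the same loop the article describes, just with a lookup table in place of a neural network and a real arm.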
 

Roxxsmom

Beastly Fido
Kind Benefactor
Super Member
Registered
Joined
Oct 24, 2011
Messages
23,116
Reaction score
10,870
Location
Where faults collide
Website
doggedlywriting.blogspot.com
I've often wondered if consciousness, as we and other complex animals experience it, isn't strictly speaking a function of the sophistication of a given thought process. It might instead be a mechanism for coordinating responses to the neurological and endocrine chaos that ensues when external and internal stimuli bombard multiple sensory organs and are processed and prioritized by different brain regions in a less-than-orderly fashion.

Anesthetics appear to suppress consciousness by functionally deactivating communication between different brain regions, even though those regions remain independently active and still receive sensory input. (In the rare cases when something goes terribly wrong under anesthesia, a patient can awaken and feel and remember what is happening, even while their motor functions are paralyzed.)

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2743249/

https://www.nytimes.com/2013/12/15/magazine/what-anesthesia-can-teach-us-about-consciousness.html

Is there a way to simulate this chaotic process in hardware and/or software? And how important is sensitivity to one's internal state, as well as to the environment, for consciousness? I don't just mean the kind of proprioception one would need for a robotic body part, but the actual ability to monitor internal functioning that isn't strictly relevant to the immediate task at hand (like the mild headache I have right now, and my aching back, both of which I'm aware of even though other processes have shoved those sensations into the background). I also don't know enough about computer hardware or software to know how much compartmentalized processes or functions communicate with one another as they run, as opposed to performing their tasks in isolation.
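One crude way to picture the "communication between regions" part is along the lines of global-workspace models of consciousness: many modules run all the time, but only the most salient signal gets broadcast to everyone else, and anesthesia would correspond to cutting the broadcast while leaving the modules running. The sketch below is a toy of exactly that and nothing more; the module names and random "salience" scores are made up for illustration.

```python
import random

# Toy "global workspace": independent modules (brain regions) keep
# firing either way, but integration only happens when broadcasting is on.
MODULES = ["vision", "hearing", "pain", "interoception"]

def step(broadcast_enabled):
    # Each module privately registers a stimulus with a salience score.
    signals = {m: random.random() for m in MODULES}
    if not broadcast_enabled:
        return signals, None  # regions active, but nothing is integrated
    # The most salient signal wins the workspace and is broadcast to all,
    # which is how a background headache can surface when nothing louder
    # is competing for attention.
    winner = max(signals, key=signals.get)
    return signals, winner

for anesthetized in (False, True):
    state = "anesthetized" if anesthetized else "awake"
    signals, winner = step(broadcast_enabled=not anesthetized)
    readings = {m: round(s, 2) for m, s in signals.items()}
    print(state, "| modules firing:", readings)
    print(state, "| globally broadcast percept:", winner)
```

In the "anesthetized" pass the modules report activity exactly as before, but nothing wins the workspace, which is at least shaped like the anesthesia findings in the links above.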
 

JimmyB27

Hoopy frood
Super Member
Registered
Joined
Dec 29, 2005
Messages
5,623
Reaction score
925
Age
42
Location
In the uncharted backwaters of the unfashionable e
Website
destinydeceived.wordpress.com
https://www.youtube.com/watch?v=WjCytqku18M

Personally, I think if we ever do create a truly sentient AI, it will be by mistake - an emergent property of ever increasing complexity, much like our own sentience was. I'm also of the opinion that if we ever create something that appears sentient (much like Commander Data), we are morally bound to treat it as if it is: there's no surefire way of proving it, and to (even accidentally) treat a sentient being as property would be a terrible crime.
 

Albedo

Alex
Super Member
Registered
Joined
Dec 17, 2007
Messages
7,376
Reaction score
2,955
Location
A dimension of pure BEES
I think it'll probably be possible one day to simulate something like consciousness in software form (assuming technological civilisation survives another century -- a big assumption), but I don't know how you could prove that the software was actually self-aware. That's one for philosophers.

But in humans and other self-aware animals, there's no processor executing consciousness software. The hardware IS what's conscious. I feel a lot of SF in particular tends to ignore what I think's a hugely important distinction. Why would we assume it's the software driving the robot that's the conscious being, and not the robot itself being conscious? We've got plenty of examples of the latter, and none of the former.
 

JimmyB27

Hoopy frood
Super Member
Registered
Joined
Dec 29, 2005
Messages
5,623
Reaction score
925
Age
42
Location
In the uncharted backwaters of the unfashionable e
Website
destinydeceived.wordpress.com
But in humans and other self-aware animals, there's no processor executing consciousness software. The hardware IS what's conscious. I feel a lot of SF in particular tends to ignore what I think's a hugely important distinction. Why would we assume it's the software driving the robot that's the conscious being, and not the robot itself being conscious? We've got plenty of examples of the latter, and none of the former.

What is software, at the end of the day, apart from a bunch of zeroes and ones encoded in the hardware?
 

Kjbartolotta

Potentially has/is dog
Super Member
Registered
Joined
May 15, 2014
Messages
4,197
Reaction score
1,049
Location
Los Angeles
Is consciousness even verifiable? I think AI is a subject where philosophy is immensely valuable, since how can we build the machine until we have a better idea what it's supposed to be doing?
 

nickj47

Super Member
Registered
Joined
Jul 10, 2018
Messages
261
Reaction score
47
Location
Novato, CA
No, consciousness is not verifiable. You can't be sure that anyone else on the planet is sentient. You can believe it (I do) but you can never prove it. Marketing claims aside, AI is a lifetime away from anything resembling real intelligence. It'll be a long time before we ever have to worry about what a device might be 'thinking'.
 

Albedo

Alex
Super Member
Registered
Joined
Dec 17, 2007
Messages
7,376
Reaction score
2,955
Location
A dimension of pure BEES
What is software, at the end of the day, apart from a bunch of zeroes and ones encoded in the hardware?
I s'pose. But with our current model of computers, where software comes in distinct blobs that tell the hard bits of the computer what to do (this is the whole of my informed understanding of computer science), there's a command-and-control layer of abstraction that just isn't there in organisms. DNA is the only remotely similar concept in biology.

Is consciousness even verifiable? I think AI is a subject where philosophy is immensely valuable, since how can we build the machine until we have a better idea what it's supposed to be doing?
I think Peter Watts's Blindsight had the best take on consciousness. It's an evolutionary impediment, most intelligent life in the universe does just fine without it, and the sooner we're rid of it the better.
 

Kjbartolotta

Potentially has/is dog
Super Member
Registered
Joined
May 15, 2014
Messages
4,197
Reaction score
1,049
Location
Los Angeles
I think Peter Watts's Blindsight had the best take on consciousness. It's an evolutionary impediment, most intelligent life in the universe does just fine without it, and the sooner we're rid of it the better.

Very possibly, but Watts does admit it was a thought experiment based on outdated information. I never read the sequel because I got too freaked out.
 

Kjbartolotta

Potentially has/is dog
Super Member
Registered
Joined
May 15, 2014
Messages
4,197
Reaction score
1,049
Location
Los Angeles
No, consciousness is not verifiable. You can't be sure that anyone else on the planet is sentient. You can believe it (I do) but you can never prove it.

Definitely agree, though I do think it's worthwhile to define a better model of the thing whose existence you can't be sure of.
 

JimmyB27

Hoopy frood
Super Member
Registered
Joined
Dec 29, 2005
Messages
5,623
Reaction score
925
Age
42
Location
In the uncharted backwaters of the unfashionable e
Website
destinydeceived.wordpress.com
I s'pose. But with our current model of computers, where software comes in distinct blobs that tell the hard bits of the computer what to do (this is the whole of my informed understanding of computer science), there's a command-and-control layer of abstraction that just isn't there in organisms.

Does the fact that the model is different necessarily preclude it from becoming sentient? And even if it does, the key word in your post is *current*. It's possible we could change the model as our technology advances.