• This forum is specifically for the discussion of factual science and technology. When the topic moves to speculation, then it needs to also move to the parent forum, Science Fiction and Fantasy (SF/F).

    If the topic of a discussion becomes political, even remotely so, it no longer belongs here. Failure to comply with these simple and reasonable guidelines will result in one of the following:
    1. The thread will be moved to the appropriate forum.
    2. The thread will be closed to further posts.
    3. The thread will remain, but posts that deviate from the topic will be relocated or deleted.
    Thank you for understanding.

AI: To Build Truly Intelligent Machines, Teach Them Cause and Effect

Introversion

Judea Pearl, a pioneering figure in artificial intelligence, argues that AI has been stuck in a decades-long rut. His prescription for progress? Teach machines to understand the question why.

Quanta Magazine said:
Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, “The Book of Why: The New Science of Cause and Effect,” he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.

...

Are you suggesting there’s a trend developing away from machine learning?

Not a trend, but a serious soul-searching effort that involves asking: Where are we going? What’s the next step?

That was the last thing I wanted to ask you.

I’m glad you didn’t ask me about free will.

In that case, what do you think about free will?

We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable.

In what way?

You have the sensation of free will; evolution has equipped us with this sensation. Evidently, it serves some computational function.

Will it be obvious when robots have free will?

I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t. So the first sign will be communication; the next will be better soccer.

Now that you’ve brought up free will, I guess I should ask you about the capacity for evil, which we generally think of as being contingent upon an ability to make choices. What is evil?

It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.

So how will we know when AI is capable of committing evil?

When it is obvious for us that there are software components that the robot ignores, consistently ignores. When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.
 

MaeZe

Surely there are algorithms for cause and effect.
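There are, at least in Pearl's framework: structural causal models support an intervention operator, do(), that ordinary conditioning cannot express. Here is a minimal sketch in Python using the standard rain/sprinkler textbook example (the model and probabilities are illustrative assumptions, not from the article):

```python
import random

random.seed(0)

def sample(intervene_sprinkler=None):
    """Draw one sample from a toy causal model: rain -> sprinkler -> wet grass.

    Passing intervene_sprinkler applies Pearl's do() operator: it severs
    the sprinkler's causal dependence on rain and forces its value.
    """
    rain = random.random() < 0.3
    if intervene_sprinkler is None:
        # Normally the sprinkler only runs when it isn't raining.
        sprinkler = (not rain) and random.random() < 0.5
    else:
        sprinkler = intervene_sprinkler  # do(sprinkler = value)
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Observational: P(rain | sprinkler on). Merely *seeing* the sprinkler on
# is evidence about rain, because in this model it never runs in the rain.
obs = [s for s in (sample() for _ in range(100_000)) if s[1]]
p_rain_given_sprinkler = sum(s[0] for s in obs) / len(obs)

# Interventional: P(rain | do(sprinkler on)). *Forcing* the sprinkler on
# tells us nothing about rain, since intervening cuts the incoming arrow.
intv = [sample(intervene_sprinkler=True) for _ in range(100_000)]
p_rain_do_sprinkler = sum(s[0] for s in intv) / len(intv)

print(p_rain_given_sprinkler)  # exactly 0.0 in this model
print(p_rain_do_sprinkler)     # roughly 0.3, the base rate of rain
```

The gap between those two numbers is the point of Pearl's critique: a purely statistical learner only ever estimates the first quantity, while causal reasoning requires the second.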

I'll put my repeating two cents in: until the mechanism of conscious thought is understood, we can't create consciousness artificially.

I have no doubt consciousness is a biological process. It's much more than data storage and retrieval, and more than a good algorithm that can mimic a human on the phone. Your brain is also working subconsciously alongside your conscious thought. We know this from studies of brain-damaged people, but that's another issue.

Bottom line, there is a biological mechanism that results in conscious thought. We are getting closer to understanding it, but we aren't there yet.