Robotics & Artificial Intelligence: Team Zuckerberg or Team Musk?


Sarahani

Banned
Joined
Feb 24, 2018
Messages
81
Reaction score
13
Location
Connecticut
I am mesmerized and fascinated by this forthcoming industrial revolution. I am Team Zuckerberg on this matter, against Team Musk. I don't believe AI will be any more of a threat to humanity than computers have been. We all know it will change our lives and blah blah blah, but to claim, like Elon Musk, Stephen Hawking and so many others, that it could lead humanity to its end is really far-fetched. To avoid any kind of speculation I try to stay up to date on Robotics and AI and to differentiate between fact and fiction, which is a hard task. I am currently reading the short but quite powerful book Living with Robots by Paul Dumouchel & Luisa Damiano.

What about you, Team Zuckerberg or Team Musk?
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,683
Reaction score
24,617
Location
Massachusetts
Website
elizabethbonesteel.com
Oh, Team Musk, I think (on this point, at least). Not because AI will develop sentience and kill us all, but because we've become so sloppy with the technology we're continually inventing that it'll eventually do something fatal to our infrastructure.

Computers do exactly as they're told, and if they're told the wrong thing, they'll propagate that error with determined efficiency.
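A toy Python sketch of what I mean (everything here, the units, the numbers, the function names, is invented purely for illustration): one wrong assumption, carried forward without complaint.

```python
# Hypothetical example: a sensor reports impulse in pound-force seconds,
# but everything downstream assumes newton-seconds. Nothing "decides" to fail;
# the wrong assumption is simply carried forward, quickly and consistently.

def sensor_reading_lbf_s() -> float:
    return 142.0  # imperial units -- the one wrong thing

def plan_burn(impulse_newton_s: float) -> float:
    return impulse_newton_s * 0.25  # the planner trusts its input completely

readings = [sensor_reading_lbf_s() for _ in range(3)]
burns = [plan_burn(r) for r in readings]  # no unit check anywhere
print(burns)  # confidently wrong, every single time
```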
 

Albedo

Alex
Super Member
Registered
Joined
Dec 17, 2007
Messages
7,376
Reaction score
2,955
Location
A dimension of pure BEES
Team "Musk is right, but for the wrong reasons". We're all going to die, but it's from accelerating climate change, nukes, and the end of antibiotics, not because a malevolent robot god is going to rise up and quite reasonably exterminate us all. The fact that we'll all be jobless and impoverished due to AI is just gunna make our last years that little bit shittier.
 

Kjbartolotta

Potentially has/is dog
Super Member
Registered
Joined
May 15, 2014
Messages
4,197
Reaction score
1,049
Location
Los Angeles
I have a hard time thinking AI will kill us, because what could they possibly want from us?
 

Helix

socially distancing
Kind Benefactor
Super Member
Registered
Joined
Mar 31, 2011
Messages
11,747
Reaction score
12,182
Location
Atherton Tablelands
Website
snailseyeview.medium.com
Team "Musk is right, but for the wrong reasons". We're all going to die, but it's from accelerating climate change, nukes, and the end of antibiotics, not because a malevolent robot god is going to rise up and quite reasonably exterminate us all. The fact that we'll all be jobless and impoverished due to AI is just gunna make our last years that little bit shittier.

I'm with #TeamAlbedo
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,683
Reaction score
24,617
Location
Massachusetts
Website
elizabethbonesteel.com
I have a hard time thinking AI will kill us, because what could they possibly want from us?

I'm kind of stuck on us being able to create "real" AI in the first place, but that's a more complicated conversation and I'm not going to challenge someone like Hawking on that front. :)
 

Boethius

Registered
Joined
Oct 1, 2017
Messages
20
Reaction score
2
Location
Pacific Northwest
Website
vinemaple.net
I'm a software engineer. I've built a few self-learning systems and read a lot of AI code. I've never built a robot, but I've studied how to build them. I'm not the least bit afraid that AI will turn on us. Why? Because I've tried to build monsters and failed every time. The problem with AI is that it has no inherent self-interest. A computer system has no fear of being turned off, no desire for more toys, tropical vacations or Napoleon brandy. I can program those desires into a system, but there is nothing in a computer that can sustain them. Computers just don't care. They are inescapably passive. Without a wretched, scheming human to drive them on, they do nothing. Turn them on, turn them off, grind them into dust; they are indifferent.
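If it helps, here's a toy sketch (every name and number invented) of what I mean by "nothing in a computer can sustain them": the "desire" is just a stored value, and the only thing that ever acts on it is code a human wrote.

```python
# Toy sketch, all names invented: a "fear of shutdown" is just data. The machine
# does nothing about it unless a human has written code that acts on it -- and
# even then it's the human's rule being followed, not the machine's wish.

class Agent:
    def __init__(self) -> None:
        self.fear_of_shutdown = 0.9   # a float, nothing more
        self.desire_for_brandy = 1.0  # also just a float

    def step(self, about_to_be_shut_down: bool) -> str:
        # The only "self-preservation" here is whatever the programmer chose to write.
        if about_to_be_shut_down and self.fear_of_shutdown > 0.5:
            return "plead with the operator"
        return "do nothing"

agent = Agent()
print(agent.step(about_to_be_shut_down=True))   # acts "afraid" only because we said so
print(agent.step(about_to_be_shut_down=False))  # otherwise: indifferent
```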

I do fear my fellow humans, as all humans have since we discovered that what is yours is not mine. And AI can be a formidable weapon, perhaps the ultimate weapon, but it's the person operating the weapon who is to be feared. Any law that supports arming any ego with a hankering for mayhem is stupid.

I also fear what AI and computing will do to society, and what they have already done. In the 50 or so years that I have been hacking away, we've gotten so much more efficient and productive. Humans always want more, so our appetite is always growing, but I've noticed that a person can live much better on fewer resources today than they could even ten years ago. And I expect it will be much, much easier ten years in the future. Amazon, Google, Netflix, Uber, the list goes on, have been sucking the cost out of so many things. Jobs are disappearing, and they will continue to disappear, and it seems that the gap between the very wealthy and the less wealthy is widening. There will have to be some changes coming. I don't know what they will be, but I have great faith that the forces of civilization will, with an occasional setback, continue to prevail. They always have, and past performance is still the best predictor of future performance.
 

Sarahani

Banned
Joined
Feb 24, 2018
Messages
81
Reaction score
13
Location
Connecticut
I'm a software engineer. I've built a few self-learning systems and read a lot of AI code. I've never built a robot, but I've studied how to build them. I'm not the least bit afraid that AI will turn on us. Why? Because I've tried to build monsters and failed every time. The problem with AI is that it has no inherent self-interest. A computer system has no fear of being turned off, no desire for more toys, tropical vacations or Napoleon brandy. I can program those desires into a system, but there is nothing in a computer that can sustain them. Computers just don't care. They are inescapably passive. Without a wretched, scheming human to drive them on, they do nothing. Turn them on, turn them off, grind them into dust; they are indifferent.

I do fear my fellow humans, as all humans have since we discovered that what is yours is not mine. And AI can be a formidable weapon, perhaps the ultimate weapon, but it's the person operating the weapon who is to be feared. Any law that supports arming any ego with a hankering for mayhem is stupid.

:hooray:

That's exactly what I said during a debate I had with some Team Musk folks. You can agree with Elon Musk only if you believe that robots will become sentient or self-conscious in the human sense of it. And that is out of scope, since it is not something anybody can certify. For the present moment, Zuckerberg is right, in my opinion. He considers the data and knowledge we currently have, and our biggest threat now is just human beings: the same threat as yesterday and as tomorrow. Human beings display desire, greed and excess, which are part of our nature.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,683
Reaction score
24,617
Location
Massachusetts
Website
elizabethbonesteel.com
:hooray:

That's exactly what I said during a debate I had with some Team Musk folks. You can't agree with Elon Musk if you don't agree that robots will become sentient or self-conscious in the human sense of it.

Well, you can, actually. You just don't have to agree with his underlying assumptions.

That humans will do ourselves in at some point is, I think, a given. It's also a given that the method will involve technology. And I think the Musk side of the argument suggests that runaway technology of some kind is going to be a big catalyst.

I spent 27 years writing software. I agree with Boethius; I'm not worried my toaster is going to become sentient any time soon. I am, however, continually horrified at how much of the internet runs on a programming language that was written in ten days and isn't thread-safe.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,683
Reaction score
24,617
Location
Massachusetts
Website
elizabethbonesteel.com
Go ahead, I am listening.

I actually explained my position in post #2 on this thread, but okay.

I don't believe AI is going to become sentient and self-determining (at least, I believe we're centuries away from this, at best). I do believe that technology is going to become increasingly entrenched in our daily lives, and because errors are inevitable, it's extremely likely something is eventually going to happen that's going to dismantle civilization in a catastrophic fashion.

And I think Zuckerberg's assertion that technology is entirely neutral and humanity lives in a sterile bubble that isn't affected by what's around us is...weird, and probably not what he'd say if you talked to him about it today, given Facebook's recent woes.
 

Kjbartolotta

Potentially has/is dog
Super Member
Registered
Joined
May 15, 2014
Messages
4,197
Reaction score
1,049
Location
Los Angeles
Team "Musk is right, but for the wrong reasons". We're all going to die, but it's from accelerating climate change, nukes, and the end of antibiotics, not because a malevolent robot god is going to rise up and quite reasonably exterminate us all. The fact that we'll all be jobless and impoverished due to AI is just gunna make our last years that little bit shittier.

I'm more afraid of plutocrats than AI. Frankly, a sociopathic computer can do a better job running things than...[insert boring political rant here]

I'm kind of stuck on us being able to create "real" AI in the first place, but that's a more complicated conversation and I'm not going to challenge someone like Hawking on that front. :)

I'm entirely convinced we'll get AI eventually, perhaps within my lifetime, but I just can't conceptualize what an AI is yet. So it's hard for me to say what it will do.

As an aside, I saw a Charlie Rose episode about a decade or so ago (I know, Charlie Rose, but it was a while ago). He had a well-regarded Silicon Valley futurist on who was then barnstorming for a somewhat obscure social media app known as Facebook. She got a lot right in her prognostications, but when Charlie asked her about AI, she got *really* evasive. Her point was that we don't know what AI is, and, what's more, we don't know if it already exists. She made the rather cryptic point that perhaps there's an AI in the Internet as we speak. I dunno, that's weird and probably untrue. But a hell of a thing to think about.

And I think Zuckerberg's assertion that technology is entirely neutral and humanity lives in a sterile bubble that isn't affected by what's around us is...weird, and probably not what he'd say if you talked to him about it today, given Facebook's recent woes.

Yeah, I can't even with these techbros. I read all the same SF that they do, and can't understand how they reach their conclusions.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,683
Reaction score
24,617
Location
Massachusetts
Website
elizabethbonesteel.com
She made the rather cryptic point that perhaps there's an AI in the Internet as we speak. I dunno, that's weird and probably untrue. But a hell of a thing to think about.

It's interesting, but you'd have to firm up the definition before I'd opine on the possibility. I think of AI as something that makes conscious decisions, vs. a program reacting in a predictable algorithmic fashion. I don't think there's sentience out there on the net.

Of course, there are people who believe our brains, on some level, react in a predictable algorithmic fashion, and that free will isn't real, and if you start pinning it down it gets very cluttered and SFF. :)

Linda Nagata wrote a trilogy that deals with a network of computer programs that act and react interdependently with a specific goal in mind, but they're never truly sentient--they're just complex enough in their interactions that to humans their behaviors are difficult to predict. I find that more credible than the idea of Lt. Commander Data any time soon. (Also, Lt. Commander Data would totally destroy humanity, but by accident.)
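For anyone curious, even a tiny pair of deterministic programs that only react to each other's output can already be a headache to predict. A made-up Python sketch, with entirely arbitrary parameters:

```python
# Made-up sketch: two simple, fully deterministic programs that only react to
# each other's last output. Every rule is trivial, yet the joint trajectory is
# already hard to eyeball. Parameters are arbitrary, chosen just to make the point.

def program_a(own: float, other: float) -> float:
    return 3.9 * own * (1.0 - own) * (0.5 + 0.5 * other)

def program_b(own: float, other: float) -> float:
    return 3.7 * own * (1.0 - own) * (0.5 + 0.5 * other)

a, b = 0.20, 0.30
for step in range(10):
    a, b = program_a(a, b), program_b(b, a)
    print(f"step {step}: a={a:.4f}  b={b:.4f}")
```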
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,683
Reaction score
24,617
Location
Massachusetts
Website
elizabethbonesteel.com
Yeah, I can't even with these techbros. I read all the same SF that they do, and can't understand how they reach their conclusions.

So many of them are coming from the desperate need to believe that they really are objective, and they really have attained what they've attained purely on merit. It can be hard to question the foundations of your own success.
 

Sarahani

Banned
Joined
Feb 24, 2018
Messages
81
Reaction score
13
Location
Connecticut
I actually explained my position in post #2 on this thread, but okay.

I don't believe AI is going to become sentient and self-determining (at least, I believe we're centuries away from this, at best). I do believe that technology is going to become increasingly entrenched in our daily lives, and because errors are inevitable, it's extremely likely something is eventually going to happen that's going to dismantle civilization in a catastrophic fashion.

And I think Zuckerberg's assertion that technology is entirely neutral and humanity lives in a sterile bubble that isn't affected by what's around us is...weird, and probably not what he'd say if you talked to him about it today, given Facebook's recent woes.


If I follow your logic, and tell me if I'm wrong, it isn't Robotics or AI that will dismantle civilization but human error, aggravated by the exponential power of Robotics and AI. Sorry, but the only way anyone can put the blame on technology for our own destruction is if technology became sentient; otherwise it is just a tool.
Yes, technology is completely neutral unless the programmer or the inventor decides otherwise. Zuckerberg didn't assert that "humanity lives in a sterile bubble that isn't affected by what's around us". What he said was that the cataclysmic and apocalyptic vision of robotics should stop, because we are ultimately the only ones responsible for our own destiny.
 

lizmonster

Possibly A Mermaid Queen
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
14,683
Reaction score
24,617
Location
Massachusetts
Website
elizabethbonesteel.com
What he said was that the cataclysmic and apocalyptic vision of robotics should stop, because we are ultimately the only ones responsible for our own destiny.

I think what's wrong with this perspective is the idea that caution - "apocalyptic vision," if you will - is a bad thing.

Yes, technology could easily destroy us due to human error. We've had the potential to destroy humanity for at least 73 years now, and it's only civilization and human restraint that's kept us from doing it. It's vitally important to understand what technology is capable of, so we can make the right decisions about what we do with it.

In addition - there are always going to be errors in technology. To assert that we should brush off the possible significance of these errors because ultimately it's humans who are responsible for them is naive in the extreme.

There's an old programming joke: Every program contains at least one bug. Every program can be simplified. Therefore, every program can be reduced to a single line of code that doesn't work. Saying "Hey, it's our fault that programs are buggy!" doesn't change the fact that programs are buggy, and we need to be thinking about the consequences of that.

You ask me, I'd say both Musk and Zuckerberg are wrong, and neither is the sort of "personality" I'd want to wager my future on. That said, at least Musk is acknowledging that it might not be a bad idea for us to have a contingency plan, habitat-wise.
 

Sarahani

Banned
Joined
Feb 24, 2018
Messages
81
Reaction score
13
Location
Connecticut
I think what's wrong with this perspective is the idea that caution - "apocalyptic vision," if you will - is a bad thing.

Yes, technology could easily destroy us due to human error. We've had the potential to destroy humanity for at least 73 years now, and it's only civilization and human restraint that's kept us from doing it. It's vitally important to understand what technology is capable of, so we can make the right decisions about what we do with it.

In addition - there are always going to be errors in technology. To assert that we should brush off the possible significance of these errors because ultimately it's humans who are responsible for them is naive in the extreme.

There's an old programming joke: Every program contains at least one bug. Every program can be simplified. Therefore, every program can be reduced to a single line of code that doesn't work. Saying "Hey, it's our fault that programs are buggy!" doesn't change the fact that programs are buggy, and we need to be thinking about the consequences of that.

You ask me, I'd say both Musk and Zuckerberg are wrong, and neither is the sort of "personality" I'd want to wager my future on. That said, at least Musk is acknowledging that it might not be a bad idea for us to have a contingency plan, habitat-wise.

Thanks. I like the answer. :Hug2:
 

Sarahani

Banned
Joined
Feb 24, 2018
Messages
81
Reaction score
13
Location
Connecticut
It's interesting, but you'd have to firm up the definition before I'd opine on the possibility. I think of AI as something that makes conscious decisions, vs. a program reacting in a predictable algorithmic fashion.

AI is actually both of them. It is divided into subcategories: weak AI, the "program reacting in a predictable algorithmic fashion," and strong AI, the "conscious decisions." I personally don't think it's possible to create a sentient robot, since the embodied mind theory asserts that consciousness is not located in our brain but in our entire body. Which means that the computer analogy for the human brain is erroneous. It is not sufficient to increase the power of a computer to create sentience.
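To make the "weak" side of that concrete, here is a minimal, made-up Python example of a program reacting in a predictable algorithmic fashion. The keywords and routing names are invented; there is no awareness in it, just rules someone wrote.

```python
# Minimal, made-up example of "weak" AI: predictable, algorithmic reaction.
# The rules and keywords are invented for illustration; nothing here is aware
# of anything, it is just lookup plus a fallback.

RULES = [
    (lambda msg: "refund" in msg, "route_to_billing"),
    (lambda msg: "crash" in msg, "route_to_support"),
]

def weak_ai_triage(message: str) -> str:
    text = message.lower()
    for condition, action in RULES:
        if condition(text):
            return action
    return "route_to_human"  # the fallback is still a rule someone wrote

print(weak_ai_triage("My app keeps CRASHING"))  # -> route_to_support
print(weak_ai_triage("I demand a refund"))      # -> route_to_billing
print(weak_ai_triage("hello?"))                 # -> route_to_human
```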
 

cornflake

practical experience, FTW
Super Member
Registered
Joined
Jul 11, 2012
Messages
16,171
Reaction score
3,734
I can't take Zuckerberg's thoughts on anything of import seriously.

Musk has at least done things.

Also, iirc, Musk didn't suggest a Cyberdyne-type thing, where the AI would rise up; he said, like, there'd be increasing AI, all held and controlled by a tiny group of companies, who'd have too much power over everyone and could tip over into doing who knows what, especially given the info they've got. The recent U.S. election suggests he has a point.
 

Kjbartolotta

Potentially has/is dog
Super Member
Registered
Joined
May 15, 2014
Messages
4,197
Reaction score
1,049
Location
Los Angeles
It's interesting, but you'd have to firm up the definition before I'd opine on the possibility. I think of AI as something that makes conscious decisions, vs. a program reacting in a predictable algorithmic fashion.

Ultimately, that's where the idea breaks down. It's a question for someone smarter than I whether all the complex interactions on the internet can lead to some form of emergent behavior, but conscious decisions are a stretch. I remember enjoying Blindsight for being an exploration of how something can be intelligent but lack consciousness. Wasn't sure if I agreed, but enjoyed it nonetheless.

Also, iirc, Musk didn't suggest a Cyberdyne-type thing, where the AI would rise up; he said, like, there'd be increasing AI, all held and controlled by a tiny group of companies, who'd have too much power over everyone and could tip over into doing who knows what, especially given the info they've got. The recent U.S. election suggests he has a point.

Yeah, ok, I have nothing nice to say about the guy, but I guess it's Team Musk then. Dammit. I'd already been under the impression (based, in part, on a weird conversation with a SpaceX guy I had once) that Musk is more into uploading than AI.
 

Introversion

Pie aren't squared, pie are round!
Kind Benefactor
Super Member
Registered
Joined
Apr 17, 2013
Messages
10,740
Reaction score
15,155
Location
Massachusetts
I don't believe AI is going to become sentient and self-determining (at least, I believe we're centuries away from this, at best).

I believe that if true artificial sentience is ever invented, it'll either be 1) an accident, or 2) a lab curiosity to base a PhD thesis on. There's zero economic gain to be made from making AIs that are possessed of a self-preservation instinct, xenophobia, and analogues of human emotions that influence their decisions. Hence, they won't be made on any scale that I worry about.

OTOH, I do think our software is increasingly going to be able to discern our emotions and manipulate us, because someone will build it that way to sell more stuff to us.

I also think there's plenty of room for sophisticated analytics software that makes wrong predictions at speeds too fast for humans to correct. If that s/w is plugged into critical infrastructure (Wall Street already has this problem), it could be disastrous. But that s/w won't have created chaos out of a motivation to exterminate us.

And, it's possible that weapons will be made that are "intelligent" enough to seek & destroy enemy human combatants. Couple that with buggy software, and I suppose there's Musk's nightmare scenario. OTOH, robots will have batteries, and won't be self-repairing, so there's not much chance of robotic Amok Time going on for long.
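On the "wrong predictions at speeds too fast for humans to correct" point, a toy Python sketch (all numbers and names invented) of how an automated loop can act on a bad model thousands of times before any human review fires:

```python
# Toy sketch, all numbers invented: an automated loop acts every simulated
# millisecond on a deliberately flawed prediction, while the human review only
# happens after a full minute. No malice involved, just speed plus a wrong model.

def model_prediction(price: float) -> str:
    return "buy"  # flawed "analytics": always expects the price to rebound

def run_market(ms_ticks: int = 60_000) -> tuple[float, int]:
    price, trades = 100.0, 0
    for _ in range(ms_ticks):              # one automated decision per millisecond
        if model_prediction(price) == "buy":
            trades += 1
            price *= 0.999                 # each trade nudges the market further down
    # only now does the once-a-minute human review get a look, far too late
    return price, trades

final_price, trades = run_market()
print(f"trades made before any human review: {trades}, price left: {final_price:.6f}")
```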
 

Sarahani

Banned
Joined
Feb 24, 2018
Messages
81
Reaction score
13
Location
Connecticut
I can't take Zuckerberg's thoughts on anything of import seriously.

Musk has at least done things.

Also, iirc, Musk didn't suggest a Cyberdyne-type thing, where the AI would rise up; he said, like, there'd be increasing AI, all held and controlled by a tiny group of companies, who'd have too much power over everyone and could tip over into doing who knows what, especially given the info they've got. The recent U.S. election suggests he has a point.

1) Ad hominem attack towards Zuckerberg... That's not nice.
2) Whatever Musk has done is irrelevant. His chances of being right or wrong are equal to everybody else's. The biggest discoveries in a field tend to come from unknown agents. It's very ironic that Musk is lately contributing more to space exploration than NASA, and more to renewable energy, like solar power and electric cars, than the big corporations. The dude was unknown to the general public 5 years ago.
3) It is naïve to think that AI will not be controlled by a small group of companies, which, by the way, the billionaire Musk is part of, just like energy or computers are. The oligarchy has always held power and done some weird, crazy stuff. The recent U.S. election is just one among others (at least since G.W. Bush, people have become aware of some weird shit!).
 

cornflake

practical experience, FTW
Super Member
Registered
Joined
Jul 11, 2012
Messages
16,171
Reaction score
3,734
1) Ad hominem attack towards Zuckerberg... That's not nice.

Am I meant to be nice to Mark Zuckerberg? I must have missed that memo, along with the reasoning behind it. Also, not really an ad hominem attack; I said I don't take his thoughts seriously. That's just a fact.

2) Whatever Musk has done is irrelevant. How so?


His chances of being right or wrong are equal to everybody else's. Seriously? Kevin, who didn't go to college, doesn't really follow the news, works at the supermarket, is as likely to be right about predictions of technological advances as Elon Musk?

The biggest discoveries in a field tend to come from unknown agents. What? Do you have backing for that claim, please?

It's very ironic that Musk is lately contributing more to space exploration than NASA, and more to renewable energy, like solar power and electric cars, than the big corporations. How is that ironic?

The dude was unknown to the general public 5 years ago. Where in the world are you getting that idea? Tesla and SpaceX have been around well more than a decade. He's been famous for a long time.


3) It is naïve to think that AI will not be controlled by a small group of companies, which, by the way, the billionaire Musk is part of, What company that controls information is he a part of?

just like energy or computers are. The oligarchy has always held power and done some weird, crazy stuff. The recent U.S. election is just one among others (at least since G.W. Bush, people have become aware of some weird shit!).

There has been diversification, consolidation, back again. No, 'the oligarchy' doesn't always hold power, nor is it always the same. The recent election is just one what among others?
 

Zan75

Super Member
Registered
Joined
Feb 27, 2018
Messages
646
Reaction score
16
Either team you pick, eventually humanity will go the way of the dinosaur and some other animal will become the dominant species, until the sun dies and takes everything with it. In the meantime, I'll go with Team Musk. I like the push into space and electric vehicles. The new solar roof tiles and Powerwalls look pretty impressive.
 

Sarahani

Banned
Joined
Feb 24, 2018
Messages
81
Reaction score
13
Location
Connecticut
I was somewhat naïve to think that people would understand me and that no explanation would be required. Here we go (sigh).

1) No, you're not meant to be nice to Mark Zuckerberg. He's a billionaire who doesn't care about you, me, or anybody in this thread, which I started yesterday about Robotics and AI. I was just being humorous.
But the reasoning behind it is correct. Ad hominem means that you attack someone based on who the person is or what he does instead of attacking his argument, what he said.
And you wrote: "I can't take Zuckerberg's thoughts on anything of import seriously. Musk has at least done things."
You thereby imply that Zuckerberg has done nothing, and that's one of the reasons why you can't take his thoughts seriously. You're not contradicting his words but attacking what he's done... or rather what you consider he hasn't done. Yes, I confirm: ad hominem attack.

2) Because Robotics and AI are a forthcoming industrial revolution; they're not here yet. Which means that everything people are saying about sentient robots, the dangers and so on, is technically just speculation. It's like Michio Kaku's books: deeply entertaining, plausible, but highly speculative. So Kevin's speculation and Elon Musk's speculation are statistically identical, since neither of them is an expert... yet. FYI, going to college has long been an outdated metric for measuring intelligence, success and the ability to be right on a topic. You'd be surprised to know how many high school/college dropouts, not following any type of news and working in supermarkets, read books and develop some type of expertise in random areas... I met a homeless guy who literally speaks 7 languages fluently and a plumber who's fascinated by quantum mathematics. I know, I have weird acquaintances.

Unknown agents with great discoveries: Henri Becquerel discovered radioactivity.
Alexander Fleming serendipitously discovered penicillin.
Penzias & Wilson were cleaning pigeon droppings off their radio antenna before discovering the cosmic microwave background.
Alexander Friedmann was a little-known Russian mathematician when he decided to double-check Einstein's general relativity (such audacity) and discovered that the universe was expanding.
Well, Einstein himself at 25/26 was a patent clerk in Switzerland when he published his theories in 1905.
Srinivasa Ramanujan was a poor Indian dude with no formal training in mathematics (no college, no news, and I presume a shitty job) and is considered by some to be the greatest mathematician since Newton.
Charles Darwin theorized natural selection as a largely unknown naturalist/biologist. He published his book some 20 years after first working it out. He was actually mocked and ridiculed.
Stephen Hawking was still a student when, building on Roger Penrose's work, he developed his Big Bang singularity theorems.
Marie Curie was a woman... enough said.
Do you need more names?

There is a difference between being rich and being globally known. Yes, Musk has been a billionaire for a while, but he was not known to the general public 10 years ago. Tesla and SpaceX have been around more than a decade... I confirm. But first, Musk didn't create the Tesla brand; he became a shareholder and eventually the CEO around 10 years ago. And still, in 2008 with the Tesla Roadster, nobody cared about the brand, since electric cars weren't a big deal yet. It was in 2012, with the Tesla Model S, which would become the company's best-selling car, that we cared to know about Musk and his story. So it's been 6 years, not 5... my bad (humor). Most people (besides those interested in aerospace technology) didn't know about SpaceX before knowing about Tesla. I can almost claim that people still know Musk more because of Tesla than because of SpaceX. So I confirm that Elon Musk has been globally known for 5-6 years.

3) I never said he's part of a company that controls information. You should reread my claim.

The oligarchy doesn't always hold power? Really? You just negated at least 4,000 years of human history, but that's another topic.
 