
View Full Version : The A.I. Poll



Dario D.
10-14-2009, 10:37 AM
This is a redo of a previous 2-question poll I did on here. ...for some reason, nobody ever seems interested in taking Polldaddy polls on forums - not even the non-forum-member people, who were the biggest reason to use Polldaddy in the first place (so that they could vote too. More votes! ...or not).

There's always a much bigger turnout like this.

Dario D.
10-14-2009, 10:43 AM
Dang, I hate it when I need to edit a poll. (oh well, just aesthetic details)

RobinGBrown
10-14-2009, 04:18 PM
I find the questions quite strange; it looks like they're leading.

i.e., you have something in mind and you want to confirm it with these questions, as opposed to asking questions in order to build a theory.

DeleyanLee
10-14-2009, 04:23 PM
My problem with your poll, honestly, is that I don't relate to any of the answers to any of the questions, so there's no option for me to select. The answers hit the various extremes and my views are more in the middle, so there's no way I can honestly answer the poll.

Dario, you are making me curious what you're trying to do with this poll since you're repeatedly trying to get answers to it. Do you mind sharing the reasoning behind it?

Dario D.
10-14-2009, 04:54 PM
I find the questions quite strange, it looks like they're leading.
Not sure what you see in the questions there. Two of them are, "Which statement do you most identify with?" :e2shrug: And the first question is basically just, "Which of these is your priority?"


My problem with your poll, honestly, is that I don't relate to any of the answers to any of the questions, so there's no option for me to select. The answers hit the various extremes and my views are more in the middle, so there's no way I can honestly answer the poll.
Hmm, I don't understand. Question 1 is asking which of the 2 choices you'd prefer as a top priority (also, these are the only 2 choices relevant to the novel about A.I. I'm writing)... Question 2 is asking if you're either a "try to connect with people" type, or "not exactly" (the latter option means you're either in the middle, or "negative" on the issue. Again, these are the only choices relevant to me; I don't need to know if one's answer is "neutral" or "negative"... I just need to know if they are or aren't "positive")... and the Question 3 choices visibly cover the whole spectrum, without anything implied in the wording of the choices.

Not sure where the confusion lies. :e2shrug:

DeleyanLee
10-14-2009, 05:10 PM
Not sure what you see in the questions there. Two of them are, "Which statement do you most identify with?" :e2shrug: And the first question is basically just, "Which of these is your priority?"

Hmm, I don't understand. Question 1 is asking which of the 2 choices you'd prefer as a top priority (also, these are the only 2 choices relevant to the novel about A.I. I'm writing)... Question 2 is asking if you're either a "try to connect with people" type, or "not exactly" (the latter option means you're either in the middle ground, or "negative" on the issue.

Perhaps this will help:


question 1) If you had full control over an all-powerful A.I., what would be your top priority?
Fixing the world, and bringing ultimate happiness to mankind.
Making myself god of all (then getting to the rest soon after).

Neither of these is a priority for me, so I can't answer the question. I don't believe that "an all-powerful A.I." would be able to "fix the world and bring happiness to mankind," and I find the idea of becoming a god abhorrent, so I honestly can't answer the question.

(new question) Which statement do you most identify with?
I generally desire to connect with people in a very positive way (when/if possible, at least).
^I wouldn't go that far...

I identify with neither of these because "people" is too general a term. I prefer to deal with individuals, because that's who is important in my life. "I wouldn't go that far" is one of those catch-all phrases and, thus, doesn't mean anything to me, so I can't identify with it.

(new question) Which of THESE statements do you most identify with?
I live for me. The cookie is mine.
I'm in the middle. I split the cookie in half with others.
I almost always give my cookie to others.

Depends on who the person is I would share the cookie with.


Again, these are the only choices relevant to me; I don't need to know if one's answer is "neutral" or "negative"... I just need to know if they are or aren't the positive answer)... and the Question 3 choices visibly cover the whole spectrum.

Might I suggest that the reason you're not getting responses is that what you've decided you don't need to know is exactly what the majority of people would actually respond with?


Not sure where the confusion lies. :e2shrug:

The confusion lies in that not everyone sees the questions the same way you do and there's little to no "give" in the answers provided.

DrZoidberg
10-14-2009, 06:13 PM
Why an all-powerful AI? Why not "if you had god-like powers, then..." or "if you could seize control of the world in a coup, then..."? Do others know I have this kind of power? The AI seems to be a very specific question. What are its limitations? If it doesn't have any, then why call it what you do? I think you need to set up the scenario in more detail.

Also, the moral questions, I think, are seen from an awfully naive world view. It's like something from a Chick tract, where it's either Satan, God, or exactly in between. I can't imagine these kinds of moral questions being relevant to anybody.

Summonere
10-14-2009, 07:07 PM
If the A.I. is all powerful, how could I have control over it?

That I could control it would mean that it is not all powerful.

If it's not all powerful, it couldn't achieve any of the things listed.

Dario D.
10-15-2009, 05:44 AM
If the A.I. is all powerful, how could I have control over it?

That I could control it would mean that it is not all powerful.
lol, you aren't drawing the line between "all-powerful" and "all-free-thinking" (or "all-independent"). You don't think that a robot that could do ANYTHING could have a human that it takes orders from? (such as its designer) Where it gets its decisions from doesn't affect the physical limits of its power, and ability to accomplish probably any matter-based feat. (aside from creating atoms / new matter from nothing at all, such as if it wanted to "create more universe" somewhere... you know, if it needed more metal, or something)


I don't believe that "an all-powerful A.I." would be able to "fix the world and bring happiness to mankind"
You're entitled to your opinion, but, knowing what I do about the capabilities of even moderately good intelligence affecting man (just on the psychological level), I believe that an A.I. - such that had the ability to rip the brain out of your head, and replace it with a perfect, 1000x more powerful one (absolutely devoid of flaws like irrationality, lack of foresight, etc) - would have more than enough means to "fix the world", and make people happy... (even if it left the flaws of man intact, but restructured the workings of life to be non-permitting of natural errors and grievances, and then just removed one's ability to perceive the errors of others)

Even if one might argue that our existing human mind would go crazy in such a perfect world, well, who says our existing mind can't be reformatted to operate on a different plane?

I don't believe that The Singularity (http://en.wikipedia.org/wiki/Technological_singularity) is biblical (anyone who believes in the Christian God, and the Bible, doesn't believe the Singularity will happen), but I do believe that it's 100% possible, and would happen in a very short time (60 years?), if there was no God.

Perdoon
10-15-2009, 07:23 AM
to rip the brain out of your head, and replace it with a perfect, 1000x more powerful one (absolutely devoid of flaws like irrationality, lack of foresight, etc) - would have more than enough means to "fix the world", and make people happy...

I'll just point out that ripping the brain from your head = no more you. A computer now controls your body, your personality/soul/spirit/whatever is gone.

If you're planning to have something like this, I'd suggest the super-computer get attached to the brain (so the person could choose whether to accept its suggestions/orders or override them), rather than replacing it. I'd have a hard time believing someone was still them if they didn't have their own brain.

And there's no way I'd allow anyone to perform surgery that permanently removed my brain, regardless of the science behind it... And I'm a scientific person.

Dario D.
10-15-2009, 07:39 AM
I meant upgrade your brain's capacity to think better, not actually change its thoughts and identity. Think of it as upgrading your computer: making it 1000x more powerful, and bug-free, but with all the same files and programs still on it... The only difference with the programs would be that they'd be updated to their fullest potential (Photoshop 5 would become Photoshop 60 Final), so that you could then do exactly what it is you've always wanted to do (free will, etc), only now without barriers. (such as having pathetic tools... ie, like having a worthless, broken mind with barely-functional thought-processes)

Chasing the Horizon
10-15-2009, 09:10 AM
I can't answer your poll, because if I had access to an all-powerful A.I. I would disable and then destroy it. No good can come from something like that in the long run. Humans weren't meant to live in a paradise, and I like my brain the way it is.

STKlingaman
10-15-2009, 09:25 AM
Yea, A.I. - artificial intelligence - will always be flawed, since it will come from the hands of humans, who are the most flawed creatures on the planet. Unplug it, crush it into tiny little pieces, and make toasters or something useful for us flawed beings.

Dario D.
10-16-2009, 08:40 AM
Ummm... is this because you two above are thinking that A.I. would be likely to disobey, and do its own thing, since it would have an "intelligent" mind?

Summonere
10-24-2009, 01:05 AM
How does the AI world account for Turing's Halting Problem?

Dumb it down for me. Fifty words or less.

Just asking.

GeorgeK
10-24-2009, 03:11 AM
If we were all "upgraded infinitely", then we would know what everyone else knows. There would be no need for interaction because you would already know the outcome. Life would be reduced to betting on where a spiked football would land on rocky terrain...that or monkey fights. That is not my idea of utopia.

Ali B
10-24-2009, 08:58 AM
Plus, if we create an A.I. that can upgrade our intelligence to the point of being maxed out, we wouldn't need the A.I. anymore. Besides, there has to be a reason we only use 10% of our brain. Maybe humans can't handle being much smarter.

Nivarion
10-25-2009, 12:34 PM
Plus, if we create an A.I. that can upgrade our intelligence to the point of being maxed out, we wouldn't need the A.I. anymore. Besides, there has to be a reason we only use 10% of our brain. Maybe humans can't handle being much smarter.


That old one really needs to go. We're only using 10% when we're not doing anything, like in the spots between REM sleep.

http://www.snopes.com/science/stats/10percent.asp

I'm not picking on you or anything. I actually used to buy that one too. It's just something that needs to be stomped into the dust of the earth, never to get back up again. Otherwise it'll keep coming back again and again.

Dario D.
11-03-2009, 03:45 AM
If we were all "upgraded infinitely", then we would know what everyone else knows.
No, we wouldn't be upgraded infinitely. We would be upgraded intelligently... as is most useful. An AI's idea of progress wouldn't be the blind, nonsensical raising of every bar... it would be a selective, smart raising of exactly just the bars that need to be raised (and not necessarily to their max potential, but their most useful potential).


Plus, if we create an A.I. that can upgrade our intelligence to the point of being maxed out, we wouldn't need the A.I. anymore.
No idea what you're claiming. :tongue However, we most likely wouldn't be maxed out by any means (unless an AI did in fact decide that being maxed out would be best. Still, with a maxed-out mind, our brains would have to think in an entirely new "flavor" in order for us to continue being happy... meaning to say, I don't think an A.I. would be dumb enough to leave us with regular old, broken human minds. It could change the very dimension of thought, after which we'd see standard human thinking as a foreign concept... like the difference between morse code and fluid, spoken language).


How does the AI world account for Turing's Halting Problem?
Assuming you mean that the Singularity could take too long (longer than the lifespan of the A.I.'s programmers), that objection assumes the A.I. wouldn't be in a responsive state during the Singularity. (Point being: the Singularity could take all eternity, for all an A.I. programmer would care, as long as the A.I. could still communicate and respond to people while compiling new data. For example: technically, Google's archiving of the web will never be complete, but its database is usable at all times, so it doesn't really matter.)

Also, someone (or some group) smart enough to be programming A.I. in the first place probably wouldn't use a "Turing complete" programming language, if they thought it would be an issue.

I don't understand the halting problem deeply, but I think I have the concept. By design, an A.I. would never stop trying to understand, anyway. The only important thing would be that it remains responsive while it "thinks", which would be the FIRST thing its designers nail down.
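The "keep responding while it thinks forever" idea above is really just ordinary concurrency. Here's a minimal sketch in Python (all names here, like ResponsiveLearner, are hypothetical, purely to illustrate the pattern): a background thread refines a shared result indefinitely, and callers can query whatever has been accumulated so far at any moment, so it never matters that the "learning" loop has no end.

```python
import threading
import time

class ResponsiveLearner:
    """Hypothetical sketch: never-ending background work, always-available answers."""

    def __init__(self):
        self._lock = threading.Lock()
        self._knowledge = 0                  # stand-in for accumulated data
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._learn, daemon=True)

    def start(self):
        self._worker.start()

    def _learn(self):
        # This loop may never finish -- like indexing the web -- and that's fine.
        while not self._stop.is_set():
            with self._lock:
                self._knowledge += 1
            time.sleep(0.001)

    def query(self):
        # Callers always get an immediate answer based on what's learned so far.
        with self._lock:
            return self._knowledge

    def stop(self):
        self._stop.set()
        self._worker.join()

learner = ResponsiveLearner()
learner.start()
time.sleep(0.05)
snapshot = learner.query()   # responds immediately, mid-"thought"
learner.stop()
```

The design choice is the same one the Google analogy relies on: completeness of the background task is decoupled from availability of the interface, so an unbounded (even non-halting) computation poses no practical problem for responsiveness.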

Yeshanu
11-03-2009, 05:30 AM
The problem is that power corrupts, and absolute power corrupts absolutely. What we say in a poll, what we believe with all our hearts we'd do in a situation where we had absolute control, might very probably have absolutely no relation to what we end up doing.

And even if we do try to do our best for others, remember with what the road to Hell is paved... ;)

Dario D.
11-03-2009, 05:41 AM
The problem is that power corrupts, and absolute power corrupts absolutely.
I'm not entrusting YOU with the creation of this A.I. ;) But yes indeed, A.I. in the wrong hands would mean the complete fulfillment of the designer's wishes, good or bad. Luckily, REAL A.I. would require the input of untold hundreds (maybe even thousands) of programmers, so there would be an excessive amount of precautionary politics going into the design.

Rufus Coppertop
11-04-2009, 08:29 AM
My problem with your poll, honestly, is that I don't relate to any of the answers to any of the questions, so there's no option for me to select. The answers hit the various extremes and my views are more in the middle, so there's no way I can honestly answer the poll.


I concur.

Also, I don't think in terms of "the cookie".