The A.I. Poll

Answer all 3 questions. (please don't vote for the "-" options, to keep the results readable)

  • (question 1) If you had full control over an all-powerful A.I., what would be your top priority? (Votes: 1, 3.8%)
  • Fixing the world, and bringing ultimate happiness to mankind. (Votes: 21, 80.8%)
  • Making myself god of all (then getting to the rest soon after). (Votes: 5, 19.2%)
  • - (Votes: 1, 3.8%)
  • (new question) Which statement do you most identify with? (Votes: 1, 3.8%)
  • I generally desire to connect with people in a very positive way (when/if possible, at least). (Votes: 17, 65.4%)
  • ^I wouldn't go that far... (Votes: 9, 34.6%)
  • - (Votes: 1, 3.8%)
  • (new question) Which of THESE statements do you most identify with? (Votes: 1, 3.8%)
  • I live for me. The cookie is mine. (Votes: 4, 15.4%)
  • I'm in the middle. I split the cookie in half with others. (Votes: 19, 73.1%)
  • I almost always give my cookie to others. (Votes: 3, 11.5%)

  Total voters: 26

Dario D.

This is a redo of a previous 2-question poll I did on here. ...for some reason, nobody ever seems interested in taking Polldaddy polls on forums - not even non-forum-members, who were the biggest reason to use Polldaddy in the first place (so that they could vote too. More votes! ...or not).

There's always a much bigger turnout this way.
 

Dario D.

Dang, I hate it when I need to edit a poll. (oh well, just aesthetic details)
 

RobinGBrown

New kid, be gentle!
I find the questions quite strange; they look like they're leading.

i.e., you have something in mind and you want to confirm it with these questions, as opposed to asking questions in order to build a theory.
 

DeleyanLee

Writing Anarchist
My problem with your poll, honestly, is that I don't relate to any of the answers to any of the questions, so there's no option for me to select. The answers hit the various extremes and my views are more in the middle, so there's no way I can honestly answer the poll.

Dario, you are making me curious what you're trying to do with this poll since you're repeatedly trying to get answers to it. Do you mind sharing the reasoning behind it?
 

Dario D.

I find the questions quite strange; they look like they're leading.
Not sure what you see in the questions there. Two of them are, "Which statement do you most identify with?" :e2shrug: And the first question is basically just, "Which of these is your priority?"

My problem with your poll, honestly, is that I don't relate to any of the answers to any of the questions, so there's no option for me to select. The answers hit the various extremes and my views are more in the middle, so there's no way I can honestly answer the poll.
Hmm, I don't understand. Question 1 asks which of the 2 choices you'd prefer as a top priority (also, these are the only 2 choices relevant to the novel about A.I. I'm writing). Question 2 asks whether you're a "try to connect with people" type, or "not exactly" (the latter option means you're either in the middle, or "negative" on the issue. Again, these are the only choices relevant to me; I don't need to know whether one's answer is "neutral" or "negative"... I just need to know whether they are or aren't "positive"). And the Question 3 choices visibly cover the whole spectrum, without anything implied in the wording of the choices.

Not sure where the confusion lies. :e2shrug:
 

DeleyanLee

Writing Anarchist
Not sure what you see in the questions there. Two of them are, "Which statement do you most identify with?" :e2shrug: And the first question is basically just, "Which of these is your priority?"

Hmm, I don't understand. Question 1 asks which of the 2 choices you'd prefer as a top priority (also, these are the only 2 choices relevant to the novel about A.I. I'm writing). Question 2 asks whether you're a "try to connect with people" type, or "not exactly" (the latter option means you're either in the middle, or "negative" on the issue.

Perhaps this will help:

question 1) If you had full control over an all-powerful A.I., what would be your top priority?
Fixing the world, and bringing ultimate happiness to mankind.
Making myself god of all (then getting to the rest soon after).

Neither of these is a priority for me, so I can't answer the question. I don't believe that "an all-powerful A.I." would be able to "fix the world and bring happiness to mankind" and I find the idea of becoming a god abhorrent so I honestly can't answer the question.

(new question) Which statement do you most identify with?
I generally desire to connect with people in a very positive way (when/if possible, at least).
^I wouldn't go that far...

I identify with neither of these because "people" is too general a term. I prefer to deal with individuals, because that's who is important in my life. "I wouldn't go that far" is one of those catch-all phrases and, thus, doesn't mean anything to me, so I can't identify with it.

(new question) Which of THESE statements do you most identify with?
I live for me. The cookie is mine.
I'm in the middle. I split the cookie in half with others.
I almost always give my cookie to others.

It depends on who the person is that I'd be sharing the cookie with.

Again, these are the only choices relevant to me; I don't need to know whether one's answer is "neutral" or "negative"... I just need to know whether they are or aren't "positive"). And the Question 3 choices visibly cover the whole spectrum.

Might I suggest that the reason you're not getting responses is that what you've decided you don't need to know is exactly what the majority of people would actually respond with?

Not sure where the confusion lies. :e2shrug:

The confusion lies in that not everyone sees the questions the same way you do and there's little to no "give" in the answers provided.
 

DrZoidberg

aka TomOfSweden
Why an all-powerful AI? Why not "if you had god-like powers, then..." or "if you could seize control of the world in a coup, then..."? Do others know I have this kind of power? The AI seems a very specific framing. What are its limitations? If it doesn't have any, then why call it what you do? I think you need to set up the scenario in more detail.

Also, the moral questions, I think, are posed from an awfully naive worldview. It's as if they're from a Chick tract, where it's either Satan, God, or exactly in between. I can't imagine these kinds of moral questions being relevant to anybody.
 

Summonere

If the A.I. is all powerful, how could I have control over it?

That I could control it would mean that it is not all powerful.

If it's not all powerful, it couldn't achieve any of the things listed.
 

Dario D.

If the A.I. is all powerful, how could I have control over it?

That I could control it would mean that it is not all powerful.
lol, you aren't drawing the line between "all-powerful" and "all-free-thinking" (or "all-independent"). You don't think a robot that could do ANYTHING could have a human it takes orders from (such as its designer)? Where it gets its decisions from doesn't affect the physical limits of its power, or its ability to accomplish probably any matter-based feat (aside from creating atoms / new matter from nothing at all, such as if it wanted to "create more universe" somewhere... you know, if it needed more metal, or something).

I don't believe that "an all-powerful A.I." would be able to "fix the world and bring happiness to mankind"
You're entitled to your opinion, but, knowing what I do about the capability of even moderately good intelligence to affect man (just on the psychological level), I believe that an A.I. - one with the ability to rip the brain out of your head and replace it with a perfect, 1000x more powerful one (absolutely devoid of flaws like irrationality, lack of foresight, etc.) - would have more than enough means to "fix the world" and make people happy... (even if it left the flaws of man intact, but restructured the workings of life to be non-permitting of natural errors and grievances, and then simply removed one's ability to perceive the errors of others)

Even if one might argue that our existing human mind would go crazy in such a perfect world, well, who says our existing mind can't be reformatted to operate on a different plane?

I don't believe that the Singularity is biblical (anyone who believes in the Christian God, and the Bible, doesn't believe the Singularity will happen), but I do believe that it's 100% possible, and would happen in a very short time (60 years?), if there were no God.
 

Perdoon

to rip the brain out of your head and replace it with a perfect, 1000x more powerful one (absolutely devoid of flaws like irrationality, lack of foresight, etc.) - would have more than enough means to "fix the world" and make people happy...

I'll just point out that ripping the brain from your head = no more you. A computer now controls your body; your personality/soul/spirit/whatever is gone.

If you're planning to have something like this, I'd suggest the super-computer get attached to the brain (so the person could choose whether to accept its suggestions/orders or override them), rather than replacing it. I'd have a hard time believing someone was still them if they didn't have their own brain.

And there's no way I'd allow anyone to perform surgery that permanently removed my brain, regardless of the science behind it... And I'm a scientific person.
 

Dario D.

I meant upgrade your brain's capacity to think better, not actually change its thoughts and identity. Think of it as upgrading your computer: making it 1000x more powerful, and bug-free, but with all the same files and programs still on it... The only difference with the programs would be that they'd be updated to their fullest potential (Photoshop 5 would become Photoshop 60 Final), so that you could then do exactly what you've always wanted to do (free will, etc.), only now without barriers (such as having pathetic tools... i.e., a worthless, broken mind with barely-functional thought processes).
 

Chasing the Horizon

Blowing in the Wind
I can't answer your poll, because if I had access to an all-powerful A.I. I would disable and then destroy it. No good can come from something like that in the long run. Humans weren't meant to live in a paradise, and I like my brain the way it is.
 

STKlingaman

Followed the Red Brick Road
Yea, A.I. - artificial intelligence - will always be flawed, since it will come from the hands of humans, who are the most flawed creatures on the planet. Unplug it, crush it into tiny little pieces, and make toasters, or something useful for us flawed beings.
 

Dario D.

Ummm... is this because you two above are thinking that A.I. would be likely to disobey, and do its own thing, since it would have an "intelligent" mind?
 

Summonere

How does the AI world account for Turing's Halting Problem?

Dumb it down for me. Fifty words or less.

Just asking.
 

GeorgeK

ever seeking
If we were all "upgraded infinitely", then we would know what everyone else knows. There would be no need for interaction because you would already know the outcome. Life would be reduced to betting on where a spiked football would land on rocky terrain...that or monkey fights. That is not my idea of utopia.
 

Ali B

Just Hanging Around
Plus, if we create an A.I. that can upgrade our intelligence to the point of being maxed out, we wouldn't need the A.I. anymore. Besides, there has to be a reason we only use 10% of our brain. Maybe humans can't handle being much smarter.
 

Nivarion

Brony level >9000
Plus, if we create an A.I. that can upgrade our intelligence to the point of being maxed out, we wouldn't need the A.I. anymore. Besides, there has to be a reason we only use 10% of our brain. Maybe humans can't handle being much smarter.


That old one really needs to go. We're only using 10% when we're not doing anything, like in the spots between REM sleep.

http://www.snopes.com/science/stats/10percent.asp

I'm not picking on you or anything; I actually used to buy that one too. It's just something that needs to be stomped into the dust of the earth, never to get back up again. Otherwise it'll keep coming back again and again.
 

Dario D.

If we were all "upgraded infinitely", then we would know what everyone else knows.
No, we wouldn't be upgraded infinitely. We would be upgraded intelligently... as is most useful. An A.I.'s idea of progress wouldn't be the blind, nonsensical raising of every bar... it would be a selective, smart raising of exactly the bars that need to be raised (and not necessarily to their max potential, but to their most useful potential).

Plus, if we create an A.I. that can upgrade our intelligence to the point of being maxed out, we wouldn't need the A.I. anymore.
No idea what you're claiming. :tongue However, we most likely wouldn't be maxed out by any means (unless an A.I. did in fact decide that being maxed out would be best. Still, with a maxed-out mind, our brains would have to think in an entirely new "flavor" in order for us to continue being happy... meaning to say, don't assume an A.I. would be dumb enough to leave us with regular old, broken human minds. It could change the very dimension of thought, after which we'd see standard human thinking as a foreign concept... like the difference between Morse code and fluid, spoken language).

How does the AI world account for Turing's Halting Problem?
If you mean that the Singularity could take too long (longer than the lifespan of the A.I.'s programmers), that assumes the A.I. wouldn't be in a responsive state during the Singularity. (Point being: the Singularity could take all eternity, for all an A.I. programmer would care, as long as the A.I. could still communicate and respond to people while compiling new data. For example: technically, Google's archiving of the web will never be complete... but its database is usable at all times, so it doesn't really matter.)
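
(To make the Google comparison concrete, here's a toy Python sketch of "never finished, but usable the whole time". Everything in it - the fake crawler, the page names - is invented purely for illustration, not anybody's real design:)

    import threading
    import time

    # Toy model: a background "indexer" that never completes, plus a
    # query function that is answerable at every moment.
    index = set()
    lock = threading.Lock()

    def crawl_forever():
        n = 0
        while True:                      # this loop never finishes...
            with lock:
                index.add("page-%d" % n)
            n += 1
            time.sleep(0.01)

    def query(term):
        with lock:                       # ...but lookups work at all times
            return term in index

    threading.Thread(target=crawl_forever, daemon=True).start()
    time.sleep(0.2)
    print(query("page-3"))               # True once the crawler has reached it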

Also, someone (or some group) smart enough to be programming A.I. in the first place probably wouldn't use a "Turing complete" programming language, if they thought it would be an issue.

I don't understand the halting problem deeply, but I think I have the concept. By design, an A.I. would never stop trying to understand anyway. The only important thing would be that it remain responsive while it "thinks", which would be the FIRST thing its designers nail down.
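
(And for anyone who wants the halting problem dumbed down to fifty-ish words: Turing proved that no program can decide, for every program/input pair, whether that program eventually halts. A rough Python sketch of the classic contradiction - the halts() oracle here is purely hypothetical, which is the whole point:)

    def halts(program, data):
        """Hypothetical oracle: True iff program(data) eventually halts."""
        raise NotImplementedError    # Turing showed no total version can exist

    def paradox(program):
        # Do the opposite of whatever halts() predicts for the program
        # applied to itself.
        if halts(program, program):
            while True:              # predicted to halt? then loop forever
                pass
        return                       # predicted to loop? then halt at once

    # paradox(paradox) would make halts(paradox, paradox) wrong either
    # way, so no such halts() can exist - regardless of the language.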
 

Yeshanu

Elf Queen
The problem is that power corrupts, and absolute power corrupts absolutely. What we say in a poll, what we believe with all our hearts we'd do in a situation where we had absolute control, might very probably have absolutely no relation to what we end up doing.

And even if we do try to do our best for others, remember with what the road to Hell is paved... ;)
 

Dario D.

The problem is that power corrupts, and absolute power corrupts absolutely.
I'm not entrusting YOU with the creation of this A.I. ;) But yes indeed, A.I. in the wrong hands would mean the complete fulfillment of the designer's wishes, good or bad. Luckily, REAL A.I. would require the input of untold hundreds (maybe even thousands) of programmers, so there would be an excessive amount of precautionary politics going into the design.
 

Rufus Coppertop

My problem with your poll, honestly, is that I don't relate to any of the answers to any of the questions, so there's no option for me to select. The answers hit the various extremes and my views are more in the middle, so there's no way I can honestly answer the poll.

I concur.

Also, I don't think in terms of "the cookie".