
Someone who's familiar with game theory and/or loves puzzles



nowmorethanever
05-08-2012, 08:28 AM
I'm looking for something an AI would ask a human character. If she guesses/gives the right answer, then the AI will give her information. If she doesn't, then her friends will die.

The idea is to put in something quirky, not from Wikianswers. Also, let's not rely on formal logic; probability is probably more interesting.

Sorry if this is vague... I need help :)

Drachen Jager
05-08-2012, 09:16 AM
It should probably relate to the story, no? If it ties back into something where the humans are emotionally attached, the computer could try to trick them into choosing the wrong path because of their entanglement.

sunandshadow
05-08-2012, 09:43 AM
What are the AI's motives?

RichardGarfinkle
05-08-2012, 12:09 PM
Do you mean an actual artificial intelligence or what we call AI today?

If the former, it's a character and has its own motives and ways of thinking.

If the latter it's a tool created by a programmer and reflects the motives and ways of the programmer.

In either case, what's the question for?
Does the AI have a need for the answer, or is it simply testing the human?

If it's a test, what is it testing? Is it a test of the person's knowledge or skill in something, depth of understanding, intelligence, suitability for a job, etc.?

More information please.


Mac H.
05-08-2012, 03:02 PM
My quick thoughts:

(1) Perhaps have the 'real' test be something behind the obvious test.

e.g.: The A.I isn't testing whether you know the answer, but whether you're going to take the opportunity to cheat.

But, if so, then what is the right response? If someone's life is at stake and you don't care enough for their life to cheat in a silly test - then perhaps you don't deserve to win.

(2) Perhaps the A.I could give the human some kind of Turing Test. After all - it is probably curious if humans are genuinely self-aware.

Just because humans are biologically programmed to act like they are self-aware doesn't mean they truly are self-aware - at least not in the same way that an A.I is.

It makes sense - would a human show 'mercy' to a toaster if it didn't believe that the toaster had true human emotions? Maybe the A.I wants to see if humans really have true A.I emotions .. and so deserve the same kind of compassion that it would naturally show a fellow A.I.

Imagine that someone goes fishing for a hobby - they might justify their actions on the basis that a fish doesn't feel pain in the same way a human does. Well - humans don't feel pain in the same way that A.Is do. So why should an A.I treat us any differently than we treat a fish ... a creature that we would happily watch suffocate while thrashing around on the dock?

After all, just like we believe that fish don't have the understanding to really suffer 'true' pain .. the A.I probably believes that humans don't suffer either .. not in the very real way that an A.I does.

At least that makes it natural - and not just a puzzle for the sake of a plot point.

Mac
(PS: Could you convince a machine that humans should be treated well - on a moral basis rather than a 'we will hurt you' basis?)

nowmorethanever
05-08-2012, 10:11 PM
Wow - thank you all! This makes me think about what I really want from my character.

"(1) Perhaps have the 'real' test be something behind the obvious test."

Absolutely. So far the story is this:

A girl stands before a group of people who're actually meatsuits for a hive AI. The AI asks her something to test whether she realizes that the hive mind is based on the mind of a dead person who saved her life a long time ago. The real, deep answer the AI wants is why the dead person saved that girl at all. The realization might be something like - "he felt pity." Compassion is something the AI can't understand, but since it got its answer, it gives the girl information that she needs to save her friends. And it's a lightbulb moment - the AI just did something compassionate, although it can't "feel" the internal reward.

Vague, I know.

Snitchcat
05-08-2012, 10:45 PM
So, building on Mac's suggestion (1), how complicated did you want this test? I might have missed that.

If you build a multi-layered test into this part of the story and it's near the beginning, you could provide yourself with even more depth. Obviously, it makes the story more complicated.

OTOH, if it's towards the middle or end, then you could save the rest of it for a sequel.

Or just keep it simple or dual-layered.

As a hive AI, I might ask a question that focuses on the psychological concept of "I", that is, the individual. A hive mind is many thinking as one. So why does the AI need a hive mind to think, while a human requires only one mind (most of the time) and can easily form a "hive," so to speak, at any given moment (if he/she so chose)?

Pardon me, please, if none of the above makes sense. It's late. :)

Michael Davis
05-09-2012, 12:43 AM
I used to work Operations Research for the DOD/IC communities. If you want to make it real, I suggest you check the web for background info on rule-based logic (old AI theory), neural networks (which emulate the way humans think and reach decisions, and which more recent systems are built on), and set-matching algorithms (like the IBM system created to play Jeopardy!). Regarding game theory (von Neumann): it isn't very versatile for handling loosely formed or randomly structured problems, whereas neural networks and set-matching algorithms are better suited to such problems.
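
To make the rule-based idea concrete, here's a minimal forward-chaining sketch in Python (old-style rule-based AI); the facts and rule names are made up purely for illustration:

# Minimal sketch of old-style rule-based AI: forward chaining over if-then rules.
# The facts and rule names below are invented for illustration only.
facts = {"answer_is_correct", "friends_in_danger"}
rules = [
    ({"answer_is_correct"}, "release_information"),
    ({"release_information", "friends_in_danger"}, "friends_can_be_saved"),
]

changed = True
while changed:          # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # ends up containing both derived conclusions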

Hope that helps.

Torgo
05-09-2012, 12:54 AM
Perhaps some variation on the Prisoners' Dilemma? It's about cooperation... A hive-mind would split the difference every time, but a human would consider selfish or altruistic solutions, depending on personality.
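
If it helps to pin down the terms, here's a minimal sketch of the usual payoff table in Python (the jail-year numbers are just the standard illustrative ones, not anything specific to your story):

# Classic Prisoner's Dilemma payoffs: years of jail I serve given my move
# and the other player's move. The exact numbers are illustrative.
PAYOFF = {
    ("cooperate", "cooperate"): 1,  # both stay silent
    ("cooperate", "defect"):    3,  # I stay silent, they betray me
    ("defect",    "cooperate"): 0,  # I betray, they stay silent
    ("defect",    "defect"):    2,  # we betray each other
}

def years(my_move, their_move):
    return PAYOFF[(my_move, their_move)]

print(years("cooperate", "cooperate"))  # 1 year each if both cooperate
print(years("defect", "cooperate"))     # 0 years for a lone defector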

Torgo
05-09-2012, 12:55 AM
Perhaps the AI would approach her on an internet forum, pretending to be a human, and ask her what a fictional AI would ask a fictional character if it wanted her to learn a lesson of some kind.

DrZoidberg
05-09-2012, 11:19 AM
If the hive mind is trying to figure out whether the girl recognises the personality of the person their hive mind is based on, then why not just ask something personal? You could ask a question that itself contains a clue. Something like, "You're placed on a deserted island and allowed to bring one object to help your survival: a knife or red shoes." And the red shoes would have some sort of significance in her relationship with this person.

nowmorethanever
05-09-2012, 04:15 PM
Thanks, everyone!

The variation of the prisoner's dilemma is interesting, especially because the hive always cooperates with itself... I would presume.

Torgo
05-09-2012, 04:32 PM
"The variation of the prisoner's dilemma is interesting, especially because the hive always cooperates with itself... I would presume."

Thinking about it, in the classic PD scenario, the hive would walk off scot-free every time, because members of the hive cannot be prevented from discussing strategy (I would assume). It's only humans, who can't confer, who might choose the solution that gives them both jail time...
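
A rough sketch of that contrast, reusing the standard jail-year payoffs (the strategy labels and numbers are purely illustrative):

# Hive pair vs. two isolated humans in a one-shot Prisoner's Dilemma.
# "C" = cooperate (stay silent), "D" = defect (betray); numbers are the
# usual illustrative jail terms.
PAYOFF = {("C", "C"): 1, ("C", "D"): 3, ("D", "C"): 0, ("D", "D"): 2}

def sentences(move_a, move_b):
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

# A hive mind is effectively one player in two bodies, so it always
# coordinates on mutual cooperation and gets the light sentence.
print(sentences("C", "C"))  # (1, 1)

# Two self-interested players who can't confer each find defection
# dominant (0 < 1 and 2 < 3 from their own point of view), so both betray.
print(sentences("D", "D"))  # (2, 2) -- the classic mutual-betrayal trap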

nowmorethanever
05-10-2012, 03:55 PM
Thanks, Torgo.

However, this depends on what is defined as a win.