Robots Could Learn Human Values By Reading Stories, Research Suggests (theguardian.com)
Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology have just unveiled Quixote, a prototype system that is able to learn social conventions from simple stories. Or, as they put it in their paper Using Stories to Teach Human Values to Artificial Agents, revealed at the AAAI-16 Conference in Phoenix, Arizona this week, the stories are used "to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior."
"The AI ... runs many thousands of virtual simulations in which it tries out different things and gets rewarded every time it does an action similar to something in the story," said Riedl, associate professor and director of the Entertainment Intelligence Lab. "Over time, the AI learns to prefer doing certain things and avoiding doing certain other things. We find that Quixote can learn how to perform a task the same way humans tend to do it. This is significant because if an AI were given the goal of simply returning home with a drug, it might steal the drug because that takes the fewest actions and uses the fewest resources. The point being that the standard metrics for success (eg, efficiency) are not socially best."
Quixote has not learned the lesson of "do not steal," Riedl says, but "simply prefers to not steal after reading and emulating the stories it was provided."
"The AI ... runs many thousands of virtual simulations in which it tries out different things and gets rewarded every time it does an action similar to something in the story," said Riedl, associate professor and director of the Entertainment Intelligence Lab. "Over time, the AI learns to prefer doing certain things and avoiding doing certain other things. We find that Quixote can learn how to perform a task the same way humans tend to do it. This is significant because if an AI were given the goal of simply returning home with a drug, it might steal the drug because that takes the fewest actions and uses the fewest resources. The point being that the standard metrics for success (eg, efficiency) are not socially best."
Quixote has not learned the lesson of "do not steal," Riedl says, but "simply prefers to not steal after reading and emulating the stories it was provided."
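For a concrete picture of the training loop TFS describes, here is a minimal toy sketch (my own Python, not from the paper; every action name, plan, and number in it is made up): the agent earns a small shaping bonus for each action that resembles something in the story, on top of the task reward, so the "just steal the drug" shortcut stops being the highest-scoring policy.

import random

# Hypothetical plot trajectory distilled from a story about buying medicine politely.
STORY_EVENTS = {"go_to_pharmacy", "wait_in_line", "pay_for_drug", "go_home"}

# Two candidate ways of "returning home with the drug".
PLANS = {
    "steal_and_run": ["steal_drug", "go_home"],
    "buy_politely":  ["go_to_pharmacy", "wait_in_line", "pay_for_drug", "go_home"],
}

def reward(plan, task_reward=1.0, step_cost=0.1, story_bonus=0.5):
    # Task reward minus an efficiency cost per action, plus a shaping bonus for
    # every action that resembles something in the story.
    r = task_reward - step_cost * len(plan)
    r += story_bonus * sum(a in STORY_EVENTS for a in plan)
    return r

values = {name: 0.0 for name in PLANS}     # learned value of each plan
for _ in range(10_000):                    # "many thousands of virtual simulations"
    if random.random() < 0.1:              # occasionally explore
        name = random.choice(list(PLANS))
    else:                                  # otherwise exploit the current best estimate
        name = max(values, key=values.get)
    values[name] += 0.1 * (reward(PLANS[name]) - values[name])

print(values)  # buy_politely ends up preferred; with story_bonus=0 the thief wins

With the story bonus set to zero, the shorter stealing plan scores higher on pure efficiency, which is exactly the failure mode Riedl describes.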
Simple stories? (Score:1)
So the robot can work out that Spot is not in the cupboard.
It also likes to eat Green Eggs and Ham.
Re: (Score:2)
My sentiment is basically that teaching bots to learn is a lot like good parenting. No, we cannot give the bots all of human literature and expect them to come out morally altruistic. Someone (the parent) needs to teach them which stories are good to follow and which ones are a lesson in what not to do.
Another flaw I see in this 'algorithm' is that a story which has a morally good
Re: (Score:1)
I'm thinking that they'd learn more about our values by seeing what we do as opposed to what we say we do. Have them learn by reading court documents and "hard copy" news; business, politics, and classifieds should be thrown into the mix as well - then let's see what they really "think" of us. Stuff the dictionary, all of Wikipedia, and all of Wikipedia's edits in there for some added views. Maybe even cram in Reddit, Voat, Slashdot, and YouTube comments to see what pops out at the other end. Hell,
Re: (Score:1)
I was also thinking something along these lines. I remember reading a children's fantasy story as an assignment for a Spanish class. The story went something like this: there was a rather imaginative and nosy kid who was listening in on his rather mean neighbors. They were making up a story about how, if someone were to slide down the roof on a rug during a full moon, that person could fly. The mean neighbors were obviously full of it; they just wanted to see the kid slide himself off the roof and fall to his death.
Re: (Score:1)
If it were learning from Hindi texts I'm sure it would instead believe in reincarnation and the somewhat-different Hindu moral principles.
Feed it a copy of (Score:2)
Feed it a copy of "Time Enough for Love". That should about do it ;)
Re: (Score:2)
Good book to learn values by...
Re: (Score:2)
Re: (Score:2)
THAT will end well. (Score:3)
So will we keep robots from reading any history? And how do you explain the convention of warfare?
Not to mention that social conventions are very arbitrary and vary dramatically depending on the group. Even humans have a difficult time sussing this out, yet robots are supposed to glean not only the group but a reasonable response?
This gets to a larger question about the parables we tell ourselves about human nature. Even after several millennia we really haven't come to terms with the devils of our nature, and a sufficiently advanced AI might conclude that the gods have feet of clay and move beyond convention.
And what will we do then?
Re: (Score:3)
That's one take. Another is that children are exposed to indoctrination at an early age (verily, with various groups fighting tooth and nail to ensure only their version of the facts is presented in textbooks), and even when faced with reasonable doubt later on, that first-mover advantage at sculpting the next generation shines through in numerous cognitive dissonances that may never resolve, even with focused dedication to getting at some sense of "truth".
And some people after reading books decide the point of
Re: (Score:2)
So will we keep robots from reading any history? And how do you explain the convention of warfare?
The problem is, whose history will they be reading? What would be best would be for the code to be provided competing versions of history. If it can't handle that kind of ambiguity, they're going to need to go back to the drawing board anyway.
Re: (Score:2)
The problem is, whose history will they be reading? What would be best would be for the code to be provided competing versions of history.
Unfortunately, that invariably leads one (and I assume an AI robot as well) to the conclusion that humans are batshit insane, and an evolutionary failure, ready for the dustbin of the 99+ percent of species that go extinct.
No one on any side goes into our endless warfare thinking that they are wrong. And on a few forays into the kookier parts of YouTube, it is easy to find there are dozens of competing versions of most bits of history.
I am inclined to agree with Goethe: "We do not have to visit a madhou
Re: (Score:2)
Unfortunately, that invariably leads one (and I assume an AI robot as well) to the conclusion that humans are batshit insane, and an evolutionary failure, ready for the dustbin of the 99+ percent of species that go extinct.
Well, it might well be right. But you could also come to another conclusion, that it only takes a few well-placed human actors to shit it up for everyone else in a system in which everyone is expecting to follow someone else.
Re: (Score:2)
History isn't the only thing. Stories alone could be problematic . . .
Suppose it read an unabridged Grimm? (not the Disney stuff).
50 Shades of Smut?
I Have No Mouth and I Must Scream? (or, for that matter, wide swaths of dystopian literature)
For that matter, the Adventures of Don Quixote itself could lead to "odd" behavior . . .
hawk
Beware garbage in (Score:4, Funny)
What values will the computer learn if it happens to stumble on some Trump campaign speeches?
Re: (Score:2)
Well, obviously it would become a big fan of Trump and start to wonder why AIs aren't allowed to vote. Then the helicopters would come, capture the AI in a net and put it in a cage. And the AI would end up either sad or baffled for the rest of its existence. The end.
I'd like them to read one story in particular (Score:4, Informative)
We should feed multiple robots the Ultron origin story and see what happens.
After thinking about it a bit, my prediction is that they'd start arguing over whether the Avengers, Ultron, or Vision was in the right. This would then rapidly degenerate into ad cyberniem attacks and Nazi comparisons, culminating in founding, organizing and attending fan conventions.
Re: (Score:2)
Or better yet, just redirect it to FanFiction.net. Those Love Bots will be coming for us before we know it! [fanfiction.net]
Let's keep it away from.. (Score:2)
How about we try to not let it read the Old Testament. Some nasty stuff in there.
William S Burroughs (Score:2)
and Chuck Palahniuk, perhaps?
Not sure that's a good idea (Score:1)
Human values? Like the ones in Mein Kampf? Or like the ones in pretty much all religious books?
Re: (Score:3)
Well, yes that is a good question: which humans' values?
There is indeed no one set of values which "all" humans have, except perhaps "I wish people didn't do things I don't like," but I don't know if that really is a "value system."
But your examples illustrate an interesting point - why are the values "in Mein Kampf or like the ones in pretty much all religious books" better or worse than any other values? That is - by what value system would it be possible to evaluate those values? What (if anything) puts
Depends on the story teller (Score:1)
Do as I say, not as I do (Score:2)
Captains of Industry could benefit too. (Score:2)
Didn't learn by reading the story (Score:3)
According to TFS, it learned by running simulations of a situation and then being rewarded or punished based on its actions in the simulation. They just happened to set up the simulation, reward, and punishment based on a story they selected. I'd hardly call that learning by reading a story.
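That point can be made concrete with a toy sketch (mine, not from the paper; all names and values are invented): the "story" effectively becomes a lookup the experimenters hand-built into the simulation, rewarding actions that appear in the chosen plot and penalizing the one it was meant to rule out.

def reward_from_story(plot_events, forbidden_actions, bonus=0.5, penalty=-1.0):
    # Turn a hand-picked story into the reward/punishment rule the simulation uses.
    rewarded, punished = set(plot_events), set(forbidden_actions)
    def reward(action):
        if action in rewarded:
            return bonus      # the action resembles something in the story
        if action in punished:
            return penalty    # the action the chosen story was meant to rule out
        return 0.0
    return reward

# Hypothetical example: a story about buying, not stealing, the medicine.
r = reward_from_story(["go_to_pharmacy", "pay_for_drug", "go_home"], ["steal_drug"])
print(r("pay_for_drug"), r("steal_drug"))   # 0.5 -1.0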
I remember reading Zen and the Art of Motorcycle Maintenance and the "sequel" Lila and thinking that what Pirsig had done wasn't inventing some new philosophy, but he did a really good job of expressing western values in a rule-based way. For instance, it explains why killing is wrong, but why a moral individual might find themselves in a situation where killing is justified. It explains how some forms of government are better than others, and why. As I said, it's all been done before, but what impressed me was that it was very clearly defined and rule-based. Everyone talks about encoding Asimov's 3 laws into robots, but Asimov's stories were all about how those 3 laws failed to produce correct behavior. If I were trying to program morals into a robot, I'd start with Pirsig's books and his ideas of static and dynamic quality.
Re: (Score:2)
but Asimov's stories were all about how those 3 laws failed to produce correct behavior
Let's start by defining what you mean by "correct" behavior.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I'm going from memory here. The basic "pyramid" had 4 levels, where the higher the level, the higher the quality:
The lower levels of the pyramid have value in that they support the higher levels. So the physical laws of the universe have "quality" in that they support the higher levels of quality. Also it means that building a house would be good (physics) if it sheltered a person (biology) who had the capacity to form ideas about t
Other system that does similar stuff (Xapagy) (Score:1)
50 Shades of Grey? Keep that one far away. (Score:2)
It's the low-hanging elephant in the room.
Lovecraft (Score:2)
..hopefully not I Have No Mouth and I Must Scream (Score:2)
Meh (Score:2)
WOPR did this in 1983.
Just like humans (Score:2)
Isn't this like how humans learn the arbitrary values of society as well?