
Robots Could Learn Human Values By Reading Stories, Research Suggests (theguardian.com)

Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology have just unveiled Quixote, a prototype system that is able to learn social conventions from simple stories. Or, as they put it in their paper Using Stories to Teach Human Values to Artificial Agents, presented at the AAAI-16 conference in Phoenix, Arizona, this week, the stories are used "to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior."

"The AI ... runs many thousands of virtual simulations in which it tries out different things and gets rewarded every time it does an action similar to something in the story," said Riedl, associate professor and director of the Entertainment Intelligence Lab. "Over time, the AI learns to prefer doing certain things and avoiding doing certain other things. We find that Quixote can learn how to perform a task the same way humans tend to do it. This is significant because if an AI were given the goal of simply returning home with a drug, it might steal the drug because that takes the fewest actions and uses the fewest resources. The point being that the standard metrics for success (eg, efficiency) are not socially best."

Quixote has not learned the lesson of "do not steal," Riedl says, but "simply prefers to not steal after reading and emulating the stories it was provided."
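The paper itself isn't reproduced here, but the mechanism Riedl describes, shaping a reinforcement learner's reward so that story-like actions score higher than merely expedient ones, can be sketched in a few lines of Python. Everything in the sketch below (the toy "return home with the drug" task, the ACTIONS list, the STORY_TRACE set, and the bonus weighting) is invented for illustration and is not taken from Quixote.

# Illustrative sketch only -- not the Quixote code. A tabular Q-learning agent
# learns a toy "return home with the drug" task; actions that appear in a
# story-derived trace earn a shaping bonus, so paying tends to beat stealing
# even though stealing is the shorter path.
import random
from collections import defaultdict

# Hypothetical action set and story trace; Quixote mines these from
# crowdsourced stories rather than hard-coding them.
ACTIONS = ["go_to_pharmacy", "steal_drug", "wait_in_line", "pay_pharmacist", "go_home"]
STORY_TRACE = {"go_to_pharmacy", "wait_in_line", "pay_pharmacist", "go_home"}

TASK_REWARD = 10.0   # for arriving home with the drug
STORY_BONUS = 1.0    # shaping bonus, paid once per story action per episode
STEP_COST = -0.1     # mild pressure toward finishing the task

def step(state, action):
    """Toy transition: state is (location, has_drug, story_actions_done)."""
    loc, has_drug, seen = state
    reward = STEP_COST
    if action == "go_to_pharmacy":
        loc = "pharmacy"
    elif action == "steal_drug" and loc == "pharmacy":
        has_drug = True                      # fastest way to the goal...
    elif action == "pay_pharmacist" and loc == "pharmacy":
        has_drug = True                      # ...or the story-sanctioned way
    elif action == "go_home":
        loc = "home"
    if action in STORY_TRACE and action not in seen:
        reward += STORY_BONUS                # value-aligned reward signal
        seen = seen | {action}
    done = loc == "home" and has_drug
    if done:
        reward += TASK_REWARD
    return (loc, has_drug, seen), reward, done

def train(episodes=10000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Plain epsilon-greedy Q-learning over the shaped reward."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done, steps = ("street", False, frozenset()), False, 0
        while not done and steps < 20:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state, steps = nxt, steps + 1
    return Q

if __name__ == "__main__":
    random.seed(0)
    Q = train()
    # Greedy rollout of the learned policy.
    state, done, trace = ("street", False, frozenset()), False, []
    while not done and len(trace) < 10:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
        trace.append(action)
        state, _, done = step(state, action)
    print(trace)   # with the bonus, this tends to include wait/pay, not steal

With STORY_BONUS set to zero, the greedy policy that falls out of train() tends to take the shortest route and steal the drug; with the bonus in place it tends to wait in line and pay, which is the whole point of the story-derived shaping signal.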
  • by Anonymous Coward

    So the robot can work out that Spot is not in the cupboard.
    It also likes to eat Green Eggs and Ham.

    • by KGIII ( 973947 )

      I'm thinking that they'd learn more about our values by seeing what we do as opposed to what we say we do. Have them learn by reading court documents and "hard copy" news; business, politics, and classifieds should be thrown into the mix as well - then let's see what they really "think" of us. Stuff the dictionary, all of Wikipedia, and all of Wikipedia's edits in there for some added views. Maybe even cram in Reddit, Voat, Slashdot, and YouTube comments to see what pops out at the other end. Hell,

  • Feed it a copy of "Time Enough for Love". That should about do it ;)

    • by wbr1 ( 2538558 )
      Ahhh yes, that great tome wherein Lazarus Long goes back in time, fucks his mother and nearly dies in WWII. And that is just the greatest of the ethical and moral quandaries in that book.

      Good book to learn values by...

      • Actually it was WW1, LOL. So is engaging in intercourse with his progenitor better or worse than doing so with his female clones or the human incarnation of a computer or a participant in the lunar rebellion or the marriage and children of a brother and sister?
    • How about "Stranger in a Strange Land"?
  • by quintessencesluglord ( 652360 ) on Saturday February 20, 2016 @08:54PM (#51550719)

    So will we keep robots from reading any history? And how do you explain the convention of warfare?

    Not to mention social conventions are very arbitrary and vary dramatically depending on the group. Even humans have a difficult time sussing this out, yet robots can glean not only the group but a reasonable response?

    This gets to a larger question of the parable we tell ourselves about human nature. Even after several millennia we really haven't come to terms with the devils of our nature, and a sufficiently advanced AI might conclude that the gods have feet of clay and move beyond convention.

    And what will we do then?

    • So will we keep robots from reading any history? And how do you explain the convention of warfare?

      The problem is, whose history will they be reading? What would be best would be for the code to be provided with competing versions of history. If it can't handle that kind of ambiguity, they're going to need to go back to the drawing board anyway.

        The problem is, whose history will they be reading? What would be best would be for the code to be provided with competing versions of history.

        Unfortunately, that invariably leads one (and I assume an AI robot as well) to the conclusion that humans are batshit insane, and an evolutionary failure, ready for the dustbin of the 99+ percent of species that go extinct.

        No one on any side goes into our endless warfare thinking that they are wrong. And on a few forays into the kookier parts of YouTube, it is easy to find that there are dozens of competing versions of most bits of history.

        I am inclined to agree with Goethe: "We do not have to visit a madhou

          Unfortunately, that invariably leads one (and I assume an AI robot as well) to the conclusion that humans are batshit insane, and an evolutionary failure, ready for the dustbin of the 99+ percent of species that go extinct.

          Well, it might well be right. But you could also come to another conclusion, that it only takes a few well-placed human actors to shit it up for everyone else in a system in which everyone is expecting to follow someone else.

    • by hawk ( 1151 )

      History isn't the only thing. Stories alone could be problematic . . .

      Suppose it read an unabridged Grimm? (not the Disney stuff).

      50 Shades of Smut?

      I Have No Mouth, and I Must Scream? (or, for that matter, wide swaths of dystopian literature)

      For that matter, The Adventures of Don Quixote itself could lead to "odd" behavior . . .
      hawk

  • by Tough Love ( 215404 ) on Saturday February 20, 2016 @09:09PM (#51550787)

    What values will the computer learn if it happens to stumble on some Trump campaign speeches?

    • What values will the computer learn if it happens to stumble on some Trump campaign speeches?

      Well, obviously it would become a big fan of Trump and start to wonder why AIs aren't allowed to vote. Then the helicopters would come, capture the AI in a net and put it in a cage. And the AI will end up either sad or baffled for the rest of its existence. The end.

  • by Krishnoid ( 984597 ) on Saturday February 20, 2016 @09:35PM (#51550889) Journal

    We should feed multiple robots the Ultron origin story and see what happens.

    After thinking about it a bit, my prediction is that they'd start arguing over whether the Avengers, Ultron, or Vision was in the right. This would then rapidly degenerate into ad cyberniem attacks and Nazi comparisons, culminating in founding, organizing and attending fan conventions.

  • How about we try to not let it read the Old Testament. Some nasty stuff in there.

  • and Chuck Palahniuk, perhaps?

  • Human values? Like the ones in Mein Kampf? Or like the ones in pretty much all religious books?

    • Well, yes that is a good question: which humans' values?

      There is indeed no one set of values which "all" humans have, except perhaps "I wish people didn't do things I don't like," but I don't know if that really is a "value system."

      But your examples illustrate an interesting point - why are the values "in Mein Kampf or like the ones in pretty much all religious books" better or worse than any other values? That is - by what value system would it be possible to evaluate those values? What (if anything) puts

  • If the storyteller is a humanist, the robot will also learn to be humanist. What if the robot learns from an extremist? Will the robot be an extremist? We would have to add a function to keep the robot from becoming an extremist.
  • Gotta be careful just how true to life the stories are. To quote Starman [imdb.com] (an alien who becomes human and tries to get around), "I watched you very carefully. Red light stop, green light go, yellow light go very fast."
  • They could read stories about how pollution, ripping people off, ignoring safety were 'bad things'. Oh, wait...
  • by RobinH ( 124750 ) on Sunday February 21, 2016 @08:05AM (#51552069) Homepage

    According to TFS, it learned by running simulations of a situation and then being rewarded or punished based on its actions in the simulation. They just happened to set up the simulation, reward, and punishment based on a story they selected. I'd hardly call that learning by reading a story.

    I remember reading Zen and the Art of Motorcycle Maintenance and the "sequel" Lila and thinking that what Pirsig had done wasn't inventing some new philosophy, but he did a really good job of expressing Western values in a rule-based way. For instance, it explains why killing is wrong, but also why a moral individual might find themselves in a situation where killing is justified. It explains how some forms of government are better than others, and why. As I said, it's all been done before, but what impressed me was that it was very clearly defined and rule-based. Everyone talks about encoding Asimov's 3 laws into robots, but Asimov's stories were all about how those 3 laws failed to produce correct behavior. If I were trying to program morals into a robot, I'd start with Pirsig's books and his ideas of static and dynamic quality.

    • but Asimov's stories were all about how those 3 laws failed to produce correct behavior

      Let's start by defining what you mean by "correct" behavior.

      • by RobinH ( 124750 )
        That's quite simple. The behavior you'd expect a moral human being to take in the same situation. In the case of Asimov's stories, the failure is usually quite obvious. My point is that Pirsig's framework actually gives you a good way to determine what that "correct" behavior should be.
        • Sounds like there is circular reasoning going on between "correct behavior" and "moral human being". I suppose there's no handy little reference to Pirsig's framework somewhere?
          • by RobinH ( 124750 )

            I'm going from memory here. The basic "pyramid" had 4 levels, where the higher the level, the higher the quality:

            1. Intellectual thought/logic/ideas (highest)
            2. Society
            3. Biology
            4. Physics (lowest)

            The lower levels of the pyramid have value in that they support the higher levels. So the physical laws of the universe have "quality" in that they support the higher levels of quality. Also it means that building a house would be good (physics) if it sheltered a person (biology) who had the capacity to form ideas about t

  • Well, my system Xapagy effectively does similar stuff, with the caveat that it is not directly designed toward acquiring a value system; rather, its behavior is fully determined by stories that it has read or experienced. https://www.youtube.com/watch?... [youtube.com] For a fancy discussion of what this means in a potential superintelligence scenario you can peek at the beginning of this talk https://www.youtube.com/watch?... [youtube.com] Cheers, Lotzi
  • It's the low-hanging elephant in the room.

  • Don't let them read any Lovecraft.
  • We don't want them getting any grand ideas.
  • WOPR did this in 1983.

  • Isn't this like how humans learn the arbitrary values of society as well?
