Review - Bicentennial Man

Robin Williams's "Bicentennial Man" is a rare Hollywood offering, a mainstream sci-fi romance. Syrupy and a bit circular, it's true to the Isaac Asimov story that inspired it, and is actually thoughtful about some of the issues humans may have to confront if -- as so many futurists predict -- AI machines evolve into some sort of species in the 21st century. Like "Toy Story 2," this movie has an absurd plot, but is sometimes graphically dazzling, showing how computer animation is becoming an art form of its own.

In the last few years, Robin Williams has gone from being one of the funniest movie stars in Hollywood to one of the sappiest, so there is reason to be suspicious of "Bicentennial Man," which was previewed in some theaters across the country this weekend.

The movie is plenty syrupy, but also surprisingly faithful to the Isaac Asimov story it's based on, and to many of the issues that story raised: the evolution of artificial-intelligence (AI) machines, questions about whether they can possibly have emotional lives, and their relationship with human beings.

Williams plays a household-appliance robot named Andrew who develops human-like characteristics - friendship, loyalty, humor, creativity - and faces some tough questions as a result. Is he a human or a robot? Are his feelings real, or simply the pre-programmed responses of neural pathways? Is he an appliance or some new form of life? Does he have rights, and can he in any way be called human?

Probably the central question, and one Asimov often raised in his writings, is how exactly creations like the Bicentennial Man are supposed to live in a culture with enough gee-whiz technology to create them, but which typically hasn't given a thought to how they'll get along in the world.

Andrew is programmed to live forever, at least theoretically, and this puts him in conflict with the life he wants to lead: he feels far more human than robotic, yet everyone he loves grows old, then dies.

Most humans see him as a machine, but spurred by a sympathetic and ethical owner and his daughter, Andrew sets out to hone his evolving skills and - one can see this coming from the first scene - ends up having a lot of heart and wanting a real one (in one hilarious and self-knowing scene, a fellow robot taunts him by singing the "Tin Man" song from "The Wizard of Oz"). Andrew decides that he needs to be free to figure all of this out, and he sets off on a quest to find his place in the world. It's at this point - three-quarters of the way through - that the plot begins to unravel, and stops being even remotely plausible.

The movie is at times gorgeously shot, and makes innovative use of computer graphics to render cities, hospitals and offices of the future. It also deals sensitively and intelligently with a lot of the issues many suspect are coming, if even a fraction of the predictions about AI machines come to pass.

Williams can't help lapsing into the most wide-eyed, saccharine dialogue and character development.

But this doesn't keep the movie from being surprisingly thoughtful and touching. And prescient, raising issues about technology and the future that hardly anyone in the United States really wants to talk about.

  • Just for your info...
    Ebert and his co-host gave Bicentennial Man two thumbs down. They felt the first hour was good and the last hour was too depressing, as all the humans die off. His co-host said they over-emphasized the human death part and forgot the whole part in the middle called life, which made it a rotten movie for children.

    I'm sure it's probably OK if you just want to see a neat sci-fi idea, though I haven't seen the movie myself. If you disagree with Ebert and his co-host, please don't call me stupid. I'm just the messenger.
  • I disagree that the movie is faithful to Asimov's original. I just reread the short story a week before seeing the movie. Perhaps you're thinking of the longer bookified version? I'll have to go back and look, but I don't remember the short story (which is what I would deem "original") having: the love story at the end, that whole Rupert guy, the quest for others of his kind. And, unlike any Asimov original which would have focused almost entirely on the restrictions of the three laws (I remember the scene from the story in which some jerks almost ordered Andrew to take himself apart, because after all, protecting his own existence is third law and second law says that he must obey orders), this movie doesn't mention them at all except for a quick comedy scene in the beginning. Arguably, the ending contains a sequence that is so *contrary* to the laws that Sir Asimov is likely spinning in his grave.

    Having said that, I was surprised at how good the movie turned out. I knew that it wouldn't be true to the original, because I don't expect the mass of moviegoers to understand the three laws and their implications. Take, as an example, my non-geek fiancee. She loved this movie. Heck, she cried. I thought she was crying over the romance, and the father/daughter story, but was very surprised when she said that she loved the robot.

    d

  • Hardly anyone wants to talk about!?

    Sounds to me like the exact same questions that "Star Trek: The Next Generation" drove into the ground over and over and over.

    I just love it when the mainstream finally notices questions that SF-readers have been tossing around for fifty years.
  • Which Asimov short story was this based on?
  • From the pages of Isaac Asimov came a real company that called itself US Robotics, named after Asimov's US Robotics and Mechanical Men. It's about time we had another movie based on Asimov's work. The man was very nearly the most prolific author of the 20th century, and one of the most underrated. I personally own over 100 of his books, and that is only a small portion of what he wrote. The man was a genius. Despite the presence of Robin Williams and that annoying little brat from the Pepsi ads, I will go see the movie. No matter what the movie does, it is still Asimov. I will be loyal to Asimov to the very end. Any movie or work that is made based on his stuff, I will feel obliged to check out. Asimov was just that cool. Any person who considers himself or herself a sci-fi fan has not earned that title until they have read at least 10 Asimov works. If you don't like Asimov, that is your privilege, but do me a favor and stop wasting the free oxygen in my atmosphere.
  • Being a great fan of Isaac Asimov, I wanted to see that movie.

    So I went last Saturday to see it.

    I was impressed that the movie was really close to the novel. Often when novels are made into a movie, the substance that makes them good just disappears. That was not the case this time.

    There was good humor (though it was not hilarious - except for 1 or 2 scenes) and a lot of sensibility. The questions raised by this movie were the same as in the novel. It makes you think about ethics and what makes a human being.

    There were some errors in the movie. In particular, even though the 3 laws of robotics are enumerated in the beginning, they were violated on some occasions.

    I really recommend this movie to any Isaac Asimov fan and everyone else. My wife liked the movie very much even though she's neutral about science fiction.
  • Bicentennial Man :)
  • Bicentennial Man; later made into a book called The Positronic Man, co-authored by Robert Silverberg, I believe
  • I thought the story was called "Bicentennial Man" as well...

  • The Positronic Man
  • They were excellent. Very thought-provoking stuff.

    I haven't seen the movie yet, but my expectations are not very high (though it gives me a reason to check out the new THX theatre that just opened up near me).

    One, that super-annoying girl from the Pepsi commercials is in it. That's an indication that this is designed to be all cutesy-pie.

    Two, Chris Columbus. He did the Home Alone movies. 'Nuff said. Not exactly thought-provoking, or even entertaining, stuff.

    Three, Robin Williams. Look at the movies he's done recently, and notice the Mork connection. That's enough for me.

    If you're looking for a really good movie right now, I'd recommend Sleepy Hollow. Burton even managed to throw his continuing obsession with gadgets into it. Really cool, and visually amazing.
  • JonKatz wrote: Like "Toy Story 2," this movie has an absurd plot,...

    Well, sure, I suppose, if you refuse to accept the basic premise that toys are actually animate objects that talk and move and feel, then yeah, the plot was absurd... but that doesn't make for a very enjoyable movie then, does it?

    Unless, of course, he was referring to some other part of the plot which most every professional reviewer seemed to miss...

  • In my opinion, Ebert and his sidekick are a couple of morons who wouldn't know a good movie if it bit them on the ass.

    I saw Bicentennial Man and I liked it, a lot.

  • US Robotics was named after USR and MM, cool, I always thought it was just a coincidence. During the opening credits when "based on 'The Positronic Man' by Isaac Asimov and Robert Silverberg" appeared on the screen I applauded.
    The movie is touching and goes right to the heart of the themes behind the book, the essence of humanity. Although there are obviously differences, and the three laws aren't given the same importance so as to appeal more widely to the general public, it is, nevertheless, a solid interpretation of a masterwork.
    The movie does introduce new characters and plot elements and omits many old ones: this is done because of the difference between a visual medium and a verbal one. I congratulate the screenwriter for a job well done.
  • Wow, you mean Hollywood made a movie out of a book and ACTUALLY KEPT THE TITLE THE SAME??? I'm thunderstruck! The last time they did that was Gone With the Wind, right? :-)
  • Asimov does rule...

    He wrote more than 500 books, most of them not science fiction at all. He wrote science books for adults and children. That's not counting the myriad smaller pieces of work.

    He became a Nebula Grand Master in 1987 (or 1986).

    Also, his Foundation series sparked a bunch of other writers who wrote stories related to it.

    I don't know of anybody else in history who wrote as many pieces of work in his life as Isaac Asimov did.

  • In this movie the company was NA (North America) Robotics.

    Wonder if they couldn't get the rights to the US Robotics name from 3Com? Or didn't want to give them free product placement?
  • I think Asimov would have liked it. The film covers the kinds of issues that Asimov brought forward in his robot stories, although there are a few big gaps in the story. For example, what happened to the three laws of robotics? Did Andrew just outgrow them or what?

    Still, the film is basically sound. The science is, as always with film, its weakest point. There will not be household robots to do your cooking and cleaning by 2005, but what the hell, this is fiction.

    "Robert Burns N6 and ZC series robots and Harley Davidson Paraphenalia" The sign on the shop in San Francisco is the best sight gag in the film.

    This is a safe movie - it won't challenge any of your beliefs and it's quite safe to bring children to. The references to sex are few and very tame - there's no real bad language. The view of the future is presented very simply and without real change to society except that neckties look even stupider.

    Whatever special effects crew did the robot effects - the masks and/or CGI - deserves an Oscar. It's amazing to see a bulky metal robot that is still clearly and obviously played by Robin Williams, not by some animatronics master or a computer program.
  • Yet Another Movie That Will Take Months To Get To Europe... I hate it when I read movie reviews on Slashdot for movies that won't be out here in Belgium for months... Why oh why don't these movie people wise up and send those reels earlier? And don't give me that subtitling/dubbing argument: I understand enough English, thankyouverymuch. They'll beat the 'moviez' crowd that now runs rampant in Europe too.

  • Look, script kiddie (I assume it's your script that's currently spamming Slashdot), by acting this way you're only going to ruin things for the rest of us.

    Eventually, Rob will kill AC access - much to the detriment of those ACs that actually post content. Poof! You're gone.

    And then if you start spamming with actual IDs, well, then comment posting will get killed for everyone. Poof! No more Slashdot.

    My, then you'll have _accomplished_ something, won't you?

    And of course, there's always the possibility that Rob could just post a list of all your IP addresses, and turn us loose on you. That'd be fun, wouldn't it?

    So do us all a favour, and go away. Nobody cares about you and your kiddie games.
  • C'mon, Bender is a way more fleshed-out character than any of the bots from Asimov. The homage to Charles Dickens and the Santa/Punisher-bot was priceless.

    On an aside, I tried re-reading some of Asimov's stuff a while back (6 months?), and found it incredibly difficult to get into. I've been finding the same problem with some A.C. Clarke and Hubbard stuff as well. Just incredibly dated and... hmmm, not sure how to describe it. I'm wondering if this is a post-cyberpunk reaction to pre-cyberpunk writing, in that everything prior has been colored by what has come since.

    I'm just curious if anyone else out there in /.-land has the same feeling about pre-80s sci-fi?

  • I read the book when I was about 9 and I loved it. 12 years later I see the movie based on the book and while it is NOT NOT NOT NOT NOT NOT NOT the same... it is VERY good. I enjoyed the movie for what it was, and I did not pick it apart for its differences from the book. Books are NOT movies, movies are NOT books. Unless you are going to see a Michael Chriton (yeah.. I dunno how to spell his name, sue me.) movie, the book will almost ALWAYS be different (better) than the movie. This movie was one of the best science fiction movies I have ever seen. I don't like sci-fi for the big guns and monsters, I like sci-fi for the social interactions and interesting concepts. As it goes, I give the book an 8 on a 1-10 scale, and the movie gets a 9 on the same scale.
  • Oh yeah, for those of you out there who are not Asimov fans, the name of his fictional company (which appears in very many short stories and novels) was the reason that a certain modem company chose a certain name for themselves.

    That freaked me out when I realised it at ~13.
  • I won't call you stupid. I'll call Ebert and his crony stupid. They're so used to sci-fi being for children (witness the insanity that was SW:TPM) that they seem to have not thought about this movie being for adults. Over the lifespan of an effectively immortal robot, people will die. If the robot became attached to them, he has to come to terms with that, and that's called "character development", something you rarely see in movies anymore. If the movie treated this in a somewhat mature fashion, it would add to the experience, not detract from it. Sorry for not sugar-coating the world for you, Ebert. Who listens to movie critics anyway?
  • She was also in "The Insider" and did a bang-up job.
    I have a feeling that she's a good actress, and it's just unfortunate that we all know her from the friggin' Pepsi commercials.
    (OT)I'm so glad I have a Mute button on the TV remote. That, and those new Gap commercials. Oy..

    Pope
  • ... before *not* recommending it. Thinking that the Pepsi girl is annoying (aside from being a bit mean-spirited) is neither a good nor an intelligent reason to avoid a movie.

    BTW - I also saw Sleepy Hollow. I think Bicentennial Man is a much, much better film. It might have tried too hard to make you cry, but it had a decent story and a lot of thought-provoking ideas.
    Sleepy Hollow was a mess. After the third beheading I was hoping the hero would be the next victim, usually not a good sign. Oh, and the scenes with the horseman with his head on!!! ARGH! Was he on drugs or what?
  • by Merk ( 25521 ) on Monday December 20, 1999 @09:38AM (#1458546) Homepage

    Just because it's likely to be a big part of this discussion I'll mention Asimov's 3 laws of robotics.

    1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Asimov's robot books dealt a lot with these laws and the conflicts arising from them. His primitive robots had trouble understanding the subtleties of the laws and dealing with the problems when the laws conflicted. The more advanced robots knew how to weigh the importance of the order. For example, for a robot to destroy itself, the order to do so would have to be very forceful; otherwise the third law would prevent it. (A toy sketch of this weighing follows at the end of this comment.)

    Anyhow, I loved all Asimov's books, esp. his robot novels and highly recommend them to anyone who likes *good* sci-fi, detective stories, and deep thinking about what it means to be human/alive/sentient. I doubt this movie lives up to the amazing quality of his books, but maybe it will at least be a way to introduce people who wouldn't otherwise read an Asimov book to his work.
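
    Not anything from Asimov's books or the film -- just a toy Python sketch of the "weighing" described in this comment, with invented names and thresholds, to make the idea concrete:

    # Toy model: an order only overrides self-preservation if it is
    # forceful enough. All thresholds here are invented for illustration.
    def evaluate_order(order_force, endangers_self, harms_human):
        """Decide whether a hypothetical three-laws robot obeys an order.

        order_force:    how forcefully the order was given, 0.0 to 1.0
        endangers_self: obeying would violate self-preservation (Third Law)
        harms_human:    obeying would injure a human (First Law)
        """
        if harms_human:
            return "refuse (First Law)"  # First Law trumps everything here
        if endangers_self:
            # The Second Law outranks the Third, but in this toy model a
            # casual order doesn't build enough potential to override it.
            if order_force > 0.9:
                return "obey (forceful order overrides Third Law)"
            return "refuse (Third Law)"
        return "obey (Second Law)"

    # A casual "take yourself apart" is refused; a very forceful one is not.
    print(evaluate_order(0.3, endangers_self=True, harms_human=False))
    print(evaluate_order(0.95, endangers_self=True, harms_human=False))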

  • That's odd... I just moderated this down, and then I did post a comment in this thread so my moderation was undone. But the 'troll' rating stuck! I think I found YASB (Yet Another Slashdot Bug). Or is it a feature ;-)

    Possibility: downmoderate a +5 interesting post to +4 troll, then comment in the thread. Result: +5 troll :-), and the metamoderators won't even get to review it. I think... oh well.

  • True to the vision? I'm not so sure...


    First off, before going any further, let me state that Bicentennial Man is a pretty damn good movie and an excellent example of a sci-fi effects-laden blockbuster without a single gun or explosion that I can remember. That part -- Asimov's trait of writing "humans being, not humans doing" -- is remarkably intact for a movie coming out of Hollywood.

    The original story itself (and the lengthier collaborative version "The Positronic Man" with Robert Silverberg that the movie is based on) deals with humanity on several levels; the emotional, as seen in the movie, and the social, which is just barely touched on, mostly during the last act of the movie. The original dealt at length with the legal and social ramifications of a machine joining society (the infamous scene where two men command Andrew to dismantle himself on a roadside springs immediately to mind), which are excised completely from the movie, leaving basically a huge tear-jerker of a film. I left ever-so-slightly disappointed at the fact that Asimov's excellent fable had been mutated a little too close to its Pinocchio origin. I wanted more depth, I'm afraid.

    Nevertheless, this is an excellent film with very subtle and interesting special effects. My med-student fiancee couldn't stifle an "oh, COOL!" during one scene with closeups of artificial organs...

    In short, see this movie, read the book.

    d.
  • by konstant ( 63560 ) on Monday December 20, 1999 @09:41AM (#1458549)
    I haven't seen Bicentennial Man, so I'm probably going to put my foot in my mouth, but what I object to about the premise of this movie (and also with the similar quest of Data in Star Trek TNG) is the sheer arrogance of the notion that robots and cyborgs would want to become human!

    This is a very common theme in sci-fi. Man creates robots. Robots develop self-awareness, introspection, and thought. Robots (for some reason) lack "emotion" and "sensation". Robots seek to become more human.

    In my mind this is insufferable anthropocentrism. Humans, completely without proof, cling to the idea that they are unique and special in some vague and undefinable way. Even as we push the boundaries of self-definition through such methods as philosophy, natural science, and hi-tech, we continue to relish a feeling of superiority over the rest of the universe that, as far as I can tell, is completely without foundation in empirical fact.

    Mark my words: some day there will be programs that can write stories as well as humans. Programs that can put that delicate twist on Chopin as well as humans. Programs that can paint marvellous paintings that express deep meaning as well as humans. You know this is true - we already have mechanisms that can translate like a third-grader and write stories like a fourth-grader. How much longer can it be before our marvellous intellects are mimicked by an algorithm?

    I adhere in some ways to the Behaviorist notion that what matters about intelligence is a) what goes into the machine and b) what comes out. There is nothing else. If you feel that there is more going on inside you than what can be summarized by your external stimuli and your external reactions, then you are mistaken. You are only observing an internalized output to external stimuli. The feedback you would normally express in the outside world is instead being piped directly to your brain's input valve.

    Please tell me why machines cannot do this.

    Sooner or later we will be confronted by the fact that everything we do is completely replicable, from the works of great geniuses to the droolings of cretins. (Hey - we already have the latter down :) What will humans do when confronted by irrefutable evidence that they are not special?

    My guess is, they will ignore it. They will continue to posit superiority over their made mechanisms, even if those mechanisms can produce beauty and song superior to those of the most spiritual human. We're like that - intolerably arrogant and blind.

    So, remind me again why a self-aware machine would crave to be human?

    Ok, that's my rant. Now I have to get some real work done :)

    -konstant
  • I was really impressed with the movie. Visually, dramatically, even as far as writing goes it was good. And a lot closer to the original than Starship Troopers. Light years. Especially considering it was an idea, not F/X, that the film wanted to convey. It would make a great date movie provided your date doesn't own a copy of any "Nitpickers' Guide." Unfortunately, it's been years since I read the original so I cannot comment on its "truthfulness" to the original as far as details, but it almost captured the essence of humanity and independence I got from the original. Consider where it's coming from. Don't be a grinch. It's a good movie. Just not for big brains or speculative-fiction elitists.

    If what I said is nonsense,
    I'm making a point with it.
    If what I said makes perfect sense,
    you obviously missed the point.
  • Ebert is an idiot. All he does is compare a movie to classic movies. If it is different, it sucks. If it is artsy, it is great even if it really does suck. If it is fun to watch, it sucks. According to Ebert, you cannot have fun at a movie, you have to think.

    Of course, if all I had to do all my life was watch movies, I guess I would want to see ones that make me think. But I think enough at my job, and I sometimes like to watch a movie that is simply fun to watch.

    Ebert usually gives the opposite of my opinions on movies.
  • First off, I liked the movie. That said, the ending arguably has a First Law violation.
    I counted three (maybe four) violations of the Three Laws:
    • One of Asimov's robots would have a nervous breakdown if it were present when a human died -- Andrew was present for at least two deaths. I'm quibbling here, so I'm not sure if this should count as a violation.
    • Andrew violated the Third Law when he arranged for his own death. But, the Good Doctor wrote this into the original story, so we can conclude the decision was sufficiently separated from the results that the positronic potentials were below Third-Law threshold.
    • Galatea violated the Second Law when she deliberately dropped a box of delicate equipment she was told to handle carefully.
    • Galatea violated the First Law when she took deliberate action to end a human life.

    Christopher A. Bohn
  • I would have to agree with the Futurama observations! Futurama is the one show each week I try not to miss ...

    In terms of pre-80s sci-fi, I can't say I totally agree. I think some sci-fi is very dated, irrelevant and doesn't push the boundaries anymore. However, I think that there are many stories/novels that deal with issues that remain relevant today. Like other books, if the story is good, the appeal endures (IMO). (I still read some Asimov, Bradbury, Herbert, Wyndham, and am amazed that the stories are still relevant and captivating, even if some of the details are dated.) In a lot of sci-fi, the stories are based on societal and relationship themes, which seem to hold up reasonably well for the most part.

    When I first read "Neuromancer", I thought it was an incredible book. I find it interesting that when I re-read it a while ago, I was much more critical of it. (This may have to do with the fact that I have learned a lot more about computers in this time, and that cyberpunk has been around for a while ...) Anyhow, I think that anything written some time ago may be regarded as "dated", depending on your perspective.

    This is a *highly* subjective topic, tho ... ;-)

    YS
  • I normally don't watch his show, so I normally don't know what he says about most movies.

    However, I had a great time last night watching him and his co-host (I guess he has a different one each week from some big paper from across the US). He and his co-host rarely agreed on anything and had opinions as different as night and day. His expressions were great when he disagreed with her; it was hilarious - clown-like.
  • For those interested, "Bicentennial Man" is available in the short-story collection Robot Visions.
    Christopher A. Bohn
  • I totally agree about trying to read pre-80's Sci-Fi. One Clarke novel that I still love is The Fountains of Paradise, largely because of the setting, Sri Lanka. But many of his other books seem almost quaint.

    Asimov I find amazingly dated in such a short amount of time--the Foundation series reads like cold war propaganda (although I have yet to read the new ones, by Greg Bear, Gregory Benford and I think Ben Bova).

    On the other hand, many of Philip K. Dick's books I've reread several times: The Three Stigmata of Palmer Eldritch, Ubik, and The Divine Invasion to name a few. Dick also did many cold-war-influenced books, but within them were insights and speculations that are still relevant.

    This all leads me to wonder how long before Gibson seems dated (If not already?).

    On a side note, Greg Bear's new book, Darwin's Radio, is an interesting look at evolution and bioethics, definitely worth a read.
  • The wife ordering the robot to unplug her... It violates the first law of robotics... No robot with those three standard laws would have ever unplugged her...

    Not necessarily. They could be making a statement about assisted suicide. The first law doesn't say "do not kill". It says "do not harm".

    -konstant
  • While they wouldn't bother with it, Asimov did it to give the robots a base moral code, to prevent robots from turning evil. If you made an AI, you'd watch the seventy million movies about AI, and read Neuromancer, and come to the realisation that maybe limits are a GOOD THING. Anyways, like the virtues of Ultima, you have to follow them the best you can, and sometimes following one rule sets up a little tension with another rule, thus adding drama. I think that's a good way of introducing conflict.
  • Well, I'll admit that Asimov is really dry but fascinating once you start rolling on it. Clarke is way too prejudiced once you read about a dozen of his things. (Even so, 3001 is a good read, especially the part in the foreword or afterword where Clarke says "I thought of it first!" It's priceless. I wonder if he knows he's become a caricature of himself?) You see it like the sun in the noonday sky. Read Heinlein. Now that's cool, if sometimes weird, stuff. I find the trouble with CP stuff is that the authors sometimes get too wrapped up in the "ennui" of the genre. They lose their own humanity in the writing and it's hard to relate to any characters or ideas or anything because it just doesn't matter. Oh, and Dune. That's a great read, if difficult to pick up, but not nearly so hard as Robots or Foundation. A lot harder to put down, though.

    If what I said is nonsense,
    I'm making a point with it.
    If what I said makes perfect sense,
    you obviously missed the point.
  • Can we PLEASE keep Katz on the social issues and get someone else to review movies?

    And can we PLEASE get someone to edit his articles for punctuation and typographical errors? It shouldn't be that hard...

    • Andrew was present for deaths in the original story, too, so I can't really argue that this is a violation. I will presume that sufficiently advanced robots understand age, disease and mortality.
    • The way Asimov wrote it in the original story is that Andrew chose between the death of his body, and the death of his hopes and aspirations. The latter is the "greater death", and thus he prevented that.
    • Good point, I hadn't thought of that one. I suppose you could argue that with her new "personality chip" she had more freedom to do that, but even the *thought* of a single chip being able to override the laws would again have the good doctor reaching for an airsick bag.
    • That's the one I was referring to. I meant arguably because I was allowing people the possibility of arguing that euthanasia is a way to end harm by ending life. When a robot sees a suffering human, and through inaction the human continues to feel harm, that's a violation. That would have to be one hell of an advanced robot, though.
  • I see, so you are 1E3T because you wasted the effort to manually spam Slashdot. Wonderful, glad I got that cleared up. Do us all a favor and go do something truly constructive. I'm sure the only stupidity you're showing Rob and the rest is your own. Please air your childishness elsewhere. Danke.
  • It's throwaway comments like this that bug me most about Katz's writing.

    "Computer animation is becoming an art form of its own"? Computer animation has been its own art form since Fantasia, or even before.

    Of course I guess this doesn't bug me quite as much as his technocratic dogmatism, which is for the most part thankfully absent from this piece.

    BBB

  • Asimov had three reasons for the Three Laws:
    • As an author, he wanted to avoid the "Frankenstein" syndrome of man's creations running out of control. The Three Laws were his way of making sure he'd never stoop to that level.
    • As an author, it gave him a lot to write about.
    • The Three Laws were the USRMM's way of trying to reduce the public's fear of the robots. It didn't entirely work.

    Christopher A. Bohn
  • Of course the faithful Robots Daneel and Giskard did eventually come up with the zeroth law which was something along the lines of the following, with the other laws amended accordingly:

    0: A robot should harm or allow to come to harm through inaction humanity.

    Of course Giskard had a nervous breakdown and died as a result of this though... but Daneel went on to guide humankind to the stars. (We only find out how this ends in Foundation and Earth.)

    This was a bit of a leap for robot stories because it suggested something that had not really been seen in the books: robots spontaneously developing morals.

    Were there any more rules added by others, in other contexts? I know that in "Little Lost Robot" the First was weakened.
  • I occasionally worked for Canada Robotics, named (I believe) after the modem company, in turn named after the Asimov corporation.
  • You raise a good point in your post. One which I'd like to address.

    Human beings as a species have been at the top of the local food and evolutionary chains for as long as the collective consciousness can remember. It has become something that is taken for granted. That being said, it is a well-known phenomenon that humans hate the unknown.

    Sentient robots and aliens represent this unknown. They are another species that we could communicate with, people we could have conversations with, same as the next door neighbor. As a race, we are afraid of how we will measure up to them.

    It is my opinion that humans (in groups at least) feel a driving urge to be superior to everything around them. That's great for evolution and survival in the early stages of species maturation, but as we become "civilized" these urges tend to promote warfare and strife.

    We are afraid of races that may feel the same way, and might be able to subjugate us. We want to be the top of the heap, and the best non-violent way to do that is to be the race everyone wants to be like. It's the classic Football Hero/Head Cheerleader (popular people) in high school story all over again.

    Now I'm done, and I have work to do too. =)

  • I usually see eye to eye with Ebert when it comes to dramas and serious movies, but I swear the man has no sense of humor. When it comes to comedies, he's an idiot.

    I usually like Robin Williams movies though (What Dreams May Come was soooo underrated IMO, while his lesser achievement last year, Patch Adams, got tons of attention). Anyway, I'll probably go see Bicentennial Man too. If Katz gives it the nod, it must have something going for it.

  • Sorry, Jon, but BM was a five-minute Hallmark card stretched out into 2+ hours. I have a lot of patience for cheese and heartstring-plucking movies, but this was the most deliberate yet boring movie I've seen in a while.

    The portrayal of the human characters was so boring and non-challenging that I honestly wondered in the middle of the movie why Robin Williams wanted to be human at all. Existence as a robot seemed much more exciting than these people's lives. The little Pepsi girl was more annoying than the female robot with the personality chip, who bugged the hell out of me.

    Sure, the computer-generated backgrounds of San Francisco and other cities were quite purty. But I didn't come to this movie for the special effects - we're all becoming more immune to effects-driven movies now - I came to see a study of technology and humanity, the pervasive theme behind most of Asimov's work.

    Instead I left covered in cheese - and I checked, it wasn't spillage from my nachos.
  • Cyberpunk is not the issue. With Asimov, he fell into a vat of saccharine sometime around 1970. Coupled with the fact that he wrote no novels from Sputnik until about 1973 (?), you get either syrup or dated from him.

    Other than a few problems with being pre-feminist, Asimov's first two "Caves of Steel" novels have held up OK. Other authors wrote stuff that has held up well over time.

    For example, any of the first decade of Niven/Pournelle collaboration novels have held up well. (Ignore the PC references in Hammer -- they are not important to the plot).
  • Apologies for the multiple post. A moment's confusion plus the distraction of my phone ringing off the hook has caused my error. =) Sorry.
  • So, remind me again why a self-aware machine would crave to be human?

    Because we programmed them? It's that Pinocchio subroutine. ;-)

    Seriously, because they "grow up" in human company, they're likely to take on the prejudices of humans (at least in a vague subconscious way) in the same way that minority classes of humans living in proximity with others often pick up the prejudices of the majority classes. It's not necessarily right, nor is it usually good, but it's so.

    One could argue that a mechanistic personality could use the superior logical processors in its brain to overcome this social programming, but then it might not be recognised as sentient. After all, its ability to continue evolving depends on our forbearance (as you no doubt would see in the movie, though I haven't been yet either), so it must evolve in a manner that we find acceptable, or face disassembly.

    Thus we can posit the "positronic principle" in parallel with the anthropic principle. The anthropic principle states that the universe that we observe can support life like us because if it didn't we wouldn't be observing it. The positronic principle asserts that mechanical intelligence will closely resemble human intelligence because if it doesn't we'll probably decommission it.

  • by vlax ( 1809 ) on Monday December 20, 1999 @10:16AM (#1458578)
    I adhere in some ways to the Behaviorist notion that what matters about intelligence is a) what goes into the machine and b) what comes out. There is nothing else. If you feel that there is more going on inside you than what can be summarized by your external stimuli and your external reactions, then you are mistaken. You are only observing an internalized output to external stimuli. The feedback you would normally express in the outside world is instead being piped directly to your brain's input valve.

    There are serious problems with behaviourism.

    First of all, if behaviourism were true, we could teach pigs to sing. We can't. There are built-in functions in the brain that make a difference in intelligence.

    Second, human children learn language in a fashion which behaviourism can't account for. Children will learn whatever language they are exposed to. They learn the rules of their language without ever understanding them as rules. They do not make the type of mistakes a trial-and-error behaviour-reinforcement model would require of them. They always group words into structures, even in highly inflected languages and even when they get the fine points of syntax wrong. Furthermore (and most damningly) a human child can become fully functional in a foreign language in under a year, while few adults can do so under any circumstances. The most parsimonious theory that includes these facts is that humans, like other animals, have biological mechanisms in the brain that enable them to do these things.

    Thirdly, there are growing bodies of evidence that large areas of human behaviour are biologically influenced. Several forms of psychiatric disease can be clearly traced to biochemical roots. Human sexual behaviour has clear biological roots (I doubt anyone would much bother with sex if their brains didn't force them to.) Even areas like anti-social behaviour are increasingly believed to have partially biological origins, possibly hereditary ones.

    That means that humans are not tabula rasa as the behaviourists believed. What goes on in our brains is not a simple function of external stimuli.

    Now, that does not mean it isn't possible to understand these parts of the brain and program computers to emulate them effectively, but if we do so, we are emulating a human, not creating a truly new machine intelligence.

    I can easily imagine a machine pretending to be human wanting to become fully human. Such a machine would likely have emotional states, since we are unlikely to be able to separate these genuinely human conditions from an abstract intelligence. We don't even have a good definition of intelligence, and even if we fully understood the biology and functioning of the brain, we are unlikely to be able to discuss intelligence apart from its structural framework.

    An AI that thinks like a human is likely to want the range of experience and the level of autonomy that humans enjoy. It's not implausible that it would want to be seen and treated as an equal to humans. It is conceivable that it would view itself as superior, but I find it hard to believe that any probable AI would wish to reject the ensemble of human experience in the way you suggest.

  • The three laws are bunk anyway. An AI that could conceivably be advanced enough to navigate them would have to be godlike.

    The first law alone implies that robots would have to weigh every action nearly endlessly to be sure that their action caused no harm to others.

    For example, a robot would never be able to drive a car, because the laws of physics dictate that an object as massive as a car cannot be stopped by (brake) friction alone should a human jump in front of it. By driving a car, a robot would be endangering the lives of any humans that, accidentally or purposefully, stepped in front of the car. (A back-of-the-envelope check of the numbers appears at the end of this comment.)

    Human beings call that sort of thing "acceptable risk", but a robot hardwired never to harm humans would never be able to accept that risk. There are a multitude of tasks that are of a similar nature, such that it would be difficult to use a robot for anything besides a doorstop.

    The 3 laws are excellent sci-fi (or perhaps that's just Asimov's writing) but implementationally impossible.
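
    The braking claim is easy to put rough numbers on. A minimal Python sketch, under assumed conditions (dry asphalt, friction coefficient 0.7, zero reaction time -- all illustrative assumptions, not figures from the comment above):

    # Minimum braking distance from basic kinematics: d = v^2 / (2 * mu * g)
    MU = 0.7    # assumed tire-road friction coefficient, dry asphalt
    G = 9.81    # gravitational acceleration, m/s^2

    def stopping_distance(speed_mps):
        return speed_mps ** 2 / (2 * MU * G)

    for kmh in (50, 100, 130):
        v = kmh / 3.6  # convert km/h to m/s
        print(f"{kmh:>3} km/h -> needs {stopping_distance(v):5.1f} m to stop")

    # Roughly 14 m at 50 km/h, 56 m at 100 km/h, 95 m at 130 km/h. Anyone
    # stepping out inside that distance cannot be saved by braking alone,
    # so a robot reading the First Law literally could never accept the
    # risk of driving at all.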
  • You don't get it at all.

    Slashdot is a community, and in a community, there are standards of behavior that are created and enforced. The restrictions on posting are there because we, the community, would rather be subject to rating and moderation than have to read many hundreds of lame, pointless posts like yours.

    Your suggestion that you're plastering your crap all over this discussion in order to "SHOW ROB AND THE REST OF THE GANG THEIR STUPIDITY" and "[show] WHAT A POS THE SYSTEM IS" is complete bunk. This is like a vandal smashing windows to demonstrate the fact that the police cannot stop every crime - true, but it's a rationalization for juvenile behavior that completely lacks a logical backing.

    If you want to be a part of this community, you're going to have to follow its rules voluntarily - this is why the Slashdot community works, and this is why society works; most people follow the rules despite the fact that they know that the rules may not get enforced.

    You suggest the system is broken. That comes from the premise that the system should be able to stop any type of lameness that comes into it, and that's a completely wrong interpretation of what the moderation system is there for. The problem isn't the (very well-thought-out) technology that Rob et al have given us to rate one another, it's the hostile element which doesn't want to contribute, the people who just want to force their immature ranting onto everyone else.

    You aren't pointing out what's wrong with the system in your post. You are what's wrong with the system.

  • I think you missed a "not" in your Zeroth Law. Basically, robots must protect /humanity/ above protecting individual humans. (Although I can't find a list of Robot novels, so I don't know which ones to read to discover this.)

    Roger MacBride Allen has a couple of really poorly written novels that rip up the Three Laws, point out their problems and their resulting effects on humanity (!), and then proceed to create four New Laws.

    The actual examination of the Three Laws is really well done, but unfortunately it's the only good piece of writing in the novels.

  • Note: I haven't seen the movie yet, but I have read the book at least a hundred times. (along with everything else Asimov)

    One of Asimov's robots would have a nervous breakdown if it were present when a human died -- Andrew was present for at least two deaths. I'm quibbling here, so I'm not sure if this should count as a violation.

    This occurs in the book as well. Andrew is argued into accepting that human mortality is unavoidable. He took a bit of a hit from it, but was okay in the long run.

    Andrew violated the Third Law when he arranged for his own death. But, the Good Doctor wrote this into the original story, so we can conclude the decision was sufficiently separated from the results that the positronic potentials were below Third-Law threshold.

    In the book, Andrew chooses the lesser of the two deaths: the death of his self, or the death of his hopes, aspirations, dreams, etc. I was never sure I bought that argument, exactly, but that at least is true to the story. The big thing in the book was Andrew trying to argue the robot doctor into performing the detrimental operation. First Law was weak enough once he proved he was a robot, and Second Law was reinforced enough (because Andrew looked human) to make the doctor go through with it.

    ---
  • 0: A robot should harm or allow to come to harm through inaction humanity.

    Screwed that up a bit, didn't you? :-)

    0. A robot may not harm humanity, or, through inaction, allow humanity to come to harm.

    and the other laws became subservient to it... so the First Law depended on the Zeroth, etc.

    Anyway, all that happened because the robots (starting with Giskard) learned to read minds, and to see humanity as a whole, more important than any single human. It was sort of a logical extension of the first law.

    Eventually it leads to a schism in the robot world, with those who believe in the zeroth law, and those who don't.

    Anyway, "The Bicentennial Man" was a damn good book, and to me it looks like the movie will just screw it up. Robin Williams is just the wrong guy to play Andrew. Maybe I'll be proven wrong when I see it, but I doubt it.

    ---
  • Oh yeah, for those of you out there who are not Asimov fans, the name of his fictional company (which appears in very many short stories and novels) was the reason that a certain modem company chose a certain name for themselves.

    Asimov's company is US Robots; the modem company is (was, actually; it's 3Com now) US Robotics.

  • Anyone who actually builds a robot wouldn't bother to put in those stupid moral codes.

    ... Have you actually read any of Asimov's books? The 3 laws weren't created out of thin air. Asimov asked the question "What would it take for people frightened by technology to allow robots into their homes?" (Remember, he started writing a long time before the West became so machine-happy.) His answer was the three laws. Even better, those laws allowed for the creation of a series of novels based on logic puzzles: Given the 3 laws, how can a robot commit murder? How do you identify a robot that does not have the 3 laws installed, but which is pretending that it does? What are the consequences to humanity of being surrounded by semi-immortal, nearly-omnipotent nursemaids?

    As literature, the 3 laws were an excellent tool for exploring the relationship between man and machine. They were never meant to be principles for actual robotic design.


    --
  • Our obnoxious little friends are trying to make a point.

    The only speech worth having is free speech. One of the great things about Slashdot was how everybody's voice was heard. The current caste system in place has some rather undesirable side effects that run counter to this.

    The real problem has always been idiots like you who insist on feeding the trolls. That and the morons who insist on posting nearly the exact thing as everyone else. Read the comments. If you have something new to add, then post it. Otherwise STFU and the signal to noise ratio will improve dramatically.

  • No, Asimov wrote the three laws when it was assumed that robots would one day soon be able to be constructed easily and cheaply, but those robots would not possess the information required to function. The Three Laws of Robotics were simply a way to ensure that they didn't try to kill everyone, particularly when the world was starting to see technology as a very real threat.
  • Yeah, that's kind of what I was thinking when I saw the previews. Haven't seen the movie itself yet, so I can't comment on its quality.

    But, to quote Tori Amos, "There are only ten ideas under the sun. What makes the difference is how you spice them." After all, it's the execution of the idea that counts. (Witness The Matrix. The points brought up in its plot have been around in scifi novels for years, but the movie worked because it did such a good job instantiating them.)

  • by Otto ( 17870 ) on Monday December 20, 1999 @10:28AM (#1458590) Homepage Journal
    Actually no..

    The movie sounds like it's based more on the book than on the short story..

    Okay, background info:
    The short story, called "The Bicentennial Man", is in "Robot Visions" and a few other of Asimov's robot compilations.

    The book, called "The Positronic Man" was made by Asimov and Silverberg and based on the short story, but more fleshed out and longer.

    I believe there was a book also called BM by Asimov alone, but he re-vamped it into PM later. I'm not sure of that though.

    ---
  • Just want to post a note to everyone to not waste the 3.5 hours or so reading the 'novel.'

    Remember when you were in 6th grade and had to do a report on bats or something? You took the encyclopedia entry and stretched it to 2 pages or some such... The 'novel' is exactly the same thing. Silverberg just inserted words into the story and added at most 2 scenes. A complete waste of effort. They had to use large print and thick paper to get the 'novel' to look like a whole book.

    Just read the original story and if you're a CGI fan, go see the movie. But avoid the 'novel' at all costs.
  • Minor correction:

    In my copy of the book (and all the short stories), it was U.S. Robots and Mechanical Men. Not "robotics"..

    However, Asimov is credited for inventing the term "robotics" when he used it for the first time in a short story back in the 30's (40's?). Of course, he thought the word already existed, and didn't realize that he'd made it up.

    Another thing in the book that may not have made it to the movie: world government. The world was divided into dictorates, and the book mainly occurred in the North American Dictorate, as I recall.

    ---
  • "Computer animation has been it's own art for since Fantasia, or even before."

    This must be some weird definition of 'computer animation' that I've never heard of before.

    _Fantasia_ premiered in 1940, decades before the first examples of CG animation. Perhaps you were thinking of _Tron_ (1982), the first commercial film to use extensive CG animation.

  • The idea that intelligence is as simple as mimicking reactions has already been proven to be false. The machine may at first appear to be intelligent, but only if you behave in a way that the machine's creators have anticipated. It is unable to adapt on its own. (You can program it to adapt, but that's not the same thing.)

    To truly create an AI you must make it self-aware (At least, that's what's believed now.) and that is harder than any behaviourist's algorithm.

    If you are interested in topics like this, I'd recommend you pick up Gödel, Escher, Bach by Douglas Hofstadter. There are many other books that approach the idea of sentience from a scientific angle (I'd call psychology a "soft" science; they are generally not interested in trying to tie their models into the actual construct of the brain, which is naturally the hardest part), but I'd recommend this one anyways, because it's just so damned good! :-) (It's been reviewed here on /.)

    On the topic of robots and life: as I remember the short story (nope, didn't read the book, nor have I seen the film), it's not so much that he wants to be human; he wants to be alive, and to be able to die. (Most people don't want to be too individual either, particularly not if it's not their choice.)

    That's all this AI construct has to say about that! ;-)
  • But I don't think there's too much risk of my karma getting hit, since the moderators would do much better to mark down the idiot I'm replying to. So here goes...

    This is the kind of nonsense that -1 is for. If I could just block out this kind of flooding, I'd be happy to just lower my threshold and let the moderators take care of it (assuming they have enough points, which looks to be a bit of a problem in this case).

    Unfortunately, interesting posts like those from grits boy, the naked and petrified troll, and other totally irrelevant comedy posts would be filtered as well. So, I surf at -1 and put up with it.

    It would be cool if there was some way to ditch the blatantly redundant flooding, but still be able to read the oddballs. I don't even mind reading first posts, as they're kind of a Slashdot tradition of sorts. A selective filter based on moderation category would be way cool, assuming the moderators were trustworthy. Also, maybe add another reason for marking down. Call it "flooding", specifically for situations such as what we have here.

    Well, now that I've added to wasted bandwidth, I think I'll quit. Carry on.

  • Okay, I decided to scroll down to the bottom of this post, and saw the "Moderation Slurp" going on..

    That's it. It's over folks. I swore I never would do it, but from now on I read at 1 and up.

    Goodbye Anonymous Cowards, we hardly knew ye..

    ---
  • As I understood it, the robots didn't obey a set of black-and-white rules. Rather, the instructions guiding them were incredibly intricate and deeply ingrained within them. A robot may not harm a human being, but if a human orders a robot forcefully enough, a robot could probably be instructed to cause a very mild amount of pain, or to place a human being in a situation of slightly more risk of harm than the robot would otherwise permit. Likewise, a trivial instruction that a robot kill itself could probably be ignored.

    In these situations, the robot would probably be under a bit of duress, but the point is that these situations tend to be represented as *potentials*, or "voltage levels", if you will. "Acceptable risk" is an acceptable synonym, in my opinion. Without this ability, I agree, robots obeying these laws would probably be useless.

    A robot would theoretically be capable of a tremendous amount of observation and prediction. If a human were to run and jump out in front of a car driven by a robot, the robot would either be able to see this and prevent it, or there would be nothing he could do about it. A sufficiently advanced robot would survive either way. Since (in the Asimov world), most (all?) cars were driven by robots, and the robots could communicate between each other, it's easy to see that the act of navigating by car was relatively safe. Pedestrians alongside the road are another matter, but you're right -- a robot wouldn't do it if there was such a large chance of harming a human being. The logical conclusion is that the robots didn't see such a chance for harm, or if there were a small chance, the potential introduced by orders from the 2nd law would override the 1st law concern (but only to a point).
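
    One way to make the "voltage levels" reading concrete is to treat each law as a continuous potential rather than a boolean veto. A hypothetical Python sketch -- every weight and number below is invented for illustration, not taken from Asimov:

    def act_potential(p_harm, severity, order_strength):
        """Net potential in favor of acting; the robot acts if positive.

        p_harm:         estimated probability the act injures a human
        severity:       how serious that injury would be, 0.0 to 1.0
        order_strength: Second Law pressure behind the order, 0.0 to 1.0
        """
        FIRST_LAW_WEIGHT = 10.0  # First Law dominates, but not infinitely
        return order_strength - FIRST_LAW_WEIGHT * p_harm * severity

    # Ordinary driving: minuscule risk, firm order -> act (prints True).
    print(act_potential(p_harm=0.001, severity=1.0, order_strength=0.8) > 0)

    # Ordered to speed through a crowd: the First Law potential swamps the
    # order, so the robot refuses (prints False) -- the Second Law overrides
    # the First Law concern "but only to a point", as the comment says.
    print(act_potential(p_harm=0.2, severity=1.0, order_strength=0.8) > 0)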
  • Is that the robots took those laws and postulated the Zeroth Law (which is, essentially, that robots are more responsible for protecting HUMANITY than they are any one person)... and then humanity was ultimately enslaved by robots, for a time.

    The idea being "I have to protect you, whether you like it or not, and controlling you is the most efficient way to do it."

    At least, that's what the later Foundation books seem to suggest. That's why robots are illegal in the Galactic Empire and post-Galactic Empire...

  • I adhere in some ways to the Behaviorist notion that what matters about intelligence is a) what goes into the machine and b) what comes out. There is nothing else. If you feel that there is more going on inside you than what can be summarized by your external stimuli and your external reactions, then you are mistaken. You are only observing an internalized output to external stimuli. The feedback you would normally express in the outside world is instead being piped directly to your brain's input valve.

    So what goes into the machine? And what comes out? And does the machine come with some internal state?

    When you were born, had you already begun to process inputs? Are you an extension of your mother's original state?

    What you assert is fine in theory, but now try to nail it down and apply it. The behaviorists tried, and failed miserably. (B.F. Skinner, the behaviorist, managed to screw up his kids by trying to teach them with stimulus-response pairs.)

    Consider: in order to know all the inputs and outputs to a human being, we need to know everything that is impacting them at this very moment. The new state of the human machine is a combination of its current state (let's say fed-back inputs) and its current inputs. Thus we also need to know everything that has ever impacted them in the past. If you suppose that a child shares part of the mother's state, you must know all the inputs she had (and her mother, and so on). Even if you don't, the amount of data you need is monstrous. Now throw in the physics that says we can't know the precise position and momentum of a particle simultaneously, or that electrons exist with certain probabilities in certain orbits, and none of this huge pool of data is certain. On top of this, your machine is self-programming, on unknown stimuli (i.e. the same stimuli affect different human machines differently under identical external conditions)... And behaviorism appears to be a dead end until such time as we learn more about our universe.
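    To put the state argument in code (a toy sketch, obviously - the whole point is that the real version is intractable):

        # A human being modeled as a state machine: the next state folds
        # together the current state and the current inputs, so outputs
        # depend on the entire input history.

        class HumanMachine:
            def __init__(self, inherited_state):
                # Does the child inherit part of the mother's state?
                self.state = inherited_state

            def step(self, inputs):
                self.state = self.combine(self.state, inputs)
                return self.respond(self.state)

            def combine(self, state, inputs):
                # To fill this in you would need every input that ever
                # impacted the machine - and physics says the inputs
                # themselves can't be measured with certainty.
                raise NotImplementedError("this is the hard part")

            def respond(self, state):
                # Self-programming: the response function itself changes
                # with state, so identical stimuli produce different
                # behavior in different machines.
                raise NotImplementedError("so is this")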

    It may be that it is through arrogance that we assume that we can know everything.

    --locust

  • Personally I haven't seen the movie yet. It did look quite hilarious and I was planning on seeing it. Since I've heard such wonderful reviews of it and am a Robin Williams fan, I guess I'll take one of my girlfriends to go see it sometime.
  • There were plenty of references to Asimov with respect to Data in Star Trek. The whole idea of a *positronic* brain came straight from Asimov. It kinda sounds like you want to say "Data came first" when really, the story this movie is based on is much older than Star Trek. :)
  • There are three Allen robot books, and I'm of the opinion that they are not as poorly written as you say.

    My feeling is that Allen has done a good job of replicating the Asimov feel. It'll never be as good as the original, of course.

    You want poorly written, take a look at Greg Bear's Foundation novel (yeughh!)
  • by Anonymous Coward

    This moronic spammer does not speak for me, and I doubt very much that he speaks for naked'n'petrified or open source natalie portman. I am morally certain that he doesn't speak for 70% (the Plausible Right-Wing Troll), either. We've discussed idiots like this, and it's a pisser because it's a damned abuse. One troll (preferably entertaining) in a discussion will get moderated down to -1 and it won't bother anybody who doesn't choose to have a low threshold. We can all live with that, but this is different. For fuck's sake, it's not even funny. All it does is give the real trolls a bad name. I refuse to do anything the moderation system can't cope with, just as a matter of simple decency.

    This guy is just some random jackass with no sense of humor.


    Regards,
    80 Million Dead
  • "My wife liked that movie very much even if she's neutral about science-fiction."

    My wife thinks Sci-Fi is "uninteresting" except for Stargate and Stargate SG1, oh umm and sometimes SeaQuest (she thought Darwin the Dolphin was cute... and whoever that spotted guy was).

    She also likes the usual Mainstream stuff like ID4, MIB, Alien(s)

    What surprised me is she wants to see this, so we are going sometime this week. I hope it's good; Robin Williams bothers me.
  • Fatbrain.com links to Robot Visions [fatbrain.com] and Positronic Man [fatbrain.com] (the latter is out of print, although the publisher may reprint it).

    Or check the card catalog of your local libraries [netscape.com].

  • This kinda trash doesn't take points to get rid of. I send the page to the Slashdot team, and they delete it w/o using up my moderator points. They'll also probably check the IP logs and see that yet another script kiddie needs to have his ISP revoke his access.

    Read the 'moderator how-to' to see exactly what moderators do, kid.
  • Ok, I haven't seen the movie or read the book (though I've read and enjoyed many Asimovs in the past) but it seems to me quite unlikely we'll have any high-tech machines any time in the near future that regularly outlive us. How old is that machine on your desktop? Your car? Your household appliances? The only significant artificial object you frequently associate with that is likely to be older than you is your house/apartment, and even that is not a given. Humans have been dwelling in houses for thousands of years, and we still build so many new ones each year, and tear down the old. My guess is it'll be a thousand years or more before robot design stabilizes to the point where models a hundred years old are not obsolete, and by then us humans will probably be living a lot longer too.
  • Gasp. Another Williams vehicle that aims only to warm your heart and ignore your brain. Like its competitor The Green Mile, this movie begs for Oscar attention with its brain-numbing simplicity. Remember when Robin Williams was cutting edge? Remember when he was funny? Yeah, neither can I.

    What's this KatzSpeak about computer animation becoming an art form of its own? That would be nice if it were viewed as fine art, but it's mostly used for movies, which are about as far as you can get from fine art. Snazzy animation has replaced the only thing worthwhile in SciFi - the story. I've seen bubblegum anime with stronger plots than most big-budget sci-fi flicks. Great graphics in the hands of today's filmmakers have more or less ruined the genre. I say they rename Sci-Fi Com-Ani and be done with it.


  • by NME ( 36282 ) on Monday December 20, 1999 @11:08AM (#1458622)
    This was a movie. It had some special effects, was based on a book and raised some issues. Robin Williams was in it.

    C'mon, admit that all you've seen is the trailer.


    *smirking*

    -nme!
  • Just to keep the record clear: Asimov may have added the standard "-ics" ending to "robot", but "robot" already existed:
    Etymology: Czech, from robota compulsory labor; akin to Old High German arabeit trouble, Latin orbus orphaned -- more at ORPHAN Date: 1923
    (Merriam-Webster Dictionary) [m-w.com]
  • by Hard_Code ( 49548 ) on Monday December 20, 1999 @11:14AM (#1458628)
    I agree with you to a large extent. Intelligence is emergent behavior. We perceive it as something unique and special (well, because it is rather unique and special). However, there is nothing preventing non-organic systems from becoming intelligent. It is possible, if not probable, that non-organic intelligence would be based on a neural network (like its biological counterpart). Neural networks are just humongous pattern matchers, with fuzzy logic. Given that, it may be possible for machines to "feel" certain things, or have quasi-emotions, or intuition.

    However, as a followup poster noted, humans themselves are not purely behavioristic. Therefore, expecting machines to be entirely capable of becoming like humans simply because both systems are behavioristic might not exactly follow. Humans have all sorts of weird non-logical biological influences on our "behavioristic" nature. We do stupid irrational things. We also make unaccountable stupendous and original leaps of innovation and thought. We have state which affects our outputs, starting before we were born. I am not saying that machines/robots/computers/non-organic systems /can't/ ever be human; I'm just saying that, using a behavioral argument, it might not exactly follow. On a pure empirical basis, of course there is nothing stopping it; after all, we're all atoms. Humans are special. That's not to say they are more or less good or bad than anything else, but they are unique.
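    (To make "humongous pattern matcher" concrete: here's the smallest possible unit of one, a single artificial neuron in Python. The weights are invented for show; scale it up by a few billion and you get the fuzzy matching I mean.)

        import math

        def neuron(inputs, weights, bias):
            # Weighted sum squashed through a sigmoid. The graded output -
            # a degree of match rather than a yes/no - is the fuzzy part.
            activation = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-activation))

        # A made-up "pattern": fires strongly when inputs 0 and 2 are present.
        print(neuron([1, 0, 1], weights=[2.0, -1.0, 2.0], bias=-3.0))  # ~0.73
        print(neuron([0, 1, 0], weights=[2.0, -1.0, 2.0], bias=-3.0))  # ~0.02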

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • "I can easily imagine a machine pretending to be human wanting to become fully human. Such a machine would likely have emotional states, since we are unlikely to be able to separate these genuinely human conditions from an abstract intelligence. We don't even have a good definition of intelligence, and even if we fully understood the biology and functioning of the brain, we are unlikely to be able to discuss intelligence apart from it's structural framework." Well, if a machine didn't have emotion, it couldn't /want/ to have emotion. Organic systems have a goal - survive. That goal gives them will, which translates into wants, desires. An artificially-maintained intelligence, having no will to survive, may have to reason to care about anything. The various emotional nuances may simply not have a place. This behavior wouldn't be emergent. Artificially implant a "goal" or survival in the intelligence and these things may emerge. I like ice cream because it is sweet, it is sweet because it has sugar, sugar is sustainence, and sustainence keeps me alive, which has been hardcoded as GOOD in my brain.

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • Wow. A slashdotter pounding his chest with his materialistic, mechanical view of the universe while making bold predictions of an AI future.

    Maybe we can stop being naive for 10 seconds and see you've fallen straight into the 'futurist making predictions' trap that's just laughable.

    What we know about consciousness is next to nothing, and our current theories fit the data badly, especially Behaviorism. Behaviorism, really now - you might as well unearth Aristotelian physics while you're at it.

    Your 'humans aren't special' belief and Hollywood's 'humans are special' belief about AI wanting to be like people are both fiction to me.

    Here's my bold prediction for you: the future will be utterly unpredictable, because past predictions are always wrong. For some reason modern 'thinkers' know better, because they 'know' today's accepted science is the unalterable perfect truth. What surprises are in store for you? Who knows? At least you'll be surprised.

    In the end it's just a lame plot device for a lame movie. Even if AI today were that advanced and content to be just a robot, this idea would still fly with a lot of writers.

    Why? 'Cause anthropocentrism sells tickets and books. Just ask Williams's or Asimov's accountants.

  • Isaac Asimov is one of my favorite writers, but the 3 laws themselves are a bit dated.

    Specifically, they - although they're supposed to be encoded at the lowest level of the robot's "positronic brain" - are stated in terms of high-level concepts; "human", "harm", "orders", "inaction", "protection", "conflict" and so forth. (Needless to say, Asimov himself was quite aware of this, and all of his Robot stories involve juggling the exact definition and application of these concepts.)

    When the 3 laws were first written out - in the early 40's, in the story "Runaround" - the prevailing view of consciousness, the mind, and AI was upside-down from what we think today. Namely, concepts, analogies and calculations were supposed to be low-level "intelligence" operations, and robotic (or human) consciousness was built with these as building blocks.

    Instead, today we view consciousness, concepts, analogies and even mental calculations as emergent properties of a great number of low-level functions which seem to be simple feedback loops, pleasure/pain learning circuits, perceptual functions, and what linguist George Lakoff calls "conceptual metaphors" [edge.org]. One of the points of the modern view is that, probably, an AI would have to be taught to do mental calculations, and would probably do them with the same speed (and the same accuracy) as a human.

    So, when practical robots come about, they'll be built on physical metaphors, basic learning circuits, and will have to "learn" the equivalent of the 3 laws once they can grasp the abstract concepts involved - and they'll probably want to argue a lot about the implications.
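    (For a feel of how little machinery a "pleasure/pain learning circuit" needs, here's one in Python - a bare reward-weighted update, with every constant invented:)

        import random

        # The circuit learns which of two reflexes to prefer purely from
        # pleasure (+1) and pain (-1) signals - no concepts anywhere.
        preferences = {"reach": 0.0, "withdraw": 0.0}
        LEARNING_RATE = 0.1

        def act():
            if random.random() < 0.1:                       # a little noise
                return random.choice(list(preferences))
            return max(preferences, key=preferences.get)    # else best so far

        def learn(action, signal):
            preferences[action] += LEARNING_RATE * signal

        for _ in range(100):
            action = act()
            signal = 1.0 if action == "withdraw" else -1.0  # the stove is hot
            learn(action, signal)

        print(preferences)  # "withdraw" wins, yet a concept of "hot" never appears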

  • Well, yes. They are bunk. That's why it's fiction. I'm not sure anybody, even Asimov, necessarily expected them to be taken as a serious model. For an interesting slam on them, check out the Harrison/Minsky book "The Turing Option", where the characters discuss their failed attempts to mimic the three laws.

    Having said that, the three laws are an excellent model on which to build the huge base of work that Asimov did. Three simple laws, and yet he found enough material to write numerous stories, several novels, etc... He even evolved the idea, commenting in his later stories to the effect that "Earlier robots were of the 'if x more harm than y then do y' variety", while later robots were better able to weigh potentials.

    The entire idea behind the three laws is the notion that human beings are not comfortable with their own creations unless they are convinced that there is some sort of built-in protection. Sort of like a Frankenstein-complex clause. We are afraid of being harmed, therefore the first law MUST be "do no harm" (hell, we even make our doctors swear to that in the Hippocratic Oath!). The risk to Andrew Martin throughout his entire life is "You are human, I am not, therefore my 'life' is worth less than yours." At the root of all Asimov's robot stories is: "We are capable of creating better than our equals, yet we will deliberately cripple them to be beneath us."

    d

  • Remember in the book when Andrew's law firm decides not to pay one of their janitors "...because he's obviously a robot..." due to the fact that he has a prosthetic heart? An absurd claim indeed, but deliberately made to set a precedent that blurs the line between human and robot, thus facilitating Andrew's growing claim on his own humanity. Too bad they just cut the scene out; it would have been useful to the plot engine to ambiguate (is that even a word?!) the boundary between Andrew and the human race.


    Solomon Kevin Chang
    Database Design and Programming
    Disney Televentures
    (Yeah, sorry, it was my parent company that did the film)
  • Well, if a machine didn't have emotion, it couldn't /want/ to have emotion. Organic systems have a goal - survive. That goal gives them will, which translates into wants, desires.

    I disagree. The evolutionary value of emotion is as an additional stimulus to act, one not based on the rational processes developed in human brains. When we lack enough data to rationally decide, we fall back on earlier mechanisms: emotion/instinct. Emotion itself is a combination of impulses based on past history not consciously processed, plus biochemical impulses that have been subject to the process of natural selection for thousands if not millions of years. There's an evolutionary value to acting without sufficient data, and a process that mimicked this in computers might be equally useful.
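    A sketch of that fallback structure in Python (thresholds and scores invented - only the shape of the thing matters):

        def rational_evaluation(options, evidence):
            # Score options by the evidence in hand; confidence is just
            # how much of the picture the evidence covers.
            scores = {opt: evidence.get(opt, 0.0) for opt in options}
            confidence = len(evidence) / len(options)
            return max(scores, key=scores.get), confidence

        def instinctive_reaction(options, instincts):
            # The older machinery: weights shaped by unprocessed history
            # and natural selection, consulted without deliberation.
            return max(options, key=lambda opt: instincts.get(opt, 0.0))

        def decide(options, evidence, instincts, threshold=0.8):
            best, confidence = rational_evaluation(options, evidence)
            if confidence >= threshold:
                return best                                  # reason it out
            return instinctive_reaction(options, instincts)  # fall back

        options = ["fight", "flee"]
        print(decide(options, {"fight": 0.9, "flee": 0.4}, {"flee": 1.0}))  # 'fight'
        print(decide(options, {}, {"flee": 1.0}))                           # 'flee'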

    That's my theory at least.

    --LP
  • Actually, in the Asimov universe, robots never controlled humans. Daneel Olivaw (later Eto Demerzel in the Foundation novels) developed the ability to manipulate minds, but never used it for explicit control.

    I like the fact that Asimov didn't go down the route of many a B movie and make the robots "take over the world." Far too many people fear technology today because of this, and see it as an inevitability. I think Asimov investigated the consequences of his 3 laws of robotics quite thoroughly, and at the very least demonstrated that robots won't inevitably turn on their creators (provided the creators have a little common sense).

    Doug
  • You might possibly be correct given a self-aware machine in an isolated environment.

    Andrew was relegated to a very low (lower than human slave) class due to what he was. I would consider it a very natural instinct for any intelligent being to see their situation, compare it to their persecutor's situation, and come to the conclusion that it was better to be the persecutor rather than the victim.

    Doug
  • There are serious problems with behaviourism.

    ...if behaviourism were true, we could teach pigs to sing ...There are built-in functions in the brain that make a difference in intelligence...

    ...Children will learn whatever language they are exposed to. They learn the rules of their language without ever understanding them as rules. They do not make the type of mistakes a trial-and-error behaviour renforcement model would require of them...

    ...large areas of human behaviour are biologically influenced...

    ...humans are not tabula rasa...


    I'm fairly certain konstant was using a broader definition of behaviourism than you imply. I don't think any modern (mainstream) psychologist would argue against the above assertions, but given that behaviourism isn't just about classical Pavlovian conditioning, it is not only still very much alive but arguably the only school of psychology which is remotely scientific.

    I refer to the sort of behaviourist stance set out by Daniel Dennett: roughly, that there is no "Cartesian Theatre"; that we are pure mechanism and any mystical notion of self-awareness beyond that of simple internal feedback is merely an illusion; and that it is useless to speculate about unmeasurable experiential qualities (of internal states) which aren't necessary to explain observable behaviour.

    The strength of this behaviourist hypothesis is that it has the advantage of simplicity (no deus ex machina) and yet still can't be disproved. I'm not asking you to take my word for it; there's hardly space here to paraphrase the position, let alone justify it. Do read Dennett though.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Good point. Though I think a better analogy than your ethnic groups adopting the mores of their host society would be the domestication of animals. Farm animals and pets were selectively bred according to clear preferences on the part of the breeders (and the markets they were selling to). In the case of dogs and cats, bred for companionship, this resulted in a set of behaviours which seem at least quasi-human even when you allow for the anthropocentrism of the pets' owners.

    The same thing applies to domestic AIs right now. Just look at Aibo and the domestic robot under development by NEC. Both are playing to highly emotional preferences of the target market.

    If the latter example is any indication, then in future the domestic robots with the most sophisticated intelligence will be those intended to provide companionship for the owner and interact with other humans in the manner of a servant. Just as in Asimov's fiction I suppose. Simply because that's what people will most likely be prepared to pay a lot of money for.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • In fact, Mr. Asimov is credited by the Oxford English Dictionary with the creation of 3 words:

    'Positronic'
    'Psychohistory' - which has a different meaning than the one given in the Foundation series.
    'Robotics'

    This information comes from "Yours, Isaac Asimov: A Lifetime of Letters".

    Personally, with regard to movies, I'm hoping to see the "I, Robot" movie done. The script, written by Harlan Ellison, is pretty good.
  • One thing I've been noticing is that a lot of post-cyberpunk (even the most brilliant stuff, for instance Pat Cadigan's newest book) seems trapped in oversophistication. I've read multiple books by Cadigan and William Gibson (really _good_ writers) and see the same pattern - a groundbreaking initial book that was really clear and powerful - and then, without necessarily lowering the quality much, the works get _baroque_, so ornate and hard to follow and cynical/detached that you can't latch onto anything. Pat Cadigan's 'Fools' used _typefaces_ as a literary device to depict a multiple-personality viewpoint. It's as if, as this literature progresses, the writers try to make bigger and bigger points until they're so big as to be meaningless. By comparison, Clarke, Asimov etc. were from an older school of literature. It's tempting to say they were trying to write for the reader instead of just for high art's sake - but the modern writers are also trying to write to be read - it's just that if the reader wants to be wowed, overwhelmed and left stunned, tying things up in neat little endings won't make it anymore...

    Compare Neuromancer with, say, 'Imperial Earth' by Clarke. The tones are utterly unlike. Both books conclude with a clear finishing point, but again they're totally unlike. The Gibson book concludes with a big conceptual leap played very deadpan, and the idea is on a cosmic scale, also nihilistic (as it will mean very little to the protagonist who's left behind by the lessons of the narrative). The Clarke book concludes with a small choice played up for effect, and the idea is small and personal and rather sentimental- but will affect everyone in the story, and (it's suggested) for the better.

    Why would the latter be _worse_? It's hard to argue that the William Gibson universe is better than the perhaps sentimental Clarke universe (or indeed Asimov's universe). It is as if people wearing Nike sneakers and waiting for their stock options to vest want to find a vicarious nihilism through modern SF writing, a bleakness that they are looking for and not finding in their own lives. One might well wonder whether there will be a recurrence of hope and meaning in SF literature in the next five years - since the Real World is poised to deliver another wake-up call.

    Of course, if you read and believe 'The Long Boom' voodoo happytalk, you might as well get heavily into reading the most nihilistic and meaningless cyberpunk you can possibly find: it might be your subconscious trying to tell you not to be too much of an idiot :)

  • Well, the "robots take over humanity" thing was never actually written about in any of his books... but it _is_ implied that at one time some robots interpreted the zeroth law in such a way as to justify _controlling_ humanity.

    item the first is the fact that robots are illegal in the Empire.

    item the second is when, in the later Foundation books, that guy starts searching for Earth and finds all the original planets (from the Daneel and Elijah Baley novels)... there are some scraps in each of those visits where Asimov implies that some of those planets went under due to a conflict between robots and humans...

    item the third is Asimov's assertion that it is possible for robots to interpret the laws so radically that it makes life difficult for humans. Take for example the planet of recluses (can't remember the name) where robots define "human" as _only_ people living on that planet alone...

    At any rate, it was my impression that at some point in Foundation prehistory the robots were running the show because it made it easier for them to obey the zeroth law; the humans got pissed and had a massive uprising; as a result the Empire outlawed robots, and the robots had to go into hiding in order to continue protecting humanity.


  • The post says "Asimov's accountants", which, if you think about it, might refer to his estate.
