Flesh and Machines: How Robots Will Change Us

Peter Wayner writes: "A long time ago, I posed for a portrait at a church fair. The priest wandered by, paused for a second, and then caught up to me later. 'Do you like the picture?' he asked. When I said it was fine, he told me, 'Oh, I think it's terrible. It doesn't look like you at all. But that doesn't matter. The artist is supposed to create a picture of what you think you look like.'" Read on to see what this has to do with robots as Peter reviews Rod Brooks' new book.
Flesh and Machines: How Robots Will Change Us
author Rod Brooks
pages 260
publisher Pantheon Books
rating 8
reviewer Peter Wayner
ISBN 0375420797
summary A charming look at an unconventional (and powerful) way to think about and design robots.

In a way, robots are portraits of humans. Machines are just machines and assembly lines are just assembly lines. The buckets of bolts don't become robots until they start to take on some of the characteristics and a few of the jobs of humans. A drill for tightening a bolt may replace a biceps, but it's just a motor until it's on the end of a fancy mechanical arm that positions it automatically. Then it's a robot ready for a call from central casting.

Defining just what is and is not a robot is not an easy job for technologists, because the replicants and androids are a touchstone and a benchmark for measuring our progress toward the future. It's 2002 and everyone is asking: Where's mad HAL steering a spacecraft to oblivion? Or more importantly: Why am I still vacuuming the floors and mowing the lawn by myself?

If you are asking these questions, then you might want to read the answers Rod Brooks, the director of MIT's Artificial Intelligence Laboratory, offers in his charming book, Flesh and Machines: How Robots will Change Us. The book is half a thoughtful biography of the various robots created by his graduate students and half a philosophical explanation of what to expect from the gradual emergence of robot butlers.

The biographical part is probably the most enjoyable. He and his students have produced more than a dozen memorable robots who've crawled, rolled and paced their way around MIT. One searched for Coke cans to recycle, one tried to give tours to visitors, and another just tried to hold a conversation. Brooks spends time outlining how and why each machine came into being. The successes, and more importantly the failures, become the basis for creating a new benchmark for what machines can and can't do.

An ideal version of this book should include a DVD or a video cassette with pictures of the robots in action because the movement is surprisingly lifelike. Brooks is something of a celebrity because a filmmaker named Errol Morris made a droll, deadpan documentary that cut between four eccentric geniuses talking about their work. One guy sculpted topiary, one tamed lions, one studied naked mole rats, and the fourth was Rod Brooks, the man who made robots. Brooks minted the title for the film, Fast, Cheap and Out of Control, a phrase he uses to describe his philosophy for creating robots. The movie tried to suss out the essence of genius, but it makes a perfect counterpoint for the book by providing some visual evidence of Brooks' success.

One of the stars of the movie was a six-legged robot called Genghis, a collection of high-torque RC airplane servo motors that Brooks feels is the best or most fully-realized embodiment of this fast and cheap approach. The robot marches along with a surprisingly life-like gait chasing after the right kind of radiation to tickle the IR and pyro-electric sensors mounted on whiskers. If you've seen the film, it's hard to forget his gait.

Brooks says that the secret to the success of Genghis is that there is no secret. The book's appendix provides an essential exploration of the design, which is short and very simple. The soul of the machine has 57 neuron-like subroutines, or "augmented finite state machines" in academic speak. For instance, one of the AFSMs responsible for balance constantly checks the force on a motor. If it is less than 7, the AFSM does nothing, and if it is greater than 11, the AFSM reduces the force by three. That doesn't seem like very much intelligence, be it artificial or real, but 57 neuron-like subroutines like this are all it takes to create a fairly good imitation of a cockroach.
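The balance rule reads almost directly as code. Here is a minimal sketch in Python; the thresholds come from the book's example, but the function name and framing are illustrative, not Brooks' actual implementation:

```python
# One "augmented finite state machine" (AFSM) in the spirit of the
# balance rule described above. Thresholds 7 and 11 are from the
# book's example; everything else here is a hypothetical sketch.

def balance_afsm(force):
    """One tick of the balance AFSM: leave small forces alone,
    back off when the force on the motor gets too large."""
    if force < 7:
        return force          # below threshold: do nothing
    if force > 11:
        return force - 3      # too much force: reduce by three
    return force              # in the dead band: no action

# A subsumption-style robot is just dozens of rules like this,
# each wired to a sensor and ticked continuously.
print(balance_afsm(5))   # 5  (untouched)
print(balance_afsm(14))  # 11 (reduced by three)
```

Fifty-seven tiny rules of roughly this complexity, wired together, are the whole of Genghis's "brain."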

Brooks calls this a "subsumption architecture" and the book is most successful describing the days that he spent with his graduate students building robots and seeing what the architecture and a handful of AFSMs could do. He half mocks the roboticists who load up their machines with big computers trying to compute complex models of the world and all that is in it. In his eyes, the lumbering old-school machines just move a few inches and then devote a gazillion cycles to creating a detailed, digital description of every plant, brick or wayward child in the field of view. After a few more gazillion cycles, the machine chooses a path and moves a few more inches. Even when they find their way, time passes them by.
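The contrast with world-modeling can be sketched as a toy priority scheme in the spirit of subsumption. These layer names and the arbitration loop are hypothetical simplifications, not the actual AFSM wiring:

```python
# Hedged sketch of subsumption-style layering: each behavior is a
# small rule, and higher-priority layers suppress lower ones. No
# world model is built; the robot just reacts to its sensors.

def wander(sensors):
    # Lowest layer: always has something to do.
    return "step-forward"

def avoid(sensors):
    # Higher layer: fires only when an obstacle is sensed,
    # suppressing the wander layer's output.
    return "turn-away" if sensors.get("obstacle") else None

def arbitrate(sensors, layers):
    # Try layers from highest priority down; first answer wins.
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action

print(arbitrate({"obstacle": True}, [avoid, wander]))   # turn-away
print(arbitrate({"obstacle": False}, [avoid, wander]))  # step-forward
```

The point is that each decision costs a handful of comparisons, not the gazillion cycles of a full scene reconstruction.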

There are no complex control mechanisms sucking down cycles on the machines from Brooks' lab, the source of the claim that they're "out of control." It's just AFSMs wired together. One of the robots fakes human interaction by tracking fast motion and flesh-colored pixels. Brooks marvels at how a few simple rules can produce a machine that is remarkably lifelike. If you're not sure, they have videotapes of lab visitors holding conversations with the machine, which apparently takes part in the conversation with the patient interest of a well-bred host. As if by magic, the AFSMs create enough human-like movement that the visitor in the tape begins treating the robot like a human!

If you're still not sure, you might buy a "My Real Baby" doll designed by Brooks with the help of the adept mechanical geniuses in Taiwan. The story of taking a highbrow concept from MIT to the local toy store is a great part of the book. The so-called toy is filled with AFSMs that tell it when to gurgle, when to pout, when to sleep, and when to demand sustenance. Alas, the toy makers tell Brooks that the market can't stomach so much innovation. One new thing at a time.

So are these machines truly successful simulacra? Are they infused with enough of the human condition to qualify as science-fiction-grade robots, or are they just cute parlor tricks? Some readers will probably point to the AFSMs and scoff. Seeing the code is like learning the secret to a magic trick.

Brooks, on the other hand, is sure that these machines are on the right track. In a sense, he makes it easier for his robots to catch up with humans by lowering the bar. On the back of the book, Brooks ladles out the schmaltz and proclaims, "We are machines, as are our spouses, our children and our dogs... I believe myself and my children all to be mere machines." That is, we're all just a slightly more involved collection of simple neurons that don't do much more than the balance mechanism of Genghis. You may think that you're deeply in love with the City of Florence, the ideal of democratic discourse, that raven-haired beauty three rows up, puppy dogs, or rainy nights cuddled under warm blankets, but according to the Brooks paradigm, you're just a bunch of AFSMs passing numbers back and forth.

If you think this extreme position means he's a few AFSMs short of a robot professor though, don't worry. Brooks backs away from this characterization when he takes on some of the bigger questions of what it means to be a human and what it means to be a machine. The latter part of the book focuses on what we can and can't do with artificial intelligence. He is very much a realist with the ability to admit what is working and what is failing. His machines definitely capture a spark, he notes, but they also fall short.

He notes with some chagrin that his robot lawnmower leaves behind tufts of uncut grass. Why? It uses a subsumption-like algorithm that doesn't bother creating a model of the yard. The robot just bounces around until the battery runs out. Eventually the laws of random chance mean that every blade should be snipped, but the batteries aren't strong enough to reach that point at infinity. A model might help prevent random lapses, but that still won't solve the problem. Alas, the machines themselves are limited by the lack of precision. One degree of error quickly turns into several feet by the other end of the yard. A robot wouldn't be able to follow a plan, even if it could compute one.

What's missing, Brooks decides, is some secret sauce he calls "the juice". Computation and AFSMs may work with cockroaches, but we need something more to get to the next level. Faster computers can do much more, but eventually we see through the mechanism. Genghis looks cool, but learning about the 57 AFSMs spoils the trick.

The standard criticism of Brooks' machines is that they don't scale. There is no superglue juice that can save a scaffolding built of toothpicks. The AFSM may produce good cockroaches, but that's just the beginning of the game. Humans are more than that. Eventually, the AFSMs become too unwieldy to be a stable programming paradigm. In fact, Brooks sort of agrees with this premise when he suggests that Genghis is his "most satisfying robot." It was also one of the first. The later models with more AFSMs just don't rank.

But humans and other living creatures don't scale either. We may be able to run 20 miles per hour, but only for 100 yards. We may be able to troll for flames on five bulletin boards, but eventually we get our pseudonyms confused. Limits are part of life and we only survive by forgiving them. To some extent, the lifelike qualities of his robots are direct results of the self-imposed limits of the AFSMs.

Your reaction to these machines will largely depend upon how many of the limits you are willing to forgive. Stern taskmasters may never be happy with a so-called robot, but a relaxed fellow traveller may ignore enough of the glitches to interface successfully. Some will see enough of themselves to be happy with the whirring gizmos as a portrait of humanity, and others may never find what they're looking for. That's just the nature of portraits. For me, this book is an excellent portrait of a research program and the collection of questions it tried to answer. You may look in the mirror and want something different, but it's worth taking a look at these machines.


Peter Wayner is the author of two books appearing this spring: the second edition of Disappearing Cryptography, a book about steganography, and Translucent Databases, a book about adding extra security to databases. You can purchase Flesh and Machines from Barnes & Noble. Want to see your own review here? Just read the book review guidelines, then use Slashdot's handy submission form.


Comments Filter:
  • by qurk ( 87195 )
    Robots are fun, they're a little too expensive for my tastes tho.
  • by eaddict ( 148006 ) on Tuesday March 26, 2002 @09:24AM (#3227925)
    Until robots get to the price of a washer/dryer we won't see them much of anywhere. Look how long it is taking to get HDTV going in the States! And DVD players might overtake VCRs this year. And forget about the DVD recorders! Every time I see or hear about a new gadget that claims it is priced near that of a luxury car, I cringe. Maybe my great-great-grandkids will get to play with them.
    • Until robots get to the price of a washer/dryer we won't see them much of anywhere. Look how long it is taking to get HDTV going in the States! And DVD players might overtake VCRs this year. And forget about the DVD recorders! Every time I see or hear about a new gadget that claims it is priced near that of a luxury car, I cringe. Maybe my great-great-grandkids will get to play with them.

      It does rather depend on how useful the gadget is. I believe that luxury cars have always been priced near the price of a luxury car, but they seem to sell a few million of them per year.

      I can certainly see some consumer robotics applications that people would pay that kind of money for. Even some that aren't sex-related.

    • Lets face it. Without a W/D set you would be whacking your clothes on a washboard in a tub and wringing them out by hand.

      Instead you drop your clothes and soap into a box and give some instructions (turn indicator knob). No human labor involved. Sounds like just as much of a robot as the other items mentioned above

      • I know it's not any kind of official definition, but I consider a 'robot' something that doesn't need special input and output, or any help in the middle.

        A robot washer/dryer would grab my clothes from the hamper (we'll assume the hamper is on top of the robot; I won't require it to walk around the house), empty the pockets, sort the whites from the colors, wash, dry, and fold. It would also remove from the process any clothes that I've indicated require decisions on my part, and keep the load balanced.

        Only then will I call it a robot. Until then, it's just two tools sitting next to each other.

        There are things out there I would almost call a robot. Some of the high-end copiers, the ones that can fold, staple, sort, etc. That's the cheapest thing I can think of that I would call a robot. And it still can't handle documents that start stapled.

        In other words, the main difference between a robot and a simple tool is that a robot doesn't need you to hold its hand. You give it a task and it can do it without you needing to make sure everything is set up correctly every step of the way, just like a person. And if it can't handle something, it has to be able to realize that and stop. Otherwise it's a complicated hammer.

        • A robot washer/dryer would grab my clothes from the hamper (We'll assume the hamper is on top of the robot, I won't require it to walk around the house.), empty the pockets, sort the whites and colored, wash, dry, and fold.

          Basically, you're saying that if you can teach it to make and serve coffee you can marry it and call yourself lucky...:)
    • Make a distinction between humanoid, human-like robots and just plain robots.

      The "just plain" variety are all over the place - manufacturing, sewing, blending, cooking... something with a programmable motor is more or less a robot, no?

      In French, a "Kitchen robot" is a variable speed multifunction blender... they only cost a few bucks.

      AI is a whole other ball game...
  • by Anonymous Coward on Tuesday March 26, 2002 @09:25AM (#3227929)
    Robots will change us into a race of fugitive creatures scheduled for liquidation, forever running from our own creations. Seriously. On August 29, 1997 this will happen.
  • by Leven Valera ( 127099 ) on Tuesday March 26, 2002 @09:31AM (#3227960) Homepage Journal
    All humans are machines, built up to amazing complexity in the tools of flesh, sinew, bone and chemicals instead of steel panels, rivets and framework.

    Oh, and humans run the single most complicated OS ever. :) And we're just now beginning to find the bugs. Maybe the human race just doesn't scale well?

    LV
    • Oh, and humans run the single most complicated OS ever. :)

      I'll believe that when somebody can upgrade or replace it.
      • We do that all the time... Sometimes as learning (upgrade), sometimes as psychotherapy (upgrade or replacement at least in part, especially via the use of drugs with psychotherapy), sometimes with what is known as 'life experience'...

        That we can't just remove it as a whole & take apart its raw code yet to rewrite portions directly doesn't mean it isn't true. We've just gone around our limitation (lack of source code) by interaction...
    • Sayeth Leven Valera:
      " Oh, and humans run the single most complicated OS ever. :) "

      And look what happens if you get a crash... Just go to your nearest mental hospital. And there's no rebooting.
    • All humans are machines...


      While many of us might agree with this statement, it is in no way a proven fact. And there are lots of people out there who'll tell you that a human is more than just a body.
      • Even if there is more to us than this body, we are made of the same building blocks of life as all other life on this planet.

        If it wasn't for nucleic acids - tiny machine-like molecules - we wouldn't be here.

        Scientifically, all life on this planet started out competing to consume organic molecules. In turn, organisms developed some pretty cool tools, such as chloroplasts and mitochondria, to deal with the environment around them.

        We are machines, but we can study that fact.
    • And we're just now beginning to find the bugs.

      I don't know what people you live with, but I think "bugs" in humans have been visible for quite some time now. Think of all of the phobias that people suffer from. Think about all of the times that you've said, "Why'd I do that, I should've known better." Just turn on your TV and watch the Jerry Springer show. I mean, come _on_, we're so full of bugs it's ridiculous.

    • > All humans are machines, built up to amazing complexity in the tools of flesh, sinew, bone and chemicals instead of steel panels, rivets and framework.

      "So... what does the thinking?"

      "You're not understanding, are you? The brain does the thinking. The meat."

      "Thinking meat! You're asking me to believe in thinking meat!"

      "Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you getting the picture?"

      "Omigod. You're serious then. They're made out of meat." [setileague.org]

  • I can imagine that the first use of robots will be in espionage and other surveillance applications.

    The little cockroach and fly robots with tiny cameras that peek in on people.

    The time to really worry is when these show up as radio shack kits in about 10 to 20 years.

    No one get all paranoid now.

    • yea, after the cockroach robots go obsolete we can just use them for stepping practice
    • On the other end of the scale, I can picture giant construction robots, as in Kim Stanley Robinson's Mars series, doing all sort of automated work.

      It costs too much to build a house these days and most of it is not the raw materials... so I'm really looking forward to this!!

    • I can imagine that the first use of robots will be in espionage and other survelliance applications.

      Would the surveillance drones in use in Afghanistan right now fall into this category? I haven't heard much about them as yet, but it seems at first glance that they'd fall into such a category.

    • I suspect that the US government (and some others) have been using what they euphemistically term "biologic sensing platforms/delivery systems" for some time.

      Maybe not at the insect size range, but perhaps at the dog/dolphin/avian level.

    • Memories of a sci-fi book called "The men from P.I.G. and R.O.B.O.T." which I read many years back. The second half of that (the "ROBOT" bit) covers something similar, with the lawman using swarms of insect-like monitoring probes.

      Grab.
  • "Until we understand the physics of consciousness, artificial intelligence is impossible." And it's 2002, so Peter is right. I remember an old maths puzzle: a monkey climbs a pole, covering half the remaining distance with every step. When will it reach the top? The answer, of course, is never. And that's what robotics seems to be doing: getting nearer and nearer, but never reaching the Holy Grail. Well, Hollywood has made much more progress!!
    • The monkey analogy is true, but in the real world, the monkey might get pretty close to the top of the pole and then quit the halving game to reach the top, because it can. Maybe the issue is not "is artificial intelligence possible," but rather how close we need to get before the difference between machine intelligence and human intelligence is negligible.
    • Old indeed. That's Zeno's Paradox [bbc.co.uk], restated very slightly.
    • I'm in two minds about this. I think he's right, but I also think we are going to get a hell of a lot closer to sentience than we are now, even with strictly deterministic, non-quantum devices.

      I think humans are capable of something fundamentally impossible for deterministic computers, but at the same time I think that most of us barely use these faculties. Most of what we do is mundane, and perfectly possible to mimic on a computer. We may not be able to mimic conscious life, but we may be able to prove that most of us spend most of our lives in a zombified state.

      Two things to emphasize: all Turing machines are equivalent, and speed and intelligence are independent. If it is ever possible to produce consciousness on a deterministic computer, then it is possible on today's hardware. If you had a radio conversation with an intelligent alien who lived 100 light years away, it would take 200 years to get each response - that doesn't mean he is stupid. Similarly, if it is possible to mimic humanity on tomorrow's hardware, we should be able to do it, slowly, on today's.
    • A Monkey climbs a pole, covering half the distance left in every step. When will it reach. The answer is of course never.

      Aah, the old paradox. It's based on a false premise though.

      To climb a pole, the monkey must move. To move, it must displace molecules of one substance (say air) with that of another (say a monkey hair molecule). In other words, although movement appears to be constant, it is actually a series of discrete steps.

      The monkey will reach the top of the pole when its next step cannot be broken down any further - ie. when it has only one molecule of another substance left to displace with its own.

      Unless you're into nuclear monkey of course, where it could start splitting up the molecule, then the atoms underneath it and then have a crack at the sub-atomic particles beneath that...

      Cheers,
      Ian

      • we always broke it down with time. While the distance decreased by half, so did the time to cross that distance. No paradox there at all.
        • "we always broke it down with time."

          You know, that's a hell of a lot easier than the way I learnt.

          Cheers,
          Ian

      • except the movement doesn't really occur in discrete steps - the molecules are moved in a continuum.
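The time-based resolution in the replies above can be written out. Assuming the monkey climbs a pole of length $D$ at constant speed $v$, step $n$ covers $D/2^n$ and takes $D/(2^n v)$, so the total time is

$$\sum_{n=1}^{\infty} \frac{D}{2^n v} \;=\; \frac{D}{v}\sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; \frac{D}{v}.$$

The infinitely many steps fit into the finite time $D/v$, which is why the monkey (and Achilles) gets there after all.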
    • To simply say that machine intelligence will eventually be asymptotic to human intelligence is meaningless - it need only be close enough that we are unable to tell the difference by any discernible means. Scale matters in all things human - your asymptote argument doesn't hold. We don't live on a graph.
    • "I Remember an old maths puzzle. A Monkey climbs a pole, covering half the distance left in every step. When will it reach. The answer is of course never."

      No, the monkey will fall on Xeno's head and kill him. Now what will we do with all our Thetans? His courage was to change the world!

      graspee

      classical allusion: 1
      classical allusion is not funny: -1
      bizarro segue: 2
      lame Xena reference: -1

      total: 1

      graspee

  • Ack! If I remember correctly, there was an article about chips that could re-wire their own gates. Essentially they were self-learning. Then there was a poster on that topic who mentioned a Hypercomputer (the OS learns at a fast rate).

    Well, biological creatures don't scale well at all, right? We have access to the code that Rod Brooks made, right? Well, using other technology, let's evolve the code. If for some reason that doesn't work, we have most (all?) of the human genome done. How about other DNA strings?

    We either "evolve" the creature or we model it after the DNA it came from. Anyways, score 1 for robotics.
  • Why Human? (Score:5, Insightful)

    by Mister Transistor ( 259842 ) on Tuesday March 26, 2002 @09:36AM (#3227987) Journal
    Why is it that robots must be envisioned as humanoid? Specialized robots look very little like a human, such as industrial handling robots. A more generalized design for multi-purpose applications need not look or act anything like a human being to get its tasks accomplished. I think a lot of fear and paranoia from the ignorant might be avoided by specifically making them NOT look humanoid. Who says that the human form is the be-all and end-all general-purpose vehicle? The only "pro" for them being humanoid is that they must negotiate a world built for humanoids.
    • yeah! Look at R2D2! He got lots of shit done by being a rolling trash can!
      • Look at the Star Wars movies again.

        You'll note that George Lucas failed to conceptualize anything closely resembling steps (at least that droids are shown traversing).
    • Asimov already explored this. I wish I could recall the title. U.S. Robots hired a maverick who realized that the key to alleviating public fears of robots was not to make them more complex and human-like, but to make them *less* complex and more specialized. He designed robo-worms and suchlike.

      One of the later "Foundation" books has nonhumanoid robots, too, and there is a brief discussion of what it is that makes a machine a robot.

      Hmmm, probably the only facet of robots-in-society not explored by Asimov is the possibility that people never would really come to fear them. The two attitudes I've seen toward robots are basically (a) "so what?" and (b) "cool!"
      • It was actually more complex than that. US Robots used a *robot* to work out how to make humans less opposed to robots. The robot worked out that if you used robo-worms and robo-birds to help maintain the ecological balance, humans would get used to robots and tolerate them. As with you, I can't remember the title! :-)

        Thing is, we don't fear them bcos they're locked away. I think if an 8 foot humanoid robot came stomping down the road tomorrow, ppl *would* be scared. And then there's the whole slavery thing which Asimov was into (and which Pratchett also covered in Feet of Clay) - if they're sentient, can we force them to work? And there's a third attitude you've missed which came along in the 1970s and 1980s, which is "f***ing robots stole my job", and Asimov made this one of the underlying causes of the anti-robot movement.

        Asimov certainly has covered a lot of this area. The Caliban/Inferno series covered a bit more by hypothesizing robots which *weren't* forced to work but could choose their actions. It's a shame all these books are quite bad fiction - damn good ideas, but bad novels. Ho hum.

        Grab.
    • The important part is not that robots look humanoid, but rather look and move in a way people will ascribe human-like qualities to them. Take a Furby for example. It is not humanoid, but looks and acts in a way that people will read things into what it does in such a way that it takes on human-like characteristics.
    • Re:Why Human? (Score:3, Insightful)

      by Hard_Code ( 49548 )
      'The only "pro" for them being humanoid is they must negotiate a world built for humanoids.'

      That's a pretty damn big "pro". I don't care if the robot is a freakin genius...if it can't open a door or walk up stairs it's not going to be able to do much.
  • Don't you wish somedays you had a few robot cockroaches for a little covert reconnaissance mission?

    Zorg: Take this glass for instance. Sterile. Pristine. Boring. But if it's broken...
    [glass breaks and many small robots come zooming out to clean up the mess]
    Now look at that, a veritable ballet ensues, so full of form and color!
  • As a teenager I was fascinated by anything robotic. This led me to a study of the fundamentals of AI (Hofstadter, Lisp -- the whole schmear). But after two semesters I realized the whole field is fooling itself. AI just won't work.

    Biological neurons have been shown in the laboratory to grow new connections based on information learned. In a robot, what possible mechanism could guide such growth? Programming is the only answer, but keep in mind that "programming" is just shorthand for "the intelligence of the programmer". In other words, the AI itself isn't self-contained, as it were.

    There is no other way for "mental" activity to be guided, thus AI will always be as unattainable as the Philosopher's Stone.

    • There's no machine I've ever heard of nor seen that could generate a truly random number the way humans can. While this is a "simple" operation, it often plays into our lives without us realizing it, at the most critical junctures, such as split-second decisions between two equal alternatives.

      Possibly this point is moot if pseudo-random based on some external element that is more or less random in nature is an acceptable alternative to internally generating a random number.

      I choose 156 -- why? Dunno. It sounded good at the time
      • Humans don't make good random number generators. For lotteries, etc., most people pick repetitive numbers like birthdays, SSNs, phone numbers. From another person's point of view, having no clue what the other person's motivations are, their choices seem random. To the person picking them they aren't. This is further evidenced by a stage magic trick I once saw, where the performer predicted a "random" number a spectator had picked. It was explained to me afterward that 95% of people will pick something like 6 (don't quote me here - I don't remember the EXACT details of the trick), but it relied on the fact that people act in very predictable ways. The other 5% are insane and unpredictably erratic...
      • And you think that a number that you think of is truly random?

        If a computer cycled through numbers and chose one of them the next time disc I/O was requested, this would be as random as anything you like. Sure, it can be repeated, but so can anything that goes on in my head.

        What I'm saying is: just because you don't understand why you choose a seemingly random number doesn't mean it's actually random. You know all those tricks about people being made to pick a random number (David Blaine style) and it's known to other people -- well, they would have thought their choice was random.

      • There's no machine I've ever heard of nor seen that could generate a truely random number

        It's true that no computational algorithm can generate truly random numbers, without input from some random physical process. The real test would be whether you could look at the history of numbers generated and predict the next number. This would mean inferring the state bits of the algorithm and deducing its inputs, if any. Cryptographic hashes are algorithms specifically designed to make that difficult.

        In physics, you don't get real randomness without quantum effects, but statistical processes can give you highly unpredictable numbers, unless you're prepared to do faster-than-real-time molecular dynamics on 10^23 particles.

        Here's a random bit generator suitable for use with a crypto hash algorithm to make good random bits: http://willware.net:8080/hw-rng.html [willware.net]
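The hashing step described above - feeding raw physical samples through a cryptographic hash so the output bits are hard to predict - can be sketched with Python's standard library. The sample values here are made up for illustration:

```python
# Sketch of entropy "whitening": raw physical readings may be biased
# or correlated, but hashing the pool yields bits that are hard to
# predict without knowing the entire input.
import hashlib

def whiten(samples: bytes) -> bytes:
    """Hash a pool of raw entropy samples down to 32 output bytes."""
    return hashlib.sha256(samples).digest()

raw = bytes([127, 130, 126, 129, 131, 128])  # e.g. noisy ADC readings
print(len(whiten(raw)))  # 32
```

Note this is only as good as the physical entropy going in: hashing a predictable input gives a predictable (if scrambled-looking) output.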

    • I can see where this would be true on a large scale (ie a human), but what about animals that function mostly on instinct (insects, fish, etc)?

      I'll bet it's possible to create a cybernetic "animal" that functions on 95% instinct and 5% learning. The recursion problem could therefore be contained and studied.

      Such a thing won't be "HAL" or "C3PO" by any stretch of the imagination, but it'll be a start.
    • by DavidpFitz ( 136265 ) on Tuesday March 26, 2002 @10:22AM (#3228236) Homepage Journal
      "AI just won't work"

      Crikey, you figured that out after two semesters. I guess I wasted 4 years of my life doing a degree in it all then... I must never have cottoned on to how well expert systems such as Mycin [surrey.ac.uk] and Dendral [mit.edu] actually perform.

      You think programming is just the "intelligence of the programmer"? Guess again -- many people have AI systems running which program themselves, coming out with emergent behaviour which the programmer never expected.

      Do you really think that a person can simplify circuit boards to their simplest form by themselves? I thought not. I know that Julian Miller [bham.ac.uk] can't, but using his Cartesian Genetic Programming [bham.ac.uk] he's managed to write programs that do just that, thus proving that a computer program can ultimately be more than the sum of its external inputs.

    • Biological neurons may have been shown to grow new connections based on information learned, but programs can do much the same thing.

      People seem to think that there is something "magical" about the human brain but this need not be so.

      Critics of AI point to the most complex program and say, "see, it's mechanical -- given the same input it produces the same output." The problem with this argument is that we cannot make the same test on a human brain. I'd bet that if we could save the state of a human brain, run a series of tests, then reload the old state and run the tests again, we would begin to see the mechanism underlying our program.
    • "Biological neurons have been shown in the laboratory to grow new connections based on information learned. In a robot, what possible mechanism could guide such growth?"

      Self-reprogramming FPGAs, perhaps? Dedicated genetic algorithm circuits which evaluate the behavior of the rest of a chip and reprogram it? Why do you think this is so impossible? We may end up actually using biological processes for this "growth" anyway if/when we arrive at biological computing (DNA/molecular computing), etc. I fail to see what is so darn impossible about the process. It took evolution billions of years to produce us through random change... knowing this, I think we can definitely speed that process up a bit to create AI.
    • > Biological neurons have been shown in the laboratory to grow new connections based on information learned. [...] Programming is the only answer

      There are people who use this as an argument to prove that intelligent biological life must have been designed. So all we need for working AI is to play god.

      Alternatively, we just accept that the programmers' guiding is a more effective equivalent of the natural selection that led to biological life, and that the AI will be just as self-contained as biological life.
      After all, your brain wouldn't exist without your parents, and wouldn't work the same way without years of training. That doesn't make human intelligence unattainable.
    • by Yokaze ( 70883 ) on Tuesday March 26, 2002 @10:57AM (#3228476)
      A neural network can be simulated by an adjacency matrix and an activation function. The growing weights symbolize the growth of the dendrites.

      Problems:
      O(n^2)-structure
      Learning (Growing)

      Current learning algorithms include (among others):
      Various backpropagation algorithms, AFAIK not observed in biological systems. A fairly mathematical approach.
      Self-Organising Maps (SOM), especially Kohonen networks: a similar structure has been observed in the visual cortex.

      Neither algorithm includes a temporal component, although biological neurons rely heavily on temporal information; IIRC there are some neural networks out there that employ a temporal encoding.

      Of course, all existing networks rely heavily on the knowledge of the programmer, who tailors the system to the problems (and partly the other way around). Partly this is due to the prohibitively expensive cost of large neural networks, and partly nature does the same.
      Humans are pre-wired; so may AIs be.
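      A minimal sketch of that adjacency-matrix-plus-activation-function formulation (entirely illustrative; the sizes, weights, and the crude Hebbian-style update are mine, not from any particular published model):

```python
import math
import random

random.seed(0)
n = 4                                       # neurons
# Adjacency (weight) matrix: W[i][j] is the connection from neuron j to i.
# Note the O(n^2) storage cost pointed out above.
W = [[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]

def activation(x):
    return math.tanh(x)                     # smooth squashing function

state = [0.0] * n
inputs = [1.0, 0.0, 0.0, 0.0]               # external drive to neuron 0

# One update step: each neuron sums its weighted inputs, then squashes.
for _ in range(10):
    state = [activation(sum(W[i][j] * state[j] for j in range(n)) + inputs[i])
             for i in range(n)]

# A crude Hebbian-style weight update: strengthening co-active connections
# stands in for the "growing weights / dendrite growth" mentioned above.
rate = 0.01
for i in range(n):
    for j in range(n):
        W[i][j] += rate * state[i] * state[j]

print([round(s, 3) for s in state])
```

      Real learning rules (backpropagation, SOMs) differ substantially; this only shows the data structures involved.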

      Furthermore, it is quite interesting that an "AI" programmed to learn to articulate words made errors similar to those of a baby learning to speak.

      Have a look at Genghis; AFAIK the only programmed knowledge is: "contact with ground -> bad", "moving forward -> good", and how to learn.

      > In other words, the AI itself isn't self-contained, as it were.

      This somehow reminds me of an AI koan:

      In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

      "What are you doing?", asked Minsky.

      "I am training a randomly wired neural net to play Tic-Tac-Toe", Sussman replied.

      "Why is the net wired randomly?", asked Minsky.

      "I do not want it to have any preconceptions of how to play", Sussman said.

      Minsky then shut his eyes.

      "Why do you close your eyes?", Sussman asked his teacher.

      "So that the room will be empty."

      At that moment, Sussman was enlightened.
      • Help me out here. (Score:3, Interesting)

        by Decimal ( 154606 )
        "Why do you close your eyes?", Sussman asked his teacher.

        "So that the room will be empty."

        At that moment, Sussman was enlightened.


        I may seem a bit foolish here for asking, but what does this mean? I don't understand. Is it that Sussman learned to start with all 0s instead of random inputs? Or that cutting out all preconceptions is only counterproductive?
        • Re:Help me out here. (Score:3, Informative)

          by awaterl ( 85528 )
          For me, the koan evokes the realization that just as when I close my eyes the room does not become empty, when I randomly wire a neural network it does not become free of preconceptions: it just has random preconceptions.

          That is, it is impossible to free a system of preconceptions. By making parameters random rather than hand-picked, I am simply trading one set of preconceptions for another.

          Of course, if it is a true koan, it will probably evoke as many different thought-paths as it has readers. Hope the above helped, though.
  • by Wingchild ( 212447 ) <brian.kern@gmail.com> on Tuesday March 26, 2002 @09:47AM (#3228044)
    Back in the 50's, people dreamed feverishly of flying cars and robot maids, of amazing advances in science over the next decade. But what we're moving towards, ever so slowly, is more along the lines of "the kitchen that cooks meals by itself" - an integrated system where computers are so tightly woven into the construction of appliances that the appliances themselves become intelligent and teachable. (Programmable, teachable, use your own word or metaphor here.)

    The human element can't be ignored in favor of fully robotic solutions. People enjoy feeling involved in what it is they're doing. Personally I'm all for having an entire race of robot slaves that do all the work for everyone, leaving people free to create Art, Science, and Music (and giving *me* time to finish Final Fantasy 10).. but I don't see it happening any time soon.

    Flying cars would rock. Talking cars that remember your favorite radio stations, seat settings, A/C settings, and possibly directions to drive to your parents' house are far more likely.
    • Flying cars would suck. When I see the stupidity of the average driver these days, I cringe at the thought of putting them behind the yoke of a four-tonne flying missile.

      The only way around that would be to automate the guidance, and the first thing people would do is hack the systems so THE MAN wouldn't be able to tell them where to go.

      The rest of that sentence, remembering seat/A/C settings and so on, describes features that are here already.
  • Definition of a Robot:

    Your plastic pal who's fun to be with!

  • I'd agree with your contention that a DVD would be a welcome addition; last year some time I saw a documentary on robotic cockroaches - probably the self-same bugs referenced herein - and I was astonished by how such apparently complex behaviour could be achieved with so few rules. You've got to see them scurrying to believe them.

    As for the 'non-scaling' criticism: to quote Dogbert, 'Pah!' They do what they're supposed to do. I never criticised my Spectrum because it didn't have dolby sound; I wouldn't criticise my roaches because they don't write operas.
    • As for the 'non-scaling' criticism: to quote Dogbert, 'Pah!' They do what they're supposed to do. I never criticised my Spectrum because it didn't have dolby sound; I wouldn't criticise my roaches because they don't write operas.

      It is not a criticism of the robots in themselves, but of the methodology. Humans become capable of manipulating new ideas when they develop a symbology for modeling those ideas mentally. The various finite state automata that Rod Brooks has developed are useful in themselves, but unless the reviewer failed to mention it, they do not portend a new way of designing complex life-like systems. This is not because there is no merit to his ideas, but because his ideas are not new: FSAs are widely recognized as an excellent way of implementing real-time, limited-adaptability behavior. I used multiple software-implemented FSAs to control my easter-egg-hunting robot back in college years ago, and I definitely wasn't breaking any ground. He has simply applied them (very!) well to the tasks for which they are best suited: simple machines with limited behaviors.

      If he had provided a set of equations or even just a pseudo-algorithm for breaking down complex, adaptive behaviors into multiple interlinked AFSMs, he would have significantly advanced AI, but I saw no such evidence in this book review.

  • by 3ryon ( 415000 )
    Flesh and Machines


    Slashdot and Pr0n in one easy to swallow pill.

  • by SloppyElvis ( 450156 ) on Tuesday March 26, 2002 @09:50AM (#3228058)
    Think about it.

    Much of what you do each and every day occurs in spite of the ability I just asked of you. Your brain is not responsible for thinking about how to walk (at least not after you learn how). Your peripheral nervous system handles such actions.

    When humans create a robot in the fashion of Rod Brooks, they are training a system analogous to our own peripheral nervous system. Why force the machine to learn to walk when we can tell it how to walk from our own experience (knowledge of physics, etc.)?

    The exact implementation Brooks uses may not scale, but analogous programming options exist that could scale, and IMHO, approaches addressing immediate actions/reactions should be built into robots as described.

    From the interview it seems Brooks admits the need for serious processing power to reach the "next level", but shrewdly points out that spending all of your time thinking and not doing is not a good way to get anything done.

    If you can't walk and chew gum at the same time...
    • Mmm. This is the classic method of getting to where you want to be - you start with the basics which will demonstrate some concept, then you put another layer on top of that to demonstrate another concept, and pretty soon you're talking a serious piece of kit.

      OK, Brooks' robots are only hunting light. But if he plugs in more processing power to give them other inputs to the decision-making process (e.g. avoid water, seek other robots) then they start exhibiting pretty complex behaviour. I mean, the actions of a human when we're dying for a piss are pretty damn predictable! Sure it doesn't scale at 50-odd neurons, but up it to 500-odd neurons and it can start doing some interesting stuff.

      Grab.
  • by Dalaram ( 447015 )
    Part of human nature is to associate with what is closest to us. Think religion, beliefs, ideals. Why shouldn't this be applied to our development of machines? After all, what are machines and robots but the next stage of human interaction? In some respects, this is almost our playing god, creating man (robots) in our image. We are most comfortable with what we perceive to be like us. On a lighter note, think of the last time you were physically attracted to a chimpanzee. Organically similar, but not human. Creating humanoid robots is our way of asserting our power over our environment.
  • Robots ! (Score:1, Informative)

    by qurob ( 543434 )

    Robots have taken countless assembly/factory jobs

    Robots are supposed to kill us all anyway
  • by sniggly ( 216454 ) on Tuesday March 26, 2002 @09:55AM (#3228087) Journal
    Combine everything thats going on into the soup of the future: robotics, quantum technology, biotechnology, high speed wireless internet, satellite communications...

    I believe the robots are going to be us; except for advanced machinery in manufacturing, the "happening" thing will be the integration and interfacing of electronics and biocircuitry with ourselves. You will think, and your interface will retrieve data from storage attached to you.

    Electronics can monitor your bloodstream for diseases, lack of resources, and the like, and synthesize whatever is required. Good for anyone with a genetic defect or an illness. Good for your general health & wellbeing.

    The advantages are so enormous that these technologies will be used in that manner. You will probably want to have it. But you'll also realize that at that moment you are not only vulnerable to hackers who try to access your biosystems; those who create the hardware and software within you are also potentially able to upgrade software and firmware that has essentially become a part of your being.

    So who will control something that intimate? Open Source at least ensures that we will have insight into, if not control over, who we are developing into...
    • Have you seen Johnny Mnemonic one too many times?

      The human body is a biological system that responds to the various elements around it; it is an open system, affected by its surroundings, yet still separate from them. Unlike a computer, you can't simply add more memory or storage. The human state doesn't allow for it.

      There are instances of people with photographic memory going insane, because they recall every instance of every event perfectly. Their brains are not able to process such a large amount of data, and thus they lose their sanity. People can experience trauma from excessive sound, light, and various other effects. It's called sensory overload.

      While having a device attached to you that would allow you to have an extended memory or such would be awesome in theory, think of the implications: it's probably not feasible. Were it to happen, the human body wouldn't be able to power the devices efficiently: our bodies produce only enough energy each sleep cycle for the next day, and not enough for electronics. The addition of those electronics would strain and tire the body, to the point where people would start sleeping more and more each night, thus canceling any perceived benefit of the biotech enhancements. That is, of course, unless an alternative power source were created that would work in harmony with the human body. (Deus Ex is coming to mind as an example of this scenario, actually.)
  • why is it... (Score:4, Interesting)

    by oo7tushar ( 311912 ) <slash.@tushar.cx> on Tuesday March 26, 2002 @09:58AM (#3228105) Homepage
    That whenever we think of AI we think that it must think like a human. On the contrary, if it thinks like anything at all that's living then it's intelligent.

    Intelligence:
    The capacity to acquire and apply knowledge.
    The faculty of thought and reason.

    According to the above, AFSMs are the exact principle behind intelligence. Think about how any analysis of the world happens. We don't consider the entire world when we try to catch a ball; we consider the position of the ball and where it will be. We don't take the position of a bird in relation to the ball, or something far away; all that matters is the ball.

    Slightly more complex would be collision detection: is there anything close to me? Yes or no... that easy. You'd have a range that it's OK for an object to be in, a range where we should slow down, and a range where we fire thrusters to stop.

    Simple actions put together equal a complex life form.
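    The range-based reaction described above can be sketched as a few direct sense-to-action rules, with no world model in between (the thresholds and action names are made up for illustration):

```python
# Three distance bands, each mapped directly to an action.
SAFE_RANGE = 10.0    # beyond this, cruise normally
SLOW_RANGE = 4.0     # inside this, slow down
STOP_RANGE = 1.5     # inside this, fire thrusters to stop

def react(distance_to_obstacle: float) -> str:
    """Pick an action from the current sensor reading alone."""
    if distance_to_obstacle <= STOP_RANGE:
        return "fire_thrusters"
    if distance_to_obstacle <= SLOW_RANGE:
        return "slow_down"
    if distance_to_obstacle <= SAFE_RANGE:
        return "cruise_alert"
    return "cruise"

for d in (12.0, 6.0, 2.0, 0.5):
    print(d, react(d))   # cruise, cruise_alert, slow_down, fire_thrusters
```

    Stack enough of these simple reactive rules in parallel and the aggregate behaviour starts to look surprisingly lifelike.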

    • It seems to me that there are a lot of tasks we know how to do in the AFSM sort of way, such as following our regular commutes, walking, catching balls, typing, and so forth. These are the tasks which do not normally impact our consciousness; we know that we're doing them, but we're remarkably bad at explaining how we do them.

      These models don't have an internal representation of the world, and for good reason: the world itself is the best representation you could ever want. But it isn't sufficient for conscious thought, because that depends on measuring the world as you imagine it, not as you can perceive it.

      It's not so much that the problems associated with consciousness are harder than the problems involving subconscious behavior; the latter turn out to be essentially impossible to solve using general intelligence (either by AI researchers or by humans with specific brain damage). But the problems associated with consciousness are almost certainly equally difficult to solve with AFSMs. It's certainly possible, but it'd be like trying to write software by arranging electrons.

      Of course, the interesting stuff happens when both types of systems work together. Read Phantoms in the Brain by VS Ramachandran for a lot of examples, or consider that, when you picture a scene you know well, the visual areas of your brain are actually affected, and your conscious thought can alter your perception of space (like looking at an MC Escher picture).

      Consider the non-AI case of graphics. Hardware is great for digital camera processing, and you wouldn't want to write any of that in software. Software is great for photo manipulation, and you wouldn't want to write it in hardware. And there are a lot of really interesting things you can't do with either of them alone.
  • ...with a manically depressed robot?

    What are you supposed to do if you ARE a manically depressed robot?
  • I know I'm going to get flamed to oblivion, and maybe I deserve it, but... I don't know about Rod Brooks. He has done some cool things, sure. (Call me cynical, but I wonder what I could accomplish, given his budget.) I build robots. Nothing spectacular, I'm just a hobbyist, but I like to think I have some handle on the realities of it. And Brooks, in my (probably useless ;) opinion, is just out there. Idealism is good, but so much of his work just appears frivolous to me. It is depressing to me that this guy has these ridiculous amounts of money (don't tell me about MIT budget downsizing), and he is using it to build robots that smile at you, and beefed up digital versions of BEAM robots. While I am sitting here trying to scrape up $30 for an ultrasonic rangefinder for my latest critter.

    Sure, life is unfair. Wah wah wah. I just always go nuts when I hear anything by this guy. "One day we'll sell millions of tiny robots in a jar, and they'll clean your TV screen." "Robots are going to change the world." I don't see it, Rod, much as I'd genuinely love to. We need to stay grounded at least a little bit.

    Thanks for putting up with my whining. ;) Let the flaming begin.

    • You're right. Brooks has created a few glorified toys and made a name for himself. MIT's Technology Review refers to him as "The Lord of the Robots". It's all PR. The AI community has failed on its promise to create a human-level artificial intelligence and they've been at it for fifty years! So what do we do? We throw more money at them. Makes sense. Not! We need new ideas and new blood in AI research. The tax-paying public should not be forced to reward failure.
    • To some extent, I agree. I've met Brooks, and I went out to his lab in the early insect robot days. He demonstrated how much could be accomplished with cooperating reactive controllers. But then came trouble.

      With no world model at all, you're limited to insect-level behavior. This works for insects because they're small and light. If a feeler hits something, that's OK. Larger creatures need some minimal prediction of the future just to put the feet in reasonable places and not bump into obstacles. Once a creature gets fast enough and large enough that inertia matters, it needs a control system with some predictive power.

      What's needed is the "lizard brain", or limbic system, which does that job for lizards, birds and mammals. Instead of trying to crack that problem, Brooks tried to go all the way to human-level AI in one step, with his Cog project. [mit.edu] He didn't claim to know how to solve the problem; he just planned to throw about 30 MIT PhD theses at it and hope. That didn't work.

      I once asked him why he didn't try for a robot mouse, which seemed a reachable goal. He said "Because I don't want to go down in history as the person who built the world's greatest robot mouse". That's where he went wrong. This problem is too big to solve in one big step.

      I think we'll see a solution to lizard brain AI in the next few years, but it will come from the video game community, not academia.

      • Great post, thanks. :) The "lizard brain" idea is interesting, I hadn't heard it put that way. It really is a tough thing to connect the higher level with the lower level.
        I think we'll see a solution to lizard brain AI in the next few years, but it will come from the video game community, not academia
        On the one hand I agree: the old ways of thinking don't seem to be doing it. But on the other hand, I'm an undergrad in Cognitive Science, so I like to at least pretend I'll have something to do with it. ;) (But then, I'd never really consider myself "academia", so maybe there's hope for me. ;)
      • This problem is too big to solve in one big step.

        The incremental approach is precisely why we don't yet have a HAL-like intelligent machine. That's the approach that's been used up to now by the GOFAI community and it has failed miserably. If the goal of an AI researcher is to understand human cognition, the problem is indeed too big. The interconnectedness of human cognition is so astronomically complex as to be intractable to formal solutions. This problem is too big for any approach, incremental or otherwise. Therefore the goal of the sensible AI researcher is not to develop a theory of cognition, but to discover the fundamental principles that govern the emergence of intelligence. Let's get the damn thing to learn first. We can worry about what it's thinking later. We need an overarching theory of the brain. We don't need limited, isolated bits of cognition.
        • Let's get the damn thing to learn first.

          Enthusiasm for that approach has waned somewhat. Remember "connectionism"? Simply throwing some hill-climbing algorithm at very hard problems doesn't work very well, as the neural net and genetic algorithm people have discovered. The problem isn't lack of CPU time, either; it's not like there are algorithms that are really good, but slow. The real problem with hill-climbing is that much of the answer is encoded into the evaluation function. Where the evaluation function is ill-defined or noisy, hill-climbing gets lost.

          It's reasonably clear now that "learning" isn't merely rule acquisition (see the Cyc debacle) or hill-climbing. We need different primitives.
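          A quick sketch of the point about noisy evaluation functions (a toy 1-D objective, entirely illustrative): the same greedy climber that finds a smooth peak gets lost when the scores are noisy, because one lucky noisy evaluation ratchets up the running "best" score and blocks further moves.

```python
import random

random.seed(1)

def hill_climb(evaluate, steps=2000, step_size=0.1):
    """Greedy 1-D hill-climbing: keep a move only if it scores better."""
    x = 5.0
    best = evaluate(x)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        score = evaluate(candidate)
        if score > best:
            x, best = candidate, score
    return x

clean = lambda x: -(x - 2.0) ** 2                       # smooth peak at x = 2
noisy = lambda x: -(x - 2.0) ** 2 + random.gauss(0, 5)  # same peak, noisy scores

print("clean objective:", round(hill_climb(clean), 2))  # lands near 2.0
print("noisy objective:", round(hill_climb(noisy), 2))  # often stuck far from 2
```

          The algorithm isn't slow; it's that the answer it finds is only as good as the evaluation function it climbs.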

          • Remember "connectionism"?

            The problem with connectionism is that it came from the same GOFAI crowd that gave us symbol manipulation and knowledge representation. Those guys made it a point to ignore every significant advance that happened in neurobiology and psychology over the last 100 years. ANNs are a joke. They have as much to do with animal intelligence as an alpha-beta tree-searching algorithm. Temporal, spiking neural networks are where it's at in the new AI, AKA computational neuroscience. Everything else is a joke. Like I said, we need new blood in AI. The old school has got to go.
  • by zapfie ( 560589 ) on Tuesday March 26, 2002 @10:15AM (#3228194)
    Or more importantly: Why am I still vacuuming the floors and mowing the lawn by myself?

    Whether or not the book actually discusses that, it's a point that kind of disturbs me. Honestly, vacuuming floors and mowing the lawn are not that hard. Having to look after yourself also gives you a sense of responsibility, IMHO. I'm not sure I'd want a robot doing these things for me.

    While tools have become more and more comprehensive in helping humans solve tasks (and humans have come to depend more on those tools), humans are still usually the ones directly in control. You push or steer the lawnmower, you move the vacuum where you want to clean, etc. If I had a robot do these things, all of a sudden it's the robot deciding when and how these things are done, and not me. On the other hand, there are also people who may not have the time or ability to take care of chores like these themselves, and having a robot do them might mean the difference between still being able to live at home, and having to live in a nursing home.
    • > Or more importantly: Why am I still vacuuming
      > the floors and mowing the lawn by myself?


      If taken literally, the question means: "Why aren't I being helped when I do these chores?" The answer: you already are. You're not cutting the lawn with shears, are you? You're not using a hand-crank to operate your self-propelled vacuum, are you?

      Whether or not the book actually discusses that, it's a point that kind of disturbs me. Honestly, vacuuming floors and mowing the lawn are not that hard. Having to look after yourself also gives you a sense of responsibility, IMHO. I'm not sure I'd want a robot doing these things for me.

      While tools have become more and more comprehensive in helping humans solve tasks (and humans have come to depend more on those tools), humans are still usually the ones directly in control. You push or steer the lawnmower, you move the vacuum where you want to clean, etc. If I had a robot do these things, all of a sudden it's the robot deciding when and how these things are done, and not me. On the other hand, there are also people who may not have the time or ability to take care of chores like these themselves, and having a robot do them might mean the difference between still being able to live at home, and having to live in a nursing home.


      I see two possible outcomes from sentient robots further easing our workload the same way conventional machinery does today. One, we can devote more of our time to worthwhile activities, such as intellectual pursuits, helping others, or getting exercise through sports or nature. The other is where you sit on the sofa and watch cable TV until your brain dribbles out your ears. Might as well do something else; you just lost your job to a machine, right?

      Hmm, I just realized I'm wasting my free time right now, and I owe this opportunity to technology. Well Slashdot reader, how are you spending your life with the free time conventional machinery has already given you? Is there life outside of Slashdot? (It's too late for me, save yourself! ;)
  • One of the robots fakes human interaction by tracking fast motion and flesh colored pixels. Brooks marvels at how a few simple rules can produce a machine that is remarkably life-like. If you're not sure, they have video tapes of lab visitors holding conversations with the machine, who apparently takes part in the conversation with the patient interest of a well-bred host.


    I listened to Brooks present the semi-academic version of his talk at Duke. The really fascinating thing about this robot/experiment is that making the robot react to simple cues from the human makes the robot act much more intelligent than it actually is. It may be easier to make a robot that behaves intelligently around humans than it is to make one that intelligently explores Mars.

    By giving the robot the ability to recognize eyes and where the human is looking, it can pick up cues as to what aspects of the environment are important. By making it maintain a proper conversational distance from the human, it prevents collisions and makes talking to it much more comfortable.

    Because the robot responds to its environment, the environment shapes the robot's behavior. If that environment is alive and intelligent, the robot's behavior becomes more intelligent than it would normally be. We give off hundreds of little cues that allow us to respond intelligently to each other, and Brooks' work has opened the door to letting robots bootstrap themselves to a higher level of interaction.
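    As a toy illustration of the "flesh colored pixels" cue: a crude RGB threshold rule can flag skin-like pixels surprisingly well. The bounds below are a generic heuristic for illustration, not what Brooks' robot actually uses.

```python
def looks_like_skin(r: int, g: int, b: int) -> bool:
    """Crude RGB skin test: reddish, reasonably bright, red dominant."""
    return (r > 95 and g > 40 and b > 20
            and r > g and r > b
            and (r - min(g, b)) > 15)

print(looks_like_skin(200, 140, 120))  # True: a plausible skin tone
print(looks_like_skin(60, 180, 75))    # False: green
```

    Combine a mask like this with a fast-motion detector and you already have enough signal to point a camera at a face, which is much of what makes the robot feel attentive.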
  • by TomRC ( 231027 ) on Tuesday March 26, 2002 @10:32AM (#3228323)
    Penrose's lawn mower robot doesn't mow his lawn properly because he forgot to design it to WANT to mow his lawn properly.

    Seriously! To properly want something, you need a means to know that that desire is or is not satisfied, and a means to move closer to achieving your desire - just like Genghis' leg muscles.

    His mower robot needs a laser scanner to light up stalks that stick up too high, a sensor to detect stalks being lit up within maybe 10 feet, a desire to go to spots where that light is seen, and a desire to wander and seek out lit spots if it doesn't see any nearby.

    A bit more is needed to handle edge conditions (literally the edges of the lawn and objects in it). It needs the ability to learn where it can't go, and the ability to slowly forget that learning so if it makes a mistake about not being able to get somewhere it can eventually correct itself.
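    That want-driven control loop can be sketched in a few lines (every name and threshold here is illustrative, not from the book or the design above): approach lit spots, wander otherwise, and let learned "can't go there" memories slowly fade.

```python
FORGET_RATE = 0.01          # how fast "can't go there" memories fade

blocked = {}                # position -> confidence that it is unreachable

def choose_action(lit_spot_seen: bool, position: tuple) -> str:
    # Slowly forget learned no-go zones, so a mistaken "unreachable"
    # judgment eventually corrects itself.
    for pos in list(blocked):
        blocked[pos] -= FORGET_RATE
        if blocked[pos] <= 0:
            del blocked[pos]

    if blocked.get(position, 0) > 0.5:
        return "avoid"              # learned edge condition
    if lit_spot_seen:
        return "approach_light"     # the built-in "desire" to mow tall grass
    return "wander"                 # roam until a lit spot is seen

print(choose_action(True, (3, 4)))   # approach_light
blocked[(3, 4)] = 1.0                # bumped into a fence here
print(choose_action(True, (3, 4)))   # avoid
```

    Note that the "desire" is nothing mystical: it's a sensor, a comparison, and an action, exactly as with Genghis' legs.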
  • ...but APPROXIMATIONS. Of course, this only applies to humanoid robots. Anyone who claims that robots (in general) are portraits of humans is severely deluded.

    Take an assembly-line robot, for example. It so happens that a human configuration for an arm (A fairly mobile shoulder, a somewhat limited elbow, a fully-functional wrist, and some sort of manipulator at the end) is very useful. With a system like that you can reach any part of a design. Could you add another joint and achieve more flexibility? Or perhaps give the elbow more degrees of freedom? Naturally, and people have in fact done these things. However, there are a number of good reasons to mimic human design.

    First of all, we are innately familiar with the operation of an arm. We have no trouble visualizing just how an arm like our own would move around something - For those who are good with math, this can translate into an easy understanding of the math involved.

    Second, lots more work has gone into human-similar models. This means you can draw upon the accumulated design experience of hundreds and thousands of other people even inside the field of robotics.

    Finally: Adding more joints/making more capable joints costs more money. In most systems which need to be versatile, the human-mimic system is the most efficient from a cost:capability standpoint.

    Robots are like humans where they need to be. When we can make them identical to humans, no doubt some will, while others will feel that that is some sort of travesty. We all know that the big application in robots is the self-mobile realdoll, though, and that's an attempt to make something as much like a person as possible.

    You might as well argue that giving birth is creating a portrait, since there is such variation in humanity - And there is still MORE variation between robots.

  • by Wreck ( 12457 ) on Tuesday March 26, 2002 @11:24AM (#3228646) Homepage
    One of the things I recall about Brooks' work from grad school was this idea: that the world is its own best model. What that means is that instead of trying to model the world on a computer and then compute what to do based on the model, you should just do stuff and then see how the input from sensors changes. By acting, you interact directly with the "model" -- the world -- and therefore you can cut down on the computation enormously.

    I have the feeling that this notion works well for simple robots, including lower life forms such as insects. Like Genghis, they simply do "simple" stuff based on simple neural computers that hardly warrant the name. But where Brooks' work falls short, as you can see in the review, is where neurons are clumped into serious computers that do model the world. The worst offenders, of course, are humans. The problem is that we have no idea how to wire a robot to do that, and a lot of the behaviors we really want from robots rely on it.

    AI still has a long, hard road ahead of it. But we will succeed, eventually, simply by virtue of reverse engineering if nothing else.

  • I studied robotics and Brooks' work for a couple of years in graduate school. I built several robots using some of his ideas with some pretty spectacular results (I was impressed anyway) considering that they were able to navigate around and perform some very simple tasks using less code than your average mouse driver. Brooks turned the whole notion of robotic intelligence upside down and started from the bottom up, keeping things simple.

    It's pretty striking to me how different an engineer's life can be depending on his area of interest. There are some topics where we are essentially on the "right track": some genius has made the initial breakthrough in thinking, and steady progress can be made by moderately intelligent people such as myself by following the premise to its logical conclusions. While I was studying robotics, the Web was really taking off; ideas spread like wildfire and advances are still being made fairly rapidly.

    Other areas of study stagnate for years, with randomly dispersed periods of growth and euphoria followed by periods of disappointment and disillusionment. In AI/machine intelligence, we have had several small breakthroughs that allow us to progress a little before hitting the brick wall again. We're all waiting for someone to make the leap in thought that will allow us to progress.

    My opinion now is that we have some fairly specialized approaches that work well in specific circumstances but we are all essentially still on the wrong track.

    Rodney Brooks caused quite a bit of excitement in the early '90s with Genghis and some of his other robots, but it wasn't the breakthrough that we are all waiting for.

    From what I understand, if you have read his papers and publications through the years then this book doesn't offer much new information. If you aren't familiar with his work and are interested in the subject then definitely read the book. Even if Brooks doesn't turn out to be the genius who makes the breakthrough, his work has definitely contributed to the field and brings us a little closer.

    In the meantime I guess I'll just have to wait for the big breakthrough by building some more little robots to keep me busy. I've been thinking about a little robot with a single-board Linux computer for a controller and a WiFi adapter. That way I can sit at my desk or laptop, watch what is going on, tune code, and develop behaviors from the comfort of my couch instead of having to track the little bugger down and stick a serial cable in its ass to upload new programs and download data. I was also thinking that I could then give real-time performance feedback and let some genetic algorithms and/or neural networks tune the parameters. That should keep me preoccupied for a while while the geniuses work on the really heady stuff.
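    The tuning loop I have in mind would look something like this hill-climbing flavor of a genetic algorithm (the fitness function below is a toy stand-in; in a real run it would score the robot's observed behavior over the WiFi link):

```python
import random

random.seed(1)  # deterministic for the example

def mutate(params, scale=0.1):
    """The 'genetic' variation: perturb each parameter slightly."""
    return [p + random.gauss(0.0, scale) for p in params]

def tune(fitness, initial, generations=200, population=20):
    """Keep the best parameter vector seen so far and breed mutated
    copies of it each generation -- a simple (1+N) evolution strategy."""
    best, best_score = initial, fitness(initial)
    for _ in range(generations):
        for candidate in (mutate(best) for _ in range(population)):
            score = fitness(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best

# Toy fitness: pretend the ideal behavior parameters are (0.5, -0.2).
def fitness(params):
    return -((params[0] - 0.5) ** 2 + (params[1] + 0.2) ** 2)

tuned = tune(fitness, [0.0, 0.0])
# tuned lands very close to [0.5, -0.2]
```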

    If you are one of those geniuses, quit screwing around reading /. and get back to work. Let me know when you make the breakthrough, I'll buy you a six pack.

    • But do you really need the physical stuff to do AI development?

      Building physical machines seems to be just avoiding the main problem with robotics - a decent AI. If robotics isn't looking for a decent AI then I don't see any problems: the mechanical and control issues aren't such difficult problems.

      If AI can really be done on your single-board computer, then I figure it can be done by itself in a virtual world on a home PC. I don't see much of a difference. Plus there are certain advantages to virtual environments.

      Once you've worked out most of the bugs, you can port it to the physical world.

      Personally I don't find the roaches Brooks builds very interesting. They're interesting from a control perspective, but not from an AI perspective.

      Just the other day I was feeding two geckos (not pets - they just hang around the house). One ran out and grabbed the food. The other just wouldn't come out from its nook. I flicked a piece of food in, and it ate it. Once it finished, it was severely tempted to move out - it moved forward a bit. But then it was still too afraid/cautious to go out. I'm sure it knows there is a big creature out there. And I guess you can imagine what I'm talking about.

      As far as I know the more intelligent creatures know the difference between a feeder and food - you hand out some food, they bite the food and not your hand. If they are afraid of you, they try to get as close to the food whilst avoiding your reach. There is quite a degree of intelligence there.

      A decent AI has to simulate various futures and choose.

      But a good independent AI will have feelings. Some brain-damaged humans with disconnected emotions find it hard to decide what to do, though they are still intelligent. When an AI starts trying to model itself, things could get interesting: in future A I will feel X, in future B I will feel Y, therefore I will do this.
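      A crude sketch of "simulate various futures and choose" (everything here is my own illustration, with a utility function standing in for the "feelings"):

```python
def simulate(state, action):
    """A hypothetical one-step world model: predict the next state."""
    return state + action

def utility(state):
    """How the agent would 'feel' in a state -- it wants to be at 10."""
    return -abs(state - 10)

def choose(state, actions, depth=3):
    """Roll each action forward a few steps and pick the one whose best
    simulated future feels best: 'in future A I will feel X...'"""
    if depth == 0:
        return None, utility(state)
    best_action, best_value = None, float("-inf")
    for action in actions:
        _, value = choose(simulate(state, action), actions, depth - 1)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

action, _ = choose(0, [-1, 0, 3])
# from state 0, stepping by 3 gets closest to 10, so the agent picks 3
```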

      Cheerio,
      Link.
  • by Phrogz ( 43803 ) <!@phrogz.net> on Tuesday March 26, 2002 @12:35PM (#3229192) Homepage

    A great movie! I was the web designer who made the official website [sonyclassics.com] for the movie (hey, be nice, it was done a LONG time ago) and so got to see the movie before it came out. I watched it 3 times, and made others come watch it. It's so very random and disconnected, and then you start to just see it all coming together.

    Very good movie, and Rodney Brooks is fun to watch. I highly suggest you rent it... just be prepared to be barraged with non sequitur scene after non sequitur scene, without a plot but with four intermixed lives revealed.

  • Unlike the people who do all those jobs now. If I could only get a robot to perform minimally sufficient babysitting....that would be the cat's ass!
  • | Brooks, on the other hand, is sure that these machines are on the
    | right track. In a sense, he makes it easier for his robots to catch up
    | with humans by lowering the bar. On the back of the book, Brooks
    | ladles out the schmaltz and proclaims, "We are machines, as are our
    | spouses, our children and our dogs... I believe myself and my children
    | all to be mere machines." That is, we're all just a slightly more
    | involved collection of simple neurons that don't do much more than the
    | balance mechanism of Genghis. You may think that you're deeply in love
    | with the City of Florence, the ideal of democratic discourse, that
    | raven-haired beauty three rows up, puppy dogs, or rainy nights cuddled
    | under warm blankets, but according to the Brooks paradigm, you're just
    | a bunch of AFSMs passing numbers back and forth.

    in combating the concept of free will. The germs of all the relevant
    arguments are to be found as early as Spinoza. All that he brought forward
    in clear and simple language against the idea of freedom has since been
    repeated times without number, but as a rule enveloped in the most
    hair-splitting theoretical doctrines, so that it is difficult to recognize
    the straightforward train of thought which is all that matters. Spinoza
    writes in a letter of October or November, 1674:

    I call a thing free which exists and acts from the pure necessity
    of its nature, and I call that unfree, of which the being and
    action are precisely and fixedly determined by something else.
    Thus, for example, God, though necessary, is free because he
    exists only through the necessity of his own nature. Similarly,
    God cognizes himself and all else freely, because it follows
    solely from the necessity of his nature that he cognizes all. You
    see, therefore, that for me freedom consists not in free decision,
    but in free necessity.

    But let us come down to created things which are all
    determined by external causes to exist and to act in a fixed and
    definite manner. To perceive this more clearly, let us imagine
    a perfectly simple case. A stone, for example, receives from an
    external cause acting upon it a certain quantity of motion, by
    reason of which it necessarily continues to move, after the
    impact of the external cause has ceased. The continued motion
    of the stone is due to compulsion, not to the necessity of its
    own nature, because it requires to be defined by the thrust of
    an external cause. What is true here for the stone is true also
    for every other particular thing, however complicated and
    many-sided it may be, namely, that everything is necessarily
    determined by external causes to exist and to act in a fixed and
    definite manner.

    Now, please, suppose that this stone during its motion thinks and
    knows that it is striving to the best of its ability to continue in
    motion. This stone, which is conscious only of its striving and is
    by no means indifferent, will believe that it is absolutely free, and
    that it continues in motion for no other reason than its own will to
    continue. But this is just the human freedom that everybody claims
    to possess and which consists in nothing but this, that men are
    conscious of their desires, but ignorant of the causes by which they
    are determined. Thus the child believes that he desires milk of
    his own free will, the angry boy regards his desire for vengeance
    as free, and the coward his desire for flight. Again, the drunken
    man believes that he says of his own free will what, sober
    again, he would fain have left unsaid, and as this prejudice is
    innate in all men, it is difficult to free oneself from it. For,
    although experience teaches us often enough that man least of
    all can temper his desires, and that, moved by conflicting passions,
    he sees the better and pursues the worse, yet he considers
    himself free because there are some things which he desires
    less strongly, and some desires which he can easily inhibit
    through the recollection of something else which it is often
    possible to recall.

    Because this view is so clearly and definitely expressed it is easy to
    detect the fundamental error that it contains. The same necessity by which
    a stone makes a definite movement as the result of an impact, is said to
    compel a man to carry out an action when impelled thereto by any reason.
    It is only because man is conscious of his action that he thinks himself
    to be its originator. But in doing so he overlooks the fact that he is
    driven by a cause which he cannot help obeying. The error in this train of
    thought is soon discovered. Spinoza, and all who think like him, overlook
    the fact that man not only is conscious of his action, but also may become
    conscious of the causes which guide him. Nobody will deny that the child
    is unfree when he desires milk, or the drunken man when he says things
    which he later regrets. Neither knows anything of the causes, working in
    the depths of their organisms, which exercise irresistible control over
    them. But is it justifiable to lump together actions of this kind with
    those in which a man is conscious not only of his actions but also of the
    reasons which cause him to act? Are the actions of men really all of one
    kind? Should the act of a soldier on the field of battle, of the
    scientific researcher in his laboratory, of the statesman in the most
    complicated diplomatic negotiations, be placed scientifically on the same
    level with that of the child when it desires milk? It is no doubt true
    that it is best to seek the solution of a problem where the conditions are
    simplest. But inability to discriminate has before now caused endless
    confusion. There is, after all, a profound difference between knowing why
    I am acting and not knowing it. At first sight this seems a self-evident
    truth. And yet the opponents of freedom never ask themselves whether a
    motive of action which I recognize and see through, is to be regarded as
    compulsory for me in the same sense as the organic process which causes
    the child to cry for milk...

    (Rudolf Steiner, The Philosophy of Freedom [elib.com], Chapter 1, 1895)

    Materialism can never offer a satisfactory explanation of the world. For
    every attempt at an explanation must begin with the formation of thoughts
    about the phenomena of the world. Materialism thus begins with the thought
    of matter or material processes. But, in doing so, it is already
    confronted by two different sets of facts: the material world, and the
    thoughts about it. The materialist seeks to make these latter intelligible
    by regarding them as purely material processes. He believes that thinking
    takes place in the brain, much in the same way that digestion takes place
    in the animal organs. Just as he attributes mechanical and organic effects
    to matter, so he credits matter in certain circumstances with the capacity
    to think. He overlooks that, in doing so, he is merely shifting the
    problem from one place to another. He ascribes the power of thinking to
    matter instead of to himself. And thus he is back again at his starting
    point. How does matter come to think about its own nature? Why is it not
    simply satisfied with itself and content just to exist? The materialist
    has turned his attention away from the definite subject, his own I, and
    has arrived at an image of something quite vague and indefinite. Here the
    old riddle meets him again. The materialistic conception cannot solve the
    problem; it can only shift it from one place to another.

    (Ibid, Chapter 2)
