Vehicles: Experiments in Synthetic Psychology

bbagnall writes "Vehicles, by Valentino Braitenberg, presents a different way of thinking about thinking, one tied more closely to sensing and acting than to long, detailed calculations. In many ways it's similar to the strategy of robot programming advocated by Rodney Brooks; however, Braitenberg's ideas came first, so he probably deserves more recognition for this train of thought than the much more publicized Brooks." The book is not new (first published in 1986 and now in reprint), but relevance trumps novelty. Read on for the rest of the review.
Vehicles: Experiments in Synthetic Psychology
author: Valentino Braitenberg
pages: 168
publisher: MIT Press
rating: 8/10
reviewer: Brian Bagnall
ISBN: 0262521121
summary: Profound, easy-to-read theories about intelligence in robots.

Valentino Braitenberg has written one of the cleanest books on robot behavior ever published. It is apparent he wrote exactly what he wanted; no more, no less. The total size of this book is 152 pages, but that seems to be exactly the proper size for the topic he has chosen. Other authors (or editors) would probably say that's not enough pages. It has to be 250, minimum! 400 is better! Not Braitenberg. Vehicles has the raw ideas of a 400-page book. In fact, if you take the proper amount of time to ponder each idea it might even take as long as a 400-page book to get through.

This book contains descriptions of various robots, which Braitenberg calls vehicles since they all use wheels for mobility. They start off simple, then gradually become more complex with each chapter, each new robot being an evolutionary step up from the previous one. In fact, rather than starting with "Chapter 1," Braitenberg starts with "Vehicle 1," and so on. By Vehicle 14 these robots could hardly be said to differ from actual living creatures in the way they behave (though Vehicle 6 describes self-reproducing robots, which is currently beyond our ability to duplicate).

Each new vehicle focuses on an animal behavior (moving, aggression, fear, love) and how it can be created in a mechanical vehicle. Braitenberg has a rare mind that can think up original, non-intuitive ideas backed by logic. He also has the ability to present them well. There are a few penalties to Braitenberg's minimalist approach, however. Plain, minimal language can be a bit boring at times, stripping the book of character. Sometimes I like big words and clever turns of phrase that make my mind work, such as the writings of Douglas R. Hofstadter.

How minimal is it? Vehicle 1 contains two pages of text and one page for a diagram. I can just imagine the editor receiving chapter 1 from Braitenberg and saying, "Where's the rest of it?" But it is the perfect length for the simple robot it describes. Vehicle 2 is two pages, plus two pages for two diagrams, and so on. Honestly, for the first four chapters a 12-year-old could read this book and get the same from it as a university professor. His minimalism is admirable; however, at times it can feel maddeningly incomplete.

Vehicle 5 (logic) begins by explaining a system of inhibitors that can build a thinking machine. What he is really explaining is the basis for a neural net; however, he attempts to do it in five pages. Are five pages enough to explain a neural net? Unfortunately, no. The seemingly simple approach actually leaves out vital parts of the explanation, which prevents complete understanding. More description in this chapter would be incredibly helpful. He doesn't talk enough about how the "pulses" given to the neural network gates add up. Is there a cumulative effect going on? After a one-paragraph explanation he shows two examples and describes what they do, but unfortunately he doesn't explain them enough for me to understand the mechanism. Thankfully instances like this are rare, and Vehicle 5 was the only description lacking.
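Braitenberg doesn't pin the mechanism down formally, but a common reading of these gates is the classic McCulloch-Pitts threshold unit: excitatory pulses arriving in the same time step add up, and a single active inhibitory input vetoes firing. A minimal sketch under that reading (the function names are mine, not the book's):

```python
def threshold_unit(excitatory, inhibitory, threshold):
    """Fire (return 1) iff enough excitatory pulses arrive in one time
    step and no inhibitory input is active. In this reading, pulses add
    up within a step but do not accumulate across steps."""
    if any(inhibitory):              # one inhibitory pulse vetoes firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Two of the logic gates such units can embody:
def and_gate(a, b):
    # Fires only when both inputs pulse in the same step.
    return threshold_unit([a, b], [], threshold=2)

def not_gate(a):
    # A constant "bias" pulse that the input can inhibit.
    return threshold_unit([1], [a], threshold=1)
```

Under this reading there is no cumulative effect across time steps; whether the pulses should instead accumulate (a leaky integrator) is exactly the kind of detail the chapter leaves open.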

Vehicle 6 describes chance and the role it plays in natural selection. He describes chance as "a source of intelligence that is much more powerful than any engineering mind." Never before have I directly thought of natural selection as being intelligent, but once Braitenberg said it, it sunk in that, Yes, natural selection is intelligent; much more intelligent than any human who ever lived. It is the most skilled engineer ever, making machines of unbelievable complexity and ability. And this "intelligence" has no form, no body. It has always been around since life began and it will always be around until the universe ceases to exist. It is a process; an invisible concept. And yet it is more intelligent than any human.

Artificial Intelligence authors often state the importance of language and symbols, but one can't help but notice that animals seem to do fine without language. And aren't animals intelligent too? He demonstrates that we always assume that because an animal reacts a certain way towards an object, it must store a symbolic representation of that object. That seems reasonable, but Braitenberg demonstrates you can get what appears to be symbolic thought when in fact internally there is no symbol stored -- just electronic paths. It causes one to rethink some well-entrenched ideas about AI. What about meditation? I know that when I'm in a meditative state (not thinking or using language) I can perform some actions like sweeping, making food, and walking. So just how important are symbols? Is there a limit to the thoughts that can occur without symbols? I don't think this demolishes the importance of symbols -- likely they are needed to create new ideas -- but they might have their place, one less central than we generally suppose.

At the heart of each vehicle are the pathways that the wires make as they connect sensors to motors. The robots in the first two chapters consist of a few sensors, a few motors, and a few wires connecting them. There are no CPUs in any of the robots, except when the wire connections become so complex, embodying logic, that they effectively become CPUs themselves. The later chapters get into concepts that would not be as easy to replicate in actual robots, and rely a little more on speculation than on hard fact. He addresses such difficult topics as getting ideas and having trains of thought. Most of the robots, up to perhaps Vehicle 9 (excluding the evolutionary vehicle), could likely be built in reality. With the recent advent of Lego Mindstorms, the perfect canvas exists to create these types of simple robots, and a programming environment like leJOS Java would make it possible to simulate the wiring described in the book. Maybe someone will eventually recreate the Vehicles in the book using these tools.
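The wiring is simple enough to simulate without any hardware. Here is a sketch of Vehicle 2's two variants, assuming only what the book requires (more light, stronger signal); the inverse-square falloff and all names are my own illustrative choices:

```python
def sensor(sensor_pos, light_pos):
    # Assumed falloff: signal strength decreases with squared distance.
    d2 = sum((s - l) ** 2 for s, l in zip(sensor_pos, light_pos))
    return 1.0 / (1.0 + d2)

def motor_speeds(left_signal, right_signal, crossed):
    """Vehicle 2a (straight wires): each sensor drives the motor on its
    own side, so the side nearer the light speeds up and the vehicle
    turns away ("fear"). Vehicle 2b (crossed wires): the vehicle turns
    toward the light ("aggression")."""
    if crossed:
        return right_signal, left_signal   # (left_motor, right_motor)
    return left_signal, right_signal

# Light off to the vehicle's left: the left sensor reads stronger.
left = sensor((-1, 0), (-3, 0))
right = sensor((1, 0), (-3, 0))
```

Swapping two wires is the entire difference between "fear" and "aggression", which is rather the point of the book.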

The book also includes imaginative artwork of the robots, done in a thought-provoking, abstract style. Unfortunately, rather than interspersing them throughout the book at the appropriate chapter, the editors have placed them all at the end of the book, where they are ineffectual. By the time you get to them, you've either forgotten the thrust of the robot described in the chapter or have mulled over the robot enough already. Having these pictures within each chapter would give the reader something to look at while pondering the meaning of these robots.

So what is this book really about? Well, everyone who reads it probably has his or her own opinion. Braitenberg himself calls it a fantasy with roots in science. I think it is partly about our own origins through evolution, and how something as complex as the human mind might have got started. It's also a bit of a roadmap as to how we might be able to construct our own complex, thinking machines. Braitenberg is laying out no less than the evolution of our brain. For people interested in these topics, he uses his vehicles to construct another metaphor with which to study Darwinian evolution.

Braitenberg includes a section at the end of the book titled "Biological Notes on the Vehicles." It describes the concepts of his robots and how they relate to actual observations of biological creatures. As a scientist, he has done a world of research into brains. I've read his previous book, On the Texture of Brains: An Introduction to Neuroanatomy for the Cybernetically Minded. Though not a popular book, it makes evident how meticulous he is in his research. He has dissected and examined fly neurons under the microscope for weeks at a time, and from this work, as his mind pondered what he was seeing, came the realizations described in Vehicles. It's quite a treat to read the results of his thoughts without having to do the tedious work yourself! It all adds up to Braitenberg's startling conclusion (which he states at the beginning): the complex behavior we see exhibited by thinking creatures is probably generated by relatively simple mechanisms.


You can purchase Vehicles from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.


Comments Filter:
  • "By Vehicle 14 these robots could hardly be said to differ from actual living creatures" As Kraftwerk puts it, "we are the robots".
  • Also for sale (Score:1, Informative)

    Here [amazon.com]
    • by Anonymous Coward
      Holy cow! You mean Amazon also sells books? I thought they were just another web site that I was meant to waste time at. :-)
  • by Crasoum ( 618885 ) on Thursday October 24, 2002 @10:11AM (#4522245) Journal
    We have 6 billion people in the world...

    Does anyone really want to compete with a robot for space and a date?

    • 01001110 01101001 01100011 01100101 00100000 01100011 01101000 01101001 01110000 01110011 00101100 00100000 01110111 01100001 01101110 01101110 01100001 00100000 01101001 01101110 01110100 01100101 01100111 01110010 01100001 01110100 01100101 00111111 00000000
    • (* We have 6 billion people in the world... Does anyone really want to compete with a robot for space and a date? *)

      If they do the boring grunt-work like take out the garbage and clean the toilets, that would be great. However, if they are physically inept but intellectually brilliant, that would be a real bummer.

      I think that getting the feel of human social skills will be the hardest because they are hard to identify and quantify, and the geeks who will make the machines are not good at it themselves.

      Thus, PHBs will be the last to be automated, unfortunately. Whether janitors or programmers will go first is hard to say. Physical navigation and perception are tougher tasks than AI researchers thought. Few would have guessed that a chess-mastering computer would appear before one that could recognize gummy spots on wet, reflective tile. There are also math theorem-proving systems. IOW, intellectual tasks actually seem *easier* to solve WRT AI than making burgers and sweeping the floor. We just dismissed them because they are boring or commonly-found skills in humans, not because they are simple to perform.

      Perhaps AI has been too "geek-centric" in its targets and speculation.
      • I think that getting the feel of human social skills will be the hardest because they are hard to identify and quantify, and the geeks who will make the machines are not good at it themselves.

        I do not necessarily agree. The lack of social intelligence in geeks is most of the time (at least partly) compensated for by their reasoning skills (IQ). So they actually think about the logical relationships between the social action-reaction responses that people with higher emotional intelligence take for granted.

        Never met anybody whom you could see thinking about how to respond to something?
        This would actually make it a bonus for geeks when implementing these social skills.

        OTOH, it would make for interesting 'geek' robots (and raise the question of how to distinguish between them ;-).
    • If a robot is more suitable as a date than you are, I think your chances of getting a date are pretty minimal anyway...

      Grab.
  • "By Vehicle 14 these robots could hardly be said to differ from actual living creatures"


    They have tons of these!!! Look at the checkouts!

  • Vehicle 14? (Score:5, Funny)

    by jamie ( 78724 ) <jamie@slashdot.org> on Thursday October 24, 2002 @10:15AM (#4522278) Journal
    "Vehicle 1 contains two pages of text and one page for a diagram... Vehicle 2 is two pages, plus two pages for two diagrams, and so on."

    So Vehicle 14 is two pages of text, plus 8192 pages for 8192 diagrams?

  • by Cade144 ( 553696 ) on Thursday October 24, 2002 @10:19AM (#4522300) Homepage
    Vehicle 6 describes chance and the role it plays in natural selection. He describes chance as "a source of intelligence that is much more powerful than any engineering mind." Never before have I directly thought of natural selection as being intelligent, but once Braitenberg said it, it sunk in that, Yes, natural selection is intelligent; much more intelligent than any human who ever lived. It is the most skilled engineer ever, making machines of unbelievable complexity and ability. And this "intelligence" has no form, no body. It has always been around since life began and it will always be around until the universe ceases to exist. It is a process; an invisible concept. And yet it is more intelligent than any human.

    I'm going to disagree and say that a process is a process, and intelligence is something different.
    Natural Selection is an elegant process and can (for lack of a better word) craft some exquisitely designed things. Trees, eagles, mosquitoes, and even humans are all engineering marvels created in the forge of Natural Selection. But there is no intelligence behind it.

    If Natural Selection were intelligent then the dinosaurs would not be extinct, nor would the myriad of complex and promising creatures of the Ediacaran fauna. Intelligent design would not waste such potential sources of design diversity.

    Even crystals are beautifully "designed". They are pretty to look at, serve useful functions, and can be highly prized as art or jewelry. But the crystallization process is merely a result of natural chemical forces in action. There is no intelligence behind that, or natural selection either.

    If the reviewer wants to suggest that Braitenberg is implying that "God is in the details," he can. But a process is a process, and chance is not intelligent design.

    • by oddjob ( 58114 ) on Thursday October 24, 2002 @10:36AM (#4522431)
      I'm going to put the burden back on you to support your assertion that intelligence is not a process. How do you distinguish between an action that is intelligent and one that is not? In the reviewer's case, he seems to consider natural selection intelligent because it provides solutions to problems. This places the focus on the result of the process, not what drives it. If you would place your requirement for intelligence on the driver of the process, what is it? Does it require a brain? A human brain? A soul?
      • It's very simple: LCD, or lowest common denominator. Intelligence is indicated when LCD is NOT the path chosen to solve a particular problem AND the problem is solved anyway. Natural selection always chooses the lowest common denominator, or shortest path, to solve a particular problem set. Intelligence does not. Intelligence definitely requires either a brain or some centre of processing analogous to a brain. A CPU or Turing machine will do if it can emulate a brain-like process.
        Consider a parrot which wants a cracker. Every time it wants a cracker it says "Polly wants a cracker," at which time a human feeds it a cracker as a reward. Natural selection gave the parrot wings to fly and eat berries from trees. That was a lowest-common-denominator solution, i.e. berry on tree -> parrot must eat -> give parrot wings. However, the parrot is also intelligent and understands that crackers don't grow on trees and there is NO way it can get a cracker without pleasing its human keeper. So in order to get a cracker it must please its human keeper, and to do that it must reproduce a set of sounds accurately.
        So intelligence is: cracker with human -> learn sound to please human -> reproduce sound when human in room with cracker -> parrot gets cracker.
        While natural selection is: cracker with human -> fly to human and attack human to get cracker -> human too large -> get some other food item.
        A crude analogy, but I think I made my point.
        • I don't think you've made your point, or at least not clearly. As I understand you, you would define an intelligent actor as one that solves a problem with a non-LCD solution that works. This doesn't strike me as a useful test. If presented with a problem, I can usually find more than one solution. If some are LCD and some are not, and I am free to choose any one of them, how can my choice tell you whether or not I am intelligent (as in: intelligent vs. non-intelligent, not intelligent vs. stupid :) )?
      • ...he seems to consider natural selection intelligent because it provides solutions to problems.

        It doesn't "provide" solutions. It eliminates the weak. For example, say a disease kills off 98% of a species, leaving the 2% who were immune. Now 100% of the species is immune. It's cause and effect. If none were immune, the species would be extinct. If the immunity were on purpose, then the whole species would have been immune before the outbreak.
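That cause-and-effect mechanism fits in a few lines of code. A toy simulation (the 2% immune figure comes from the comment; the population size and all names are illustrative):

```python
import random

def outbreak(population_size=10_000, immune_fraction=0.02, seed=1):
    """One selection event: a disease kills every non-immune individual."""
    random.seed(seed)
    # Each individual carries a heritable immunity trait with small probability.
    population = [random.random() < immune_fraction
                  for _ in range(population_size)]
    # The disease does the rest: only immune individuals survive.
    survivors = [ind for ind in population if ind]
    # No foresight anywhere, yet immunity jumps from ~2% to 100% of the
    # species, simply because the alternative is gone.
    return len(survivors), len(population)
```

The "solution" is nothing but differential survival; nothing in the code plans ahead.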

        • I was merely expressing my understanding of the author's definition of intelligence, not defending or attacking it. I was asking the poster of the parent comment to give us his definition of intelligence, since he was clearly using a different one, but hadn't bothered to mention what it was.
    • Braitenberg's work does this to people. It makes them say things like "that's not intelligence", "chance is not intelligence", process is not intelligence, etc.

      I am unimpressed when people say what intelligence is not. To be saying anything about intelligence, you must also say what it is. All you can offer is that intelligence is "something different." You have said nothing.

      Maybe intelligence is mostly our perception. Like opinions about art, you know it when you see it, but others may not agree with you.

      You say that if natural selection were intelligent, the dinosaurs would still be alive. Well, consider a different perspective. If the dinosaurs were still alive, mammals would still be rodents hunted down by reptilian carnivores, humans would never have evolved, and there would be no "intelligent" species in existence. From that standpoint, evolution brilliantly managed the development of an intelligent species on the planet by making the dinosaurs vulnerable.

      Most people think that for a behavior to be intelligent, it must come from a conscious entity. Maybe. Maybe not. Braitenberg shows that considerable intelligence in behavior is possible without a unitary consciousness.

      Which brings up the topic of consciousness, but that's a topic for another day.
      • You say that if natural selection were intelligent, the dinosaurs would still be alive. Well, consider a different perspective. If the dinosaurs were still alive, mammals would still be rodents hunted down by reptilian carnivores, humans would never have evolved

        Who needs humans? Mammals at that time were probably not any smarter than reptiles. If there were pressure to grow bigger brains, then perhaps dinos would have. There is evidence that hunting dinos had larger brains than plant-eaters, probably because of the complex strategies involved in hunting. Some speculate that some dinos evolved wolf-like "gang" hunting strategies to reduce the total risks. Complex social behavior seemed to be in the works.

        Some dinos even had warm blood according to some studies.

        A smart reptile is not out of the question. I think you are being mammal-centric here. Tits are not everything..........just a nice bonus.

        • I agree that natural selection might have created smart reptiles if the dinosaurs had survived. I was showing how a different story can make natural selection *seem* intelligent. Or not.

          The original post implies that the extinction of dinosaurs was a failure of the imperative to survive. Hence, natural selection appears to be UNintelligent. It was stupid enough to let the dinosaurs die, after all.

          But, if you believe that dinosaurs had to die out in order that mammals evolve intelligence, then natural selection seems intelligent. Extinction of the dinosaurs let mammals expand, leading to (trumpet blare) humanity. This perspective takes a longer-term view. Not that I believe this, I'm just illustrating another viewpoint.

          Cynics might think that humanity evolved because natural selection really IS stupid. I'm not sure they're wrong.

          Personally, I don't think the survival of any single species proves anything about the intelligence of natural selection.

          Besides, if the meteor theory is correct, natural selection didn't kill the dinosaurs -- a meteor did. Natural selection, even if it is an intelligent process, does not promise to protect a species against every possible catastrophe. Even intelligent people get injured or die through no fault of their own. Intelligence does not necessarily make you, or your species, safe from harm. S_t happens.

          Intelligence, as we perceive it, is partly determined by our personal biases. To some people, natural selection is an unintelligent process and intelligence is accidental. To others, natural selection shows evidence of a non-divine intelligence (laws of physics are beautiful and intelligent, but unconscious). To others, the results of natural selection prove that a conscious omniscient intelligence is guiding fate.

          Some people believe that intelligence imbues homeopathic water, religious icons, and tobacco executives. I don't, but that's my bias.

          When looking for evidence of intelligence, what you already believe is probably what you'll find.
          • Perhaps I should define what I mean by "intelligence". It is only a working definition and may not match others'.

            Higher intelligence involves some kind of planning or "modeling". IOW, something is planned out in advance using visual or symbolic representations before the physical implementation is tried.

            Natural selection fails this test because it does not "test" something before implementation. It tries it directly and lets nature correct it.

            Note that some of the worst programmers probably fail this definition because they just hack at the code until it works. I HATE that kind of code. Those programmers should be summarily fired. However, nobody fires them, because they actually get their organic shit to work and managers don't know the difference, even when they end up absorbing the maintenance penalty down the road.

            Good design/planning has actually gotten me laid off because my stuff was too easy to maintain and I ended up with too much spare time. Maybe I should try the organic approach, it seems like job security. Like somebody else once said, the system favors swamp-guides (and thus swamp-makers), not true engineers.

            I swear some of the idiot programmers out there are using genetic algorithms to derive their code. If not, then their result is indistinguishable from something from a genetic algorithm (like Koza's LISP generators).

            Their hacky approach is almost identical to a GA's:

            while code does not work as intended
                mutate some code
                cross-breed some code
            end while

            To be fair, there probably is a little bit of planning there, maybe 5% planning and 95% hack-and-sack. Of course there is a continuum among developers of which percent is planning and which percent is organic. Even I experiment sometimes for unfamiliar stuff. Organic has its place, but if overused then please put up a big warning sign, such as a red biohazard symbol (the Klingon-looking one), next to your code.
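That loop is, in fact, a perfectly serviceable genetic algorithm. For illustration, here is a toy version in Python that evolves a random string until it matches a target "spec"; all names and parameters are my own:

```python
import random
import string

TARGET = "it compiles and the tests pass"
CHARS = string.ascii_lowercase + " "

def fitness(code):
    # How many characters already match the spec.
    return sum(c == t for c, t in zip(code, TARGET))

def mutate(code, rate=0.05):
    # "mutate some code": flip each character with small probability.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in code)

def crossover(a, b):
    # "cross-breed some code": splice two candidates at a random cut.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=100, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    while parent != TARGET:                    # code does not work as intended
        brood = [mutate(parent) for _ in range(pop_size)]
        brood += [crossover(random.choice(brood), random.choice(brood))
                  for _ in range(pop_size // 2)]
        # Keep the least-broken version (including the current parent).
        parent = max(brood + [parent], key=fitness)
    return parent
```

Selection (the max step) does all the real work; the mutation and crossover are blind, which is exactly the resemblance being complained about.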
    • If Natural Selection were intelligent then the dinosaurs would not be extinct...

      Why not? This sounds like intelligence requires some kind of morals.

      If a process is just following rules and laws (of physics or your environment or whatever) then playing chess for example would be just a process too, by your definition. Similar to natural selection, no intelligence behind it.

      You don't explain much of the difference between process and intelligence, at least in terms of what intelligence would do or be in addition to process (only what it would not).
      So where is the extra in intelligence, on top of, or as opposed to, process?

      • >Why not? This sounds like intelligence requires some kind of morals.

        Perhaps he's suggesting that something intelligent enough to build, from the ground up, stuff like dinosaurs would also know why they would die out, whereas humans or other existing beings didn't, and would have designed the dinosaurs accordingly, so that they'd have survived.

      • One amazing sign of intelligence is the ability of the brain to develop automaticity. A reflex occurs when a stimulus is strong and instead of the spinal cord sending the info to the brain for processing, it sends the correct (usually) response back. These reflexes are built into the circuitry. The human brain is capable of forming this kind of automation. Yesterday, when you saw your wife, or good friend or whomever, did you spend time processing all the individual features, analyzing their orientation to each other, and then running them by a list of all the names and faces you've ever met in the world? Or did the face instantly pop up a name? Now granted, there must be some degree of processing, but then again the spinal cord must decide if the pan is hot enough to jerk the hand away or not (not the greatest analogy). One of the greatest examples of automatic processing is the Stroop Effect. [apa.org] You are not wasting any thought on semantic meaning, but you can't stop the brain from putting meaning into primary memory (= short term memory, for laymen).

        In regards to chess, masters of the game don't even bring rules such as where and how pieces can move into primary memory. Attention is limited (thus the controversy over driving a car while talking on a cellphone), so the more rules/strategies/tricks that become automatic, the more moves a chess player can think into the future. IANAE (correct acronym?), but talk to any cognitive psychologist first if you disagree.

        So you ask, how does automaticity show intelligence vs. process? It skips the processing part and leads to, for the most part, instant input/output (I'd love to see someone hook a monitor up to where a GPU should be and expect a sensical image). If there were intelligence behind natural selection, then one day, when something happens, like a drought, people or whatever is alive at that time would all become perfectly accustomed to said event, because anyone/thing not perfectly accustomed would instantly die before reproducing.
    • by Anonymous Coward
      Well really, what humans possess is not "intelligence", because our brains function only on complex chemical reactions which dictate everything we think or do. So either we aren't intelligent or your argument is flawed.
    • Comment removed based on user account deletion
    • If Natural Selection were intelligent then the dinosaurs would not be extinct, nor would the myriad of complex and promising creatures of the Ediacaran fauna. Intelligent design would not waste such potential sources of design diversity.

      Regarding the Ediacaran fauna: they were just worms and blobs according to my search. I don't see what you find so special about such fossils.

      As far as what an "intelligent designer" would do, well that is a highly speculative thing. Who knows what their alleged goals would be.

      If there is a creator, it does appear that at least they *wanted* the history of life to look as if gradual evolution took place, such that most creature designs appear to be tree-based variations of prior organisms. The genetic patterns tend to fit the physical patterns of this tree of life.

      The apparent evolution of sea mammals is a good case of this. Dolphins and sharks are quite different physiologically, yet share a roughly similar habitat, niche, and food.

      There is little or no evidence that creatures were custom-made FROM SCRATCH for a particular niche, but rather borrow largely from existing (at the time) plans.

      This would imply either an imperfect absentminded creator(s), a creator that uses evolution for at least a good part of the process, or a creator that wanted to deceive us. (Some religions suggest that the devil played around with life to fool scientists, I would note. This would fit into the last category.)
    • by Anonymous Coward
      I believe most of those who have replied to this post are missing the point.

      Braitenberg describes natural selection as "a source of intelligence". To the reviewer, that meant that Natural Selection was itself intelligent. I don't believe that was what the author was saying.
      He is asserting that the force of evolution creates better designs (designs which select for intelligence) than any engineer does. The elements of chance are greater than directed purpose, given sufficient time and appropriate selection pressures/criteria.

      As to how we define Intelligence.. well the word is defined as
      1. a. The capacity to acquire and apply knowledge.
      b. The faculty of thought and reason.

      If you want to redefine Intelligence as "the process whereby complexity evolves from simplicity, whether I can understand that process or not."... then go right ahead.
      Just be prepared for difficulties in communicating with other "intelligent" beings who already know what the word means.

    • Even intelligent beings make mistakes. So the extinction of the dinosaurs is not a reason to say that natural selection (or God) lacks intelligence.

      But anyway, if n.s. were intelligent, what would its goals be?
    • Intelligence is the process of a process becoming aware of itself. The more aware I become of the process of my mind, the more intelligent I become, as this knowledge allows me to interact with the world based on the strengths of my mind's processes. This thought itself is the result of a process; in fact, it is an ongoing process. And if I stick with it long enough, I realize that I am aware of this thought, that I am aware of this process going on in my head. And this process that I am aware of is the very process creating this thought. Thus, this thought that I am thinking is essentially a process aware of itself. Capiche?
    • > If Natural Selection were intelligent then the
      > dinosaurs would not be extinct, nor would the
      > myriad of complex and promising creatures of the
      > Ediacaran fauna. Intelligent design would not waste
      > such potential sources of design diversity.

      You assume too much about the nature of an intelligent system.

      Anyway, there are many intelligent systems covering the planet that are wasting our design diversity. We call them "humans."

    • If Natural Selection were intelligent then the dinosaurs would not be extinct, nor would the myriad of complex and promising creatures of the Ediacaran fauna. Intelligent design would not waste such potential sources of design diversity.

      You could just as easily say that if the designers of programming languages were intelligent, COBOL would not be going extinct. (COBOL is probably not the best choice, since it is still around, but I imagine the idea is clear.)

    • If Natural Selection were intelligent then the dinosaurs would not be extinct, nor would the myriad of complex and promising creatures of the Ediacaran Fauna. Intelligent design would not waste such potential sources of design diversity.

      Eh? Far be it from me to claim the intellectual powers of natural selection, but I know that when one of my projects fails due to circumstances beyond my control (hard disk crash, project manager decides to use XML, meteorite strike), I can almost always think of a new and better way of approaching the original problem. So, if I were redesigning after a small Cretaceous mishap, I'd want to come up with something better than a dinosaur too.
  • by FranticMad ( 618762 ) on Thursday October 24, 2002 @10:21AM (#4522321)
    I heard about this book at the first conference on Artificial Life at Los Alamos in 1988. Maybe I'm out of the loop, but I don't hear Braitenberg's work discussed as much as it deserves to be. The core concept, that great complexity can arise from the interaction of simple systems, is also demonstrated by Cellular Automata, but "Vehicles" has a beauty and simplicity that makes it a classic.

    I think breakthroughs in AI will probably come from people who are familiar with physiology (especially biophysics), or some new branch of mathematics. So many theoreticians from cognitive science, computer science, psychology, and psychiatry ignore physiology. I can't blame them, I suppose -- the field is unbelievably complex.

    In any case, "Vehicles" should be required reading for anyone aspiring to have a degree in systems, human or otherwise (and that includes /.ers).

  • by JonTurner ( 178845 ) on Thursday October 24, 2002 @10:22AM (#4522326) Journal
    "Novelty?" That's an unfair characterization of Brooks' work!

    Braitenberg has written a 152-page book describing robots that he has thought about creating, using minimalist language and half-explanations that lack necessary detail. As you stated, the first several chapters are only a few pages long and can be understood by a 12-year-old, and the systems "described" in the later chapters may not be possible to implement.

    Brooks, by comparison, has created REAL robots which do REAL work and he (and his graduate students) publish detailed papers which explain their methodology, technique and results in detail.

    Compared to Braitenberg's book of "what if..." ideas, I suppose Brooks' approach is novel.

    • dude - i think by "relevance trumps novelty" he meant a book's relevance is more important than its newness. see def's 1 & 2 [dictionary.com]
    • I somewhat agree that it takes more work to come up with something solid rather than something theoretical, unless you are Braitenberg. He did a load of background research in the lab to come up with his ideas, and in fact also went as far as implementing them in robotics (see the book below):

      http://www.amazon.com/exec/obidos/tg/detail/-/026257067X/qid=1035829768/

      I think Brooks is a bit showier, and seeks publicity more effectively than someone like Braitenberg. I've used Brooks' subsumption architecture many times, and find it a really bad model from a programming standpoint (I programmed the Behavior API module at www.lejos.org). All behaviors interact with the motors directly rather than through a higher level of abstraction, which makes it quite limited. Behaviors should be able to build on and help each other. For example, if I have one behavior to shoot at objects and another to run away, it is difficult with his model to have the robot run away AND shoot at the same time without programming the routine twice, because when a behavior takes over, all other motor commands are presumably shut off.

      Anyway, in my mind Braitenberg's theories carry more weight and will likely stand the test of time, whereas subsumption has been used and discarded.
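      [Editor's note: the arbitration problem described above can be sketched in a few lines. The classes below are hypothetical, not the actual leJOS Behavior API: in a winner-take-all subsumption arbiter, exactly one behavior owns the motors at a time, so "flee" and "shoot" cannot run together without writing a third, combined behavior.]

```python
# Sketch of winner-take-all subsumption arbitration (hypothetical classes,
# not the leJOS Behavior API). The highest-priority behavior that wants
# control drives the motors; every other behavior is suppressed entirely.

class Behavior:
    def wants_control(self, sensors):   # True if this behavior should fire
        raise NotImplementedError
    def act(self, robot):               # drives the motors directly
        raise NotImplementedError

class Flee(Behavior):
    def wants_control(self, sensors):
        return sensors.get("threat", 0) > 0.5
    def act(self, robot):
        robot.commands.append("drive_away")

class Shoot(Behavior):
    def wants_control(self, sensors):
        return sensors.get("target_visible", False)
    def act(self, robot):
        robot.commands.append("fire")

class Robot:
    def __init__(self):
        self.commands = []

def arbitrate(behaviors, sensors, robot):
    # Behaviors are listed in priority order; the first winner suppresses
    # all the rest, which is exactly the limitation the comment describes.
    for b in behaviors:
        if b.wants_control(sensors):
            b.act(robot)
            return b

robot = Robot()
winner = arbitrate([Flee(), Shoot()], {"threat": 0.9, "target_visible": True}, robot)
print(type(winner).__name__, robot.commands)
```

      Even though both behaviors want control here, only Flee runs; a blended scheme in which each behavior emits motor suggestions that are merged per actuator would avoid duplicating a combined flee-and-shoot routine.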

  • Thousands of geeks were seen scratching their heads, seemingly in deep thought, almost perplexed. What is this supposed to mean anyway??
    • it can only mean one thing:

      in 2 years, the Sirius Cybernetics corporation will release their first elevators with GPP (Genuine people personality).

      Up please! //rdj
  • Two questions (Score:3, Interesting)

    by dgoodman ( 51656 ) on Thursday October 24, 2002 @10:33AM (#4522405) Homepage
    One: there's no detail in this review. It sounds like the author is suggesting behaviourism (cf. Skinner) as a theory of cognition, an idea that was discarded before I was born. Someone give me some details and prove me wrong.

    Two: What is being said here that Simon hadn't already said in his essays on the complexity of behaviour well before this book was published? In other words, even in 1986, is this really new?

    Although it is interesting that he describes a neural network here: it is clear to me that the reason the description is so shrouded is that prior to 1989, ANNs were taboo in the literature (Minsky having ripped perceptrons to pieces back in the '60s).
    • Re:Two questions (Score:2, Interesting)

      by leipold ( 103074 )
      > It sounds like the author is suggesting behaviourism (cf. Skinner) as a theory of cognition, an idea that was discarded before I was born.

      Nope, the author is suggesting that complex behaviors can arise from complex interconnections of simple, identical components. He argues from his vast knowledge of neuroanatomy, and doesn't have any theoretical axe to grind.

      Minsky may have ripped perceptrons to pieces, but that has no bearing on the elegance or correctness of Braitenberg's exposition. Perceptrons were toys.
      • Minsky may have ripped perceptrons to pieces, but that has no bearing on the elegance or correctness of Braitenberg's exposition. Perceptrons were toys.

        Um, his point was that from the 1960s, when Minsky trashed perceptrons, until 1989, when AI research "rediscovered" the neural net, all neural nets were looked down upon as perceptrons or their equivalents, hence toys. Therefore, although this book includes descriptions of neural nets, it was 1985 and he wanted to cloak the description so as not to offend the AI priesthood, book reviewers, etc.

        I'm not sure, personally, that I buy this as an explanation: after all, the book's whole approach was so unorthodox, I'm not sure that mollifying the academics was even on the radar screen.

    • One: there's no detail in this review. It sounds like the author is suggesting behaviourism (cf. Skinner) as a theory of cognition, an idea that was discarded before I was born. Someone give me some details and prove me wrong.

      Braitenberg's vehicles depart from behaviorism quite nicely; in fact, it's a topic addressed directly. Chapter 10, "Getting Ideas," discusses how, with the architecture built up in the preceding chapters, the vehicles break free from simple stimulus-response behavior and develop ideas that can remain active without direct support from the environment.
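      [Editor's note: a toy illustration of that point, assuming nothing from the book beyond the general idea: a single threshold unit with a self-exciting feedback loop stays active after the stimulus is removed, so its activity persists without direct support from the environment, unlike a pure stimulus-response link.]

```python
# A threshold unit whose own output is fed back as an extra input.
# With feedback_gain=0 it is purely reactive (activity tracks the
# stimulus); with feedback_gain=1 it latches, staying active after
# the stimulus disappears. All numbers are illustrative.

def run_unit(inputs, feedback_gain=1.0, threshold=0.5):
    """Step a threshold unit over a stimulus sequence; return its trace."""
    activity = 0.0
    trace = []
    for stimulus in inputs:
        drive = stimulus + feedback_gain * activity
        activity = 1.0 if drive >= threshold else 0.0
        trace.append(activity)
    return trace

# Stimulus present for three ticks, then gone for five.
stimulus = [1, 1, 1, 0, 0, 0, 0, 0]

reflex = run_unit(stimulus, feedback_gain=0.0)   # pure stimulus-response
latched = run_unit(stimulus, feedback_gain=1.0)  # self-sustaining loop

print("reflex: ", reflex)   # activity dies the moment the stimulus does
print("latched:", latched)  # activity persists after the stimulus ends
```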

    • One of the main ideas behind behavior-based robotics is that though intelligent thinking produces intelligent behavior, the thinking part is not necessary to create such behaviors. A good example of this is what is referred to as emergent behavior: multiple simple behavior modules can produce complex actions when combined to work as a single unit. A behavior-based system, though, does not fall under the traditional philosophical behaviorist worldview. Modern behavior-based systems define the use of an internal representation of the external world as a defining aspect of the system -- something which does not exist in purely reactive robotic systems (which Braitenberg's vehicles were).
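      [Editor's note: to see "purely reactive" concretely, here is a minimal simulation sketch of Braitenberg's Vehicle 2; the geometry and constants are illustrative, not from the book. Each light sensor drives one wheel and there is no internal state at all, yet uncrossed wiring yields light-avoiding ("fear") behavior and crossed wiring yields light-seeking ("aggression") behavior.]

```python
# Minimal Vehicle 2 sketch: two light sensors, two wheels, no memory or
# world model. Uncrossed wiring (2a) turns the vehicle away from a light
# source; crossed wiring (2b) turns it toward the source.

import math

def light_at(x, y, source=(0.0, 0.0)):
    """Light intensity falls off with squared distance from the source."""
    d2 = (x - source[0]) ** 2 + (y - source[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, crossed, dt=0.1):
    # Two sensors mounted left and right of the heading direction.
    off = 0.2
    lx = x + off * math.cos(heading + 0.5)
    ly = y + off * math.sin(heading + 0.5)
    rx = x + off * math.cos(heading - 0.5)
    ry = y + off * math.sin(heading - 0.5)
    left_s, right_s = light_at(lx, ly), light_at(rx, ry)
    if crossed:            # 2b: left sensor -> right wheel, and vice versa
        left_w, right_w = right_s, left_s
    else:                  # 2a: straight-through wiring
        left_w, right_w = left_s, right_s
    speed = (left_w + right_w) / 2
    turn = right_w - left_w            # wheel-speed difference steers
    heading += turn * dt * 5
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

def run(crossed, steps=300):
    x, y, heading = 1.0, 0.5, math.pi  # start near the light, roughly facing it
    for _ in range(steps):
        x, y, heading = step(x, y, heading, crossed)
    return math.hypot(x, y)            # final distance from the source

print("uncrossed (fear):     ", round(run(False), 2))
print("crossed (aggression): ", round(run(True), 2))
```

      The "fear" vehicle ends up far from the light and the "aggression" vehicle stays near it, purely from the wiring -- the observer reads emotion into behavior that contains no representation of the world at all.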
  • by angio ( 33504 ) on Thursday October 24, 2002 @10:33AM (#4522407) Homepage
    > however Braitenberg's ideas came first so he
    > probably deserves more recognition for this
    > train of thought than the much more publicized Brooks.

    Brooks [mit.edu] teaches the Embodied Intelligence [mit.edu] course at MIT (which I took two years ago). One of the first things the course covers is Braitenberg's creatures (see the syllabus [mit.edu]). So while Brooks may certainly get more air-time than Braitenberg, he certainly gives credit where credit is due... but then, remember that Braitenberg focused on astoundingly simple circuits that lead to interesting-appearing behavior, whereas Brooks has used his approach to build working autonomous robots...

  • by Dutchie ( 450420 ) on Thursday October 24, 2002 @10:41AM (#4522478) Homepage Journal
    Actually, the comment about self-reproducing robots in this review is incorrect. Please check out:

    Robot learns to reproduce
    http://news.bbc.co.uk/1/hi/sci/tech/903238.stm
  • Content Free Book (Score:4, Interesting)

    by exp(pi*sqrt(163)) ( 613870 ) on Thursday October 24, 2002 @10:44AM (#4522499) Journal
    I read this book years ago, but because I liked the pictures so much I still have it. However, the text is completely devoid of content.

    Imagine a philosopher with no practical experience of anything vaguely robotic writing a book on robotics. This is what he would write. Braitenberg talks about vague concepts like memory, foresight, logic, and trains of thought. But these discussions are completely sophomoric, jumping from systems with one or two neurons to imaginary systems with the above properties. I don't need a book to point out that an intelligent machine needs foresight, and I don't need a book to point out that a simple neuronal system with persistence might have something to do with memory. Unless you're going to say something about the details between neurons and full-blown brains, you're just armchair philosophizing, and any sophomore can do that without the help of a book. Maybe if he had written the book in the forties it'd be interesting. But by the eighties every science fiction writer and his dog had written about these subjects in far more detail.

    But I do love the pictures by Ladina Ribi and Claudia Martin-Schubert. They are quite special.

    • But by the eighties every science fiction writer and his dog had written about these subjects with far more detail.

      In 1987 they had a Scientific American article on this... was that the old "Computer Recreations" column? And what I took away from it was how many surprisingly nuanced behaviors can be caused by such amazingly simple wiring. That's not something I see "every sci fi writer and his dog" covering in any kind of detail, bub. You're right in that the question then becomes "but how does it scale," and I don't know how well it covers that, but I think you're being a little too dismissive of the work as a whole.
  • The idea that the human brain is not logical isn't new. Psychologists and AI theorists have long known that there are several modes used in human reasoning, and analogical reasoning is used 99% of the time. We don't stop and deduce the correct answer; we leap to a similar situation in memory and adjust the outcome of that situation to this one. Incidentally, this is why we remember events and associate pleasure/pain responses with them. This makes the choice of whether or not to bite your fingers off rather easy... for most of us.
  • Can I order a few tender loving fembots?
  • "Vehicles, by Valentino Braitenberg, presents a different way of thinking about thinking, one tied more closely to sensing and acting as opposed to long, detailed calculations."

    Ralph Wiggum: My cat is a Robot.
  • by Animats ( 122034 ) on Thursday October 24, 2002 @11:45AM (#4523100) Homepage
    That book got way too much publicity for what it's worth. It's one of those pseudo-science works that gains popularity because it claims to explain something complicated. But it's pseudo-science because when you build the things, they don't work very well.

    The first "behavior-based robots" along those lines go back to 1948, with the work of W. Grey Walter. [uwe.ac.uk] Those little wheeled robots did much of what Braitenberg talks about with his earlier models. And, since Walter actually built them, he discovered behaviors that weren't obvious just from thinking about it. If you're into this at all, read everything you can find about Walter's "Turtles." They were shown, working, in a museum for a year in the 1940s, and modern replicas have been built. Walter was decades ahead of his time.

    There was considerable thinking along those lines in the 1950s, most of which didn't go anywhere. I have some old AI books that contain similar speculations, although they're far less readable than "Vehicles".

    The basic problem with model-less behavior-based robotics, as Brooks and his followers have discovered, is that the ceiling is low. You can get some simple insect-like behaviors without much trouble, but then progress stalls. That's why Brooks' best work was back in the 1980s. The robot insects were great; the humanoid torso Cog is an embarrassment. This is typical of AI; somebody has a good idea and then thinks that strong AI is right around the corner.

    If you have a Lego Mindstorms set, you can build many of the "Vehicles". They're kind of cute, but don't do much.

    • Although it's not commonly known, there were "behavior-based" robots even earlier than W. Grey Walter's work. Jacques Loeb, an outspoken proponent of the mechanistic view of animal behavior, cites a J.H. Hammond as having built an "artificial heliotropic machine" in 1918.

      In other words, Braitenberg was gazumped by some electrical hobbyist over 60 years before he wrote about Vehicle 2.

      And of course you can always point to Descartes as the originator of the view of animals as simple machines. Or probably someone even earlier than that.

      Loeb, J. (1918). Forced Movements, Tropisms, and Animal Conduct. -- widely available in several reprints.

  • by Anonymous Coward
    There have been several software simulations of Braitenberg Vehicles over the years. Check out the good videos at
    http://people.cs.uchicago.edu/~wiseman/vehicles/

    or try out some vehicles yourself:
    http://www.lcdf.org/~eddietwo/xbraitenberg/
    http://www.ifi.unizh.ch/groups/ailab/people/lambri/mitbook/braitenberg/braitenberg.html

    And here are links to notes and a review I like:
    http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Margin/Vehicles/index.html
    http://www.santafe.edu/~shalizi/reviews/vehicles/
  • One key insight (Score:3, Insightful)

    by heikkile ( 111814 ) on Thursday October 24, 2002 @12:25PM (#4523475)
    The one thing I remember best from Braitenberg (I read it when it was fairly new, in the '80s) is that analyzing complex behaviour is hard, but building it is much easier. Or, in other words, things seem much simpler once you know how they work.

    This is a book I enjoyed greatly, and that gave me some sort of insight to many problems, most notably debugging software...

  • Mindstorms? Java? He's describing exactly what BEAM robotics sets out to do: build processorless robots. Why model with a processor? Check out Solarbotics [solarbotics.net] for working models of these things, with no computers, usually costing only a few dollars.
  • I have a friend who makes solar-powered, Braitenberg-inspired artwork from old computer parts. They're far more interesting in person, but you can check out his website [danroe.net].
  • Phenomenology (Score:3, Interesting)

    by tomdarch ( 225937 ) on Thursday October 24, 2002 @02:03PM (#4524249)
    If you're interested in the philosophical underpinnings of this type of thought, you should check out the field of phenomenology. There's a bunch of spacey '60s crap out there, so start with the source: Martin Heidegger. (Just to complicate things, he was basically a Nazi (literally!)) His work prior to WWII, e.g. Being and Time, is seriously academic. His work after WWII is more poetic (he was banned from teaching because of his connections with the Nazis). Either way, he has his own take on language and it's a tough nut to crack.

    My take on all this stuff is that it's a contrast to Kant. For Kant, the world around us is a bunch of unknowable abstract objects, which we 'know' through our flawed senses. ("Ah, the abstract 'pen' probably exists, but I can only know what my imperfect senses tell me about it.") This is more like the robotic systems that create an abstract construct of the environment and then internally work with that abstracted construct.

    As I read Heidegger, he's saying that, yeah, Kant has a point, but it's not very useful in day to day life. When you walk through a door, you don't think about the doorknob, you just turn it, open the door and walk through. It's all what he calls "taken for granted." You don't stand there thinking "Hmm, maybe my perception of the doorknob is flawed, and there is no knob. I can never be sure" (well, some of us have thought thoughts like that, but only after consuming certain molecules).

    Essentially, Heidegger's take is much more practical: how do we do the useful everyday stuff? This is a lot more like robotic systems that are based on more reflexive responses.

    Yes, Heidegger deals with lots and lots of other stuff ("Language is the house of being," "Death fractures the taken for granted," and the scary stuff about how when you are speaking old German you are more truly in touch with existence!) But the underpinnings of phenomenology are potentially really useful for understanding the "nuts and bolts" of interacting with the world. Oh, and he's the "Velvet Underground" of twentieth-century thought (Sartre, Derrida, etc., cite him as a critical influence).
  • Vehicles, by Valentino Braitenberg, presents a different way of thinking about thinking, one tied more closely to sensing and acting as opposed to long, detailed calculations.

    In a related story, Valentino Braitenberg has been elected to the Berkeley City Council.

  • I've always loved the book, but trying to use the ideas from it to implement soccer-playing softbots (aka RoboCup) taught me that it's really hard to get the behavior you want. Sure, what you get is damn interesting, but is it what you intended? Often not. For those of you interested in the limitations of vehicle-style robots (in this case, simulated), see http://wonka.hampshire.edu/~alan/research/soccerbot.html

    (I still love the book though; it's really worth reading, just for the points it makes about what fear, love, and hate are).
  • by po8 ( 187055 )

    For kit hardware that implements the ideas of Braitenberg et al., check out the BYO-bot [kipr.org]. I've had one for several years, and they're a great classroom demo.

  • The game Mindrover (http://mindrover.com) could be used to implement these ideas. The object of the game is to construct software robots that accomplish a task. You have an assortment of sensors, wires, servos, and logic units to do this. There are mechanisms for stateful behavior. It's also available for Linux.
