Vehicles: Experiments in Synthetic Psychology
author | Valentino Braitenberg |
pages | 168 |
publisher | MIT Press |
rating | 8/10 |
reviewer | Brian Bagnall |
ISBN | 0262521121 |
summary | Profound, easy to read theories about intelligence in robots. |
Valentino Braitenberg has written one of the cleanest books on robot behavior ever published. It is apparent he wrote exactly what he wanted; no more, no less. The total size of this book is 152 pages, but that seems to be exactly the proper size for the topic he has chosen. Other authors (or editors) would probably say that's not enough pages. It has to be 250, minimum! 400 is better! Not Braitenberg. Vehicles has the raw ideas of a 400-page book. In fact, if you take the proper amount of time to ponder each idea it might even take as long as a 400-page book to get through.
This book contains descriptions of various robots, which Braitenberg calls vehicles since they all use wheels for mobility. They start off simple, then gradually become more complex with each chapter, each new robot being an evolutionary step up from the previous one. In fact, rather than starting with "Chapter 1," Braitenberg starts with "Vehicle 1," and so on. By Vehicle 14 these robots could hardly be said to differ from actual living creatures in the way they behave (though Vehicle 6 describes self-reproducing robots, which is currently beyond our ability to duplicate).
Each new vehicle focuses on an animal behavior: movement, aggression, fear, love, and how these can be created in a mechanical vehicle. Braitenberg has a rare mind that can think up original, non-intuitive ideas backed by logic, and he has the ability to present them well. There are a few penalties to Braitenberg's minimalist approach, however. Plain, minimal language can be a bit boring at times, stripping the book of character. Sometimes I like big words and clever turns of phrase that make my mind work, such as the writings of Douglas R. Hofstadter.
How minimal is it? Vehicle 1 contains two pages of text and one page for a diagram. I can just imagine the editor receiving chapter 1 from Braitenberg and saying, "Where's the rest of it?" But it is the perfect length for the simple robot it describes. Vehicle 2 is two pages, plus two pages for two diagrams, and so on. Honestly, for the first four chapters a 12-year-old could read this book and get the same from it as a university professor. His minimalism is admirable; however, at times it can feel maddeningly incomplete.
Vehicle 5 (logic) begins by explaining a system of inhibitors that can build a thinking machine. What he is really explaining is the basis for a neural net, but he attempts to do it in five pages. Are five pages enough to explain a neural net? Unfortunately, no. The seemingly simple approach means he leaves out vital parts of the explanation, which prevents complete understanding. More description in this chapter would be incredibly helpful. He doesn't talk enough about how the "pulses" given to the neural network gates add up. Is there a cumulative effect going on? After a one-paragraph explanation he shows two examples and describes what they do, but unfortunately he doesn't explain them enough for me to understand the mechanism. Thankfully, instances like this are rare; Vehicle 5 was the only description I found lacking.
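For what it's worth, here is my best guess at the mechanism (a standard threshold-unit reading in the McCulloch-Pitts style, not something Braitenberg spells out): the pulses do add up, with excitatory inputs counting positive and inhibitory inputs counting negative, and a gate fires only when the net sum reaches its threshold. A toy sketch, with my own numbers and naming:

```python
def threshold_gate(excitatory, inhibitory, threshold):
    """A McCulloch-Pitts-style threshold unit: pulses on the
    excitatory inputs add up, pulses on the inhibitory inputs
    subtract, and the gate fires (returns 1) only when the net
    sum reaches the threshold."""
    net = sum(excitatory) - sum(inhibitory)
    return 1 if net >= threshold else 0

# With threshold 2, two excitatory pulses are needed to fire: an AND gate...
assert threshold_gate([1, 1], [], 2) == 1
assert threshold_gate([1, 0], [], 2) == 0
# ...while threshold 1 gives an OR gate,
assert threshold_gate([0, 1], [], 1) == 1
# and a single inhibitory pulse can veto the output.
assert threshold_gate([1, 1], [1], 2) == 0
```

Read this way, the same gate acts as AND, OR, or a veto depending only on its threshold and wiring, which is presumably how five pages were supposed to carry the whole idea.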
Vehicle 6 describes chance and the role it plays in natural selection. He describes chance as "a source of intelligence that is much more powerful than any engineering mind." Never before have I directly thought of natural selection as being intelligent, but once Braitenberg said it, it sank in that, yes, natural selection is intelligent; much more intelligent than any human who ever lived. It is the most skilled engineer ever, making machines of unbelievable complexity and ability. And this "intelligence" has no form, no body. It has been around since life began and it will be around until the universe ceases to exist. It is a process; an invisible concept. And yet it is more intelligent than any human.
Artificial Intelligence authors often state the importance of language and symbols, but one can't help but notice that animals seem to do fine without language. And aren't animals intelligent too? He shows that we tend to assume that because an animal reacts a certain way toward an object, it must store a symbolic representation of that object. That seems reasonable, but Braitenberg demonstrates you can get what appears to be symbolic thought when in fact internally there is no symbol stored -- just electronic paths. It causes one to rethink some well-entrenched ideas about AI. What about meditation? I know when I'm in a meditative state (not thinking/using language) I can perform some actions like sweeping, making food, walking, etc. So just how important are symbols? Is there a limit to the thoughts that can occur without symbols? I don't think this demolishes the importance of symbols -- likely they are needed to create new ideas -- but they might have their place, one less central than we generally suppose.
At the heart of each vehicle are the pathways that the wires make as they connect sensors to motors. The robots in the first two chapters consist of a few sensors, a few motors, and a few wires connecting them. There are no CPUs in any of the robots, except for when the wire connections become so complex, embodying logic, that they effectively become CPUs themselves. The later chapters get into concepts that would not be as easy to replicate in actual robots, and rely a little more on speculation than hard fact. He addresses such difficult topics as getting ideas and having trains of thought. Most of the robots, up to perhaps Vehicle 9 (excluding the evolutionary vehicle), could likely be built in reality. With the recent advent of Lego Mindstorms, the perfect canvas exists to create these types of simple robots, and a programming environment like leJOS Java would make it possible to simulate the wiring described in the book. Maybe someone will eventually recreate the Vehicles in the book using these tools.
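To make the wiring concrete, here is a minimal simulation sketch (my own Python, not from the book) of the Vehicle 2 idea: each motor's speed is driven directly by one light sensor, and merely swapping straight wiring for crossed wiring changes the behavior from shying away from the light to charging at it.

```python
import math

def vehicle_step(x, y, heading, light, crossed=True, speed=0.1):
    """One step of a two-sensor, two-motor Braitenberg vehicle.

    Each motor's speed is proportional to one light sensor's reading.
    'Crossed' wiring (Vehicle 2b) drives each motor from the
    opposite-side sensor, which turns the vehicle toward the light;
    straight wiring (Vehicle 2a) turns it away.
    """
    sensor_angle = 0.5  # radians off the centerline, my arbitrary choice

    def intensity(sx, sy):
        d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
        return 1.0 / (1.0 + d2)  # brighter when closer

    # Two sensors mounted left and right of the heading.
    left_s = intensity(x + math.cos(heading + sensor_angle),
                       y + math.sin(heading + sensor_angle))
    right_s = intensity(x + math.cos(heading - sensor_angle),
                        y + math.sin(heading - sensor_angle))

    # The "wiring": which sensor feeds which motor.
    left_motor, right_motor = (right_s, left_s) if crossed else (left_s, right_s)

    # Differential drive: a speed difference turns the vehicle.
    heading += right_motor - left_motor
    x += speed * (left_motor + right_motor) * math.cos(heading)
    y += speed * (left_motor + right_motor) * math.sin(heading)
    return x, y, heading
```

Run a few hundred steps with crossed=True and the vehicle noses in toward the light; with crossed=False it veers away. No CPU, no symbols, just two wires.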
The book also includes imaginative artwork of the robots, done in a thought-provoking, abstract style. Unfortunately, rather than interspersing them throughout the book at the appropriate chapter, the editors have placed them all at the end of the book, where they are ineffectual. By the time you get to them, you've either forgotten the thrust of the robot described in the chapter or have mulled over the robot enough already. Having these pictures within each chapter would give the reader something to look at while pondering the meaning of these robots.
So what is this book really about? Well, everyone who reads it probably has his or her own opinion. Braitenberg himself calls it a fantasy with roots in science. I think it is partly about our own origins through evolution, and how something as complex as the human mind might have got started. It's also a bit of a roadmap as to how we might be able to construct our own complex, thinking machines. Braitenberg is laying out no less than the evolution of our brain. For people interested in these topics, he uses his vehicles to construct another metaphor with which to study Darwinian evolution.
Braitenberg includes a section at the end of the book titled "Biological Notes on the Vehicles." These describe the concepts of his robots and how they relate to actual observations in biological creatures. As a scientist, he has done a world of research into brains. I've read his previous book, On the Texture of Brains: An Introduction to Neuroanatomy for the Cybernetically Minded. Though it is not a popular book, it makes evident how meticulous he is in his research. He has dissected and examined fly neurons under a microscope for weeks at a time, and from this work, as his mind pondered what he was seeing, came the realizations described in Vehicles. It's quite a treat to read the results of his thoughts without having to do the tedious work yourself! It all adds up to Braitenberg's startling conclusion (which he states at the beginning): The complex behavior we see exhibited by thinking creatures is probably generated by relatively simple mechanisms.
You can purchase Vehicles from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Vehicle 14 (Score:2, Funny)
Re:Vehicle 14 (Score:1)
Also for sale (Score:1, Informative)
Re:Also for sale (Score:1, Funny)
Reproducing robots.... (Score:5, Funny)
Does anyone really want to compete with a robot for space and a date?
Re:Reproducing robots.... (Score:3, Insightful)
Re:Reproducing robots.... (Score:2)
If they do the boring grunt-work like take out the garbage and clean the toilets, that would be great. However, if they are physically inept but intellectually brilliant, that would be a real bummer.
I think that getting the feel of human social skills will be the hardest because they are hard to identify and quantify, and the geeks who will make the machines are not good at it themselves.
Thus, PHBs will unfortunately be the last to be automated. Whether janitors or programmers will go first is hard to say. Physical navigation and perception are tougher tasks than AI researchers thought. Few would have guessed that a chess-mastering computer would appear before one that could recognize gummy spots on wet, reflective tile. There are also math theorem-proving systems. IOW, intellectual tasks actually seem *easier* to solve WRT AI than making burgers and sweeping the floor. We just dismissed them because they are boring or commonly found skills in humans, not because they are simple to perform.
Perhaps AI has been too "geek-centric" in its targets and speculation.
Re:Reproducing robots.... (Score:1)
I do not necessarily agree. The lack of social intelligence in geeks is most of the time (at least partly) compensated by their reasoning skills (IQ). So they actually think about the logical relationships between the social action-reaction responses that people with higher emotional intelligence take for granted.
Never met anybody you could see thinking about how to respond to something?
This actually would make it a bonus for geeks when implementing these social skills.
OTOH, it would make for interesting 'geek' robots (and the question of how to distinguish between them...)
Re:Reproducing robots.... (Score:2)
Grab.
Go to Wal-Mart (Score:1)
They have tons of these!!! Look at the checkouts!
Re:The complex behavior we see (Score:2)
Re:The complex behavior we see (Score:2)
Re:The complex behavior we see (Score:2)
Re:The complex behavior we see (Score:1, Interesting)
Re:The complex behavior we see (Score:1)
Think about what would happen if you could upload your consciousness into a machine with a synthetic brain (modeled after a human brain). It would be a copy of you, right? The "synthetic you" would have your memories, and it would be confused because it would think that it was you. Make sense?
Now what if you turned it off? Does it die? Let's turn it back on and find out!
Nope, it didn't die. Apparently it was just asleep, because when we turned it back on, it just loaded its memories like a computer booting an operating system, said "good morning," and quickly got back to thinking that it was you. It can live forever and go on forming new memories of its environment and our interaction with it, though it would probably need some counseling. Post-humanism might be rough for some of the first uploads.
Now think about this: what if our bodies are just machines, and our brain is just a computer? Every morning, we wake up and remember who we are based on the memories that we have. Our memories are all consistent with our identity. If we could swap experiences by trading memory data files, we would probably end up with multiple personality disorder.
Maybe when you go to sleep tonight, you effectively die (in terms of consciousness) and then your machine wakes up tomorrow morning and assembles its sense of self based on existing memory data. There's no reason to even think of it as "you"...it's "tomorrow's you", and you're "today's you". Sorry if you don't like the existential stuff, but bear with me...
Well, what the New Scientist said was that it doesn't work like that...morning-to-morning...it's actually more like a moment-to-moment thing. Stimulus-Response...just "like an animal". The only thing that connects one moment to the next to create the illusion of the mind is that we keep piling memories, one on top of the next. But in a split second, you could lose touch with your memories and become a complete vegetable. This is the tragedy of Alzheimer's, and why insensitive people use the term "vegetable" when describing these patients.
Animals are intelligent in the same way that we are. We can easily characterize the way they think as Stimulus-Response, and nothing more, because we don't really care what "the meaning of life" is for a dog or a worm. Few of us are detached enough to be able to look upon ourselves in the same way...we're far too invested in the idea that our life has meaning and that we're special. We're more complex than a worm, dog, or chimp, but it's all just a matter of degree. Even plants react to their environment and respond with appropriate survival behavior. Human beings are just highly sophisticated organic life forms, and suggesting that there is anything that sets us apart categorically is species-centrism...pure arrogance.
Ignorance is bliss. Long live religious theories of explanation.
Re:The complex behavior we see (Score:1)
Vehicle 14? (Score:5, Funny)
So Vehicle 14 is two pages of text, plus 8192 pages for 8192 diagrams?
Intelligence is not a Process (Score:5, Insightful)
I'm going to disagree and say that a process is a process, and intelligence is something different.
Natural Selection is an elegant process and can (for lack of a better word) craft some exquisitely designed things. Trees, eagles, mosquitoes, and even humans are all engineering marvels created in the forge of Natural Selection. But there is no intelligence behind it.
If Natural Selection were intelligent then the dinosaurs would not be extinct, nor would the myriad complex and promising creatures of the Ediacaran fauna. Intelligent design would not waste such potential sources of design diversity.
Even crystals are beautifully "designed". They are pretty to look at, serve useful functions, and can be highly prized as art or jewelry. But the crystallization process is merely a result of natural chemical forces in action. No intelligence behind that, or natural selection either.
If the reviewer wants to suggest that Braitenberg is implying that "God is in the details," he can. But a process is a process, and chance is not intelligent design.
Re:Intelligence is not a Process (Score:5, Informative)
Re:Intelligence is not a Process (Score:2, Insightful)
Consider a parrot that wants a cracker. Every time it wants a cracker it says "Polly wants a cracker," at which time a human feeds it a cracker as a reward. Natural selection gave the parrot wings to fly and eat berries from trees. That was a lowest-common-denominator solution, i.e. berry on tree -> parrot must eat -> give parrot wings. However, the parrot is also intelligent and understands that crackers don't grow on trees and there is NO way it can get a cracker without pleasing its human keeper. So in order to get a cracker it must please its human keeper, and to do that it must reproduce a set of sounds accurately.
So intelligence is: cracker with human -> learn sound to please human -> reproduce sound when human in room with cracker -> parrot gets cracker.
While natural selection is: cracker with human -> fly to human and attack human to get cracker -> human too large -> get some other food item.
Crude analogy, but I think I made my point.
Re:Intelligence is not a Process (Score:1)
Re:Intelligence is not a Process (Score:1)
It doesn't "provide" solutions. It eliminates the weak. For example, say a disease kills off 98% of a species, leaving the 2% who were immune. Now 100% of the species is immune. It's cause and effect. If none were immune, the species would be extinct. If the immunity were on purpose, then the whole species would have been immune before the outbreak.
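The filtering is trivial to demonstrate; a toy sketch with made-up numbers:

```python
import random

random.seed(1)
# A population where 2% happen to carry an immunity gene.
population = [{"immune": random.random() < 0.02} for _ in range(10000)]

# The epidemic is just a filter: nothing is "provided",
# the non-immune are simply removed.
survivors = [p for p in population if p["immune"]]

print(len(survivors))                       # roughly 200
print(all(p["immune"] for p in survivors))  # True: the species is now 100% immune
```

No foresight anywhere in that code, yet afterward the population looks "designed" for the disease.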
Re:Intelligence is not a Process (Score:1)
Re:Intelligence is not a Process (Score:2, Insightful)
I am unimpressed when people say what intelligence is not. To be saying anything about intelligence, you must also say what it is. All you can offer is that intelligence is "something different." You have said nothing.
Maybe intelligence is mostly our perception. Like opinions about art, you know it when you see it, but others may not agree with you.
You say that if natural selection were intelligent, the dinosaurs would still be alive. Well, consider a different perspective. If the dinosaurs were still alive, mammals would still be rodents hunted down by reptilian carnivores, humans would never have evolved, and there would be no "intelligent" species in existence. From that standpoint, evolution brilliantly managed the development of an intelligent species on the planet by making the dinosaurs vulnerable.
Most people think that for a behavior to be intelligent, it must come from a conscious entity. Maybe. Maybe not. Braitenberg shows that considerable intelligence in behavior is possible without a unitary consciousness.
Which brings up the topic of consciousness, but that's a topic for another day.
Re:Intelligence is not a Process (Score:2)
Who needs humans? Mammals at that time were probably not any smarter than reptiles. If there were pressure to grow bigger brains, then perhaps dinos would have. There is evidence that hunting dinos had larger brains than plant eaters, probably because of the complex strategies involved in hunting. Some speculate that some dinos evolved wolf-like "gang" hunting strategies to reduce the total risks. Complex social behavior seemed to be in the works.
Some dinos even had warm blood according to some studies.
A smart reptile is not out of the question. I think you are being mammal-centric here. Tits are not everything..........just a nice bonus.
Re:Intelligence is not a Process (Score:1)
The original post implies that the extinction of dinosaurs was a failure of the imperative to survive. Hence, natural selection appears to be UNintelligent. It was stupid enough to let the dinosaurs die, after all.
But, if you believe that dinosaurs had to die out in order that mammals evolve intelligence, then natural selection seems intelligent. Extinction of the dinosaurs let mammals expand, leading to (trumpet blare) humanity. This perspective takes a longer-term view. Not that I believe this, I'm just illustrating another viewpoint.
Cynics might think that humanity evolved because natural selection really IS stupid. I'm not sure they're wrong.
Personally, I don't think the survival of any single species proves anything about the intelligence of natural selection.
Besides, if the meteor theory is correct, natural selection didn't kill the dinosaurs -- a meteor did. Natural selection, even if it is an intelligent process, does not promise to protect a species against every possible catastrophe. Even intelligent people get injured or die through no fault of their own. Intelligence does not necessarily make you, or your species, safe from harm. S_t happens.
Intelligence, as we perceive it, is partly determined by our personal biases. To some people, natural selection is an unintelligent process and intelligence is accidental. To others, natural selection shows evidence of a non-divine intelligence (laws of physics are beautiful and intelligent, but unconscious). To others, the results of natural selection prove that a conscious omniscient intelligence is guiding fate.
Some people believe that intelligence imbues homeopathic water, religious icons, and tobacco executives. I don't, but that's my bias.
When looking for evidence of intelligence, what you already believe is probably what you'll find.
Re:Intelligence is not a Process (Score:2)
Higher intelligence involves some kind of planning or "modeling". IOW, something is planned out in advance using visual or symbolic representations before the physical implementation is tried.
Natural selection fails this test because it does not "test" something before implementation. It tries it directly and lets nature correct it.
Note that some of the worst programmers probably fail this definition because they just hack at the code until it works. I HATE that kind of code. Those programmers should be summarily fired. However, nobody fires them, because they actually get their organic shit to work and managers don't know the difference, even when they end up absorbing the maintenance penalty down the road.
Good design/planning has actually gotten me laid off because my stuff was too easy to maintain and I ended up with too much spare time. Maybe I should try the organic approach, it seems like job security. Like somebody else once said, the system favors swamp-guides (and thus swamp-makers), not true engineers.
I swear some of the idiot programmers out there are using genetic algorithms to derive their code. If not, then their result is indistinguishable from something from a genetic algorithm (like Koza's LISP generators).
Their hacky approach is almost identical to GA's:
while code does not work as intended
mutate some code
cross-breed some code
end while
To be fair, there probably is a little bit of planning there, maybe say 5% planning and 95% hack-and-sack. Of course there is a continuum among developers of which percent is planning and which percent is organic. Even I experiment sometimes with unfamiliar stuff. Organic has its place, but if overused then please put up a big warning sign, such as a red biohazard symbol (the Klingon-looking one), next to your code.
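For the curious, that hack-and-sack loop really is a (degenerate) genetic algorithm. Here's a toy version against a trivial "one-max" fitness function (my own illustration, nothing to do with Koza's actual LISP generators):

```python
import random

random.seed(0)
GENOME_LEN = 20

def fitness(bits):
    # Toy "one-max" problem: fitness is just the number of 1 bits.
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability.
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Random initial population of 30 candidate "programs".
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(30)]
best_start = max(fitness(p) for p in pop)

for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # elitism: keep the fittest, discard the rest
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best_end = max(fitness(p) for p in pop)
# Elitism guarantees the best candidate never regresses; in practice it
# climbs to (or near) the maximum of 20 within a few dozen generations.
```

Which is also why hacking at code "works": the nonsense gets thrown away and the accidental improvements stick.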
Re:Intelligence is not a Process (Score:2)
Why not? This sounds like intelligence requires some kind of morality?
If a process is just following rules and laws (of physics, or your environment, or whatever), then playing chess, for example, would be just a process too, by your definition. Similar to natural selection, no intelligence behind it.
You don't explain much about the difference between process and intelligence, at least in terms of what intelligence would do or be in addition to process (only what it would not).
So where is the extra in intelligence, on top of or opposed to process, then?
Re:Intelligence is not a Process (Score:2)
Perhaps he's suggesting that something intelligent enough to build, from the ground up, stuff like dinosaurs would also know why they would die out when humans or other existing beings didn't, and would have designed the dinosaurs accordingly, so that they'd have survived.
Re:Intelligence is not a Process (Score:3, Insightful)
In regard to chess, masters of the game don't even bring up rules, such as where and how pieces can move, into primary memory. Attention is limited (thus the controversy over driving a car while talking on a cellphone), so the more rules/strategies/tricks that become automatic, the more moves a chess player can think into the future. IANAE (correct acronym?), but talk to any cognitive psychologist first if you disagree.
So you ask, how does automaticity show intelligence vs. process? It skips the processing part and leads to, for the most part, instant input/output (I'd love to see someone hook a monitor up to where a GPU should be and expect a sensible image). If there were intelligence behind natural selection, then one day, when something happens, like a drought, people or whatever is alive at that time would all become perfectly accustomed to said event, because anyone/thing not perfectly accustomed would instantly die before reproducing.
Re:Intelligence is not a Process (Score:1, Interesting)
Re: (Score:1)
Re:Intelligence is not a Process (Score:2)
Regarding the Ediacaran fauna, they were just worms and blobs according to my search. I don't see what you find so special about such fossils.
As far as what an "intelligent designer" would do, well that is a highly speculative thing. Who knows what their alleged goals would be.
If there is a creator, it does appear that at least they *wanted* the history of life to look as if gradual evolution took place, such that most creature designs appear to be tree-based variations of prior organisms. The genetic patterns tend to fit the physical patterns of this tree of life.
The apparent evolution of sea mammals is a good case of this. Dolphins and sharks are quite different physiologically, yet share a roughly similar habitat, niche, and food.
There is little or no evidence that creatures were custom-made FROM SCRATCH for a particular niche; rather, they borrow largely from existing (at the time) plans.
This would imply either an imperfect, absentminded creator(s), a creator that uses evolution for at least a good part of the process, or a creator that wanted to deceive us. (Some religions suggest that the devil played around with life to fool scientists, I would note. This would fit into the last category.)
Re:Intelligence is not a Process (Score:1)
Braitenberg describes natural selection as "a source of intelligence". To the reviewer, that meant that Natural Selection was itself intelligent. I don't believe that was what the author was saying.
He is asserting that the force of evolution creates better designs... designs which select for intelligence
As to how we define Intelligence... well, the word is defined as
1. a. The capacity to acquire and apply knowledge.
b. The faculty of thought and reason.
If you want to redefine Intelligence as "the process whereby complexity evolves from simplicity, whether I can understand that process or not."... then go right ahead.
Just be prepared for difficulties in communicating with other "intelligent" beings who already know what the word means.
Re:Intelligence is not a Process (Score:2)
But anyway, if n.s. were intelligent, what would its goals be?
Intelligence is a process (Score:1)
Re:Intelligence is not a Process (Score:2)
> dinosaurs would not be extinct, nor would the
> myriad complex and promising creatures of the
> Ediacaran fauna. Intelligent design would not waste
> such potential sources of design diversity.
You assume too much about the nature of an intelligent system.
Anyway, there are many intelligent systems covering the planet that are wasting our design diversity. We call them "humans."
Re:Intelligence is not a Process (Score:1)
If Natural Selection were intelligent then the dinosaurs would not be extinct, nor would the myriad complex and promising creatures of the Ediacaran fauna. Intelligent design would not waste such potential sources of design diversity.
You could just as easily say, that if the designers of programming languages were intelligent, COBOL would not be going extinct. (COBOL is probably not the best choice, since it is still around, but I imagine the idea is clear)
Re:Intelligence is not a Process (Score:1)
Eh? Far be it from me to claim the intellectual powers of natural selection, but I know that when one of my projects fails due to circumstances beyond my control (hard disk crash, project manager decides to use XML, meteorite strike), I can almost always think of a new and better way of approaching the original problem. So, if I were redesigning after a small Cretaceous mishap, I'd want to come up with something better than a dinosaur too.
Vehicles is a classic (Score:4, Insightful)
I think breakthroughs in AI will probably come from people who are familiar with physiology (especially biophysics), or some new branch of mathematics. So many theoreticians from cognitive science, computer science, psychology, and psychiatry ignore physiology. I can't blame them, I suppose -- the field is unbelievably complex.
In any case, "Vehicles" should be required reading for anyone aspiring to have a degree in systems, human or otherwise (and that includes
Re:Vehicles is a classic (Score:3, Informative)
So many theoreticians from cognitive science
No they don't, the others for the most part do. But you obviously don't understand the scope of CogSci.
Re:Vehicles is a classic (Score:1)
I too read Vehicles a long time ago and found it inspirational. My work on steering behaviors [red3d.com] was strongly influenced by Braitenberg's book.
relevance trumps novelty? (Score:5, Interesting)
Braitenberg has written a 152-page book describing robots that he has thought about creating, using minimalist language and half-explanations that lack necessary detail. As you stated, the first several chapters are only a few pages long and can be understood by a 12-year-old, and the systems "described" in the later chapters may not be possible to implement.
Brooks, by comparison, has created REAL robots which do REAL work and he (and his graduate students) publish detailed papers which explain their methodology, technique and results in detail.
Compared to Braitenberg's book of "what if..." ideas, I suppose Brooks' approach is novel.
Re:relevance trumps novelty? (Score:1)
Re:relevance trumps novelty? (Score:1)
I somewhat agree that it takes more work to come up with something solid rather than something theoretical, unless you are Braitenberg. He did a load of background research in the lab to come up with his ideas, and in fact went as far as implementing them in robotics (see the book below):
http://www.amazon.com/exec/obidos/tg/detail/-/026
I think Brooks is a bit showier and seeks publicity more than someone like Braitenberg. I've used Brooks' subsumption architecture many times, and find it a really bad model for doing things from a programming standpoint (I programmed the Behavior API module at www.lejos.org). All behaviors interact with the motors directly, rather than working with a higher level of abstraction, which makes it quite limited. They should be able to build on each other and help each other. For example, if I have a behavior to shoot at objects and one to run away, it is difficult with his model to have it run away AND shoot at the same time without programming the routine twice. This is because when a behavior takes over, presumably all other behaviors are supposed to be shut off.
Anyway, in my mind Braitenberg's theories carry more weight and will likely stand the test of time, whereas subsumption has been used and discarded.
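To illustrate the complaint: a bare-bones arbitrator in the subsumption style (a generic Python sketch of my own, not the actual leJOS Behavior API) shows how winner-take-all control locks out every other behavior:

```python
class Behavior:
    """Minimal subsumption-style behavior: a trigger condition
    plus an action that commands the motors directly."""
    def __init__(self, name, wants_control, action):
        self.name = name
        self.wants_control = wants_control
        self.action = action

def arbitrate(behaviors, world):
    """Winner-take-all arbitration: the highest-priority behavior
    that wants control runs, and every other behavior is suppressed
    for this cycle."""
    for b in behaviors:  # list ordered highest priority first
        if b.wants_control(world):
            return b.action(world)
    return None

behaviors = [
    Behavior("run_away", lambda w: w["threat"], lambda w: "motors: flee"),
    Behavior("shoot",    lambda w: w["target"], lambda w: "motors: aim and fire"),
    Behavior("wander",   lambda w: True,        lambda w: "motors: wander"),
]

# With both a threat and a target present, only the flee behavior
# ever reaches the motors:
print(arbitrate(behaviors, {"threat": True, "target": True}))  # motors: flee
```

Since arbitrate() returns as soon as the first behavior claims control, "shoot" never runs while "run_away" is active; getting both at once means either merging them into one behavior (duplicating code) or adding the higher abstraction layer described above.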
In other news... (Score:2)
Re:In other news... (Score:2)
in 2 years, the Sirius Cybernetics Corporation will release their first elevators with GPP (Genuine People Personality).
Up please!
Two questions (Score:3, Interesting)
Two: What is being said here that Simon hadn't already said in his essays on the complexity of behaviour well before this book was published? In other words, even in 1986, is this really new?
Although it is interesting that he describes a neural network here: it is clear to me that the reason the description is so shrouded is that prior to 1989, ANNs were taboo in the literature (Minsky having ripped perceptrons to pieces back in the 60s).
Re:Two questions (Score:2, Interesting)
Nope, the author is suggesting that complex behaviors can arise from complex interconnections of simple, identical components. He argues from his vast knowledge of neuroanatomy, and doesn't have any theoretical axe to grind.
Minsky may have ripped perceptrons to pieces, but that has no bearing on the elegance or correctness of Braitenberg's exposition. Perceptrons were toys.
Re:Two questions (Score:2)
Um, his point was that from the 1960s, when Minsky trashed Perceptrons, until 1989, when AI research 'rediscovered' the neural net, all neural nets were looked down upon as perceptrons or their equivalents, hence toys. Therefore, although this book includes descriptions of Neural nets, it was 1985 and he wanted to cloak the description, so as not to offend the AI priesthood, book reviewers, etc.
I'm not sure, personally, that I buy this as an explanation: after all, the book's whole approach was so unorthodox that I'm not sure mollifying the academics was even on the radar screen.
Re:Two questions (Score:1)
One: there's no detail in this review. It sounds like the author is suggesting behaviourism (cf. Skinner) as a theory of cognition, an idea that was discarded before I was born. Someone give me some details and prove me wrong.
Braitenberg's vehicles depart from behaviorism quite nicely; in fact, it's a topic he addresses directly. Chapter 10, "Getting Ideas," discusses how, with the architecture built up in the preceding chapters, the vehicles break free from simple stimulus-response behavior and develop ideas that can remain active without direct support from the environment.
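That "active without direct support from the environment" point can be illustrated with a toy self-exciting unit. This sketch is my own, not Braitenberg's actual wiring: once the unit's activation crosses a threshold, positive feedback sustains it after the stimulus is gone, whereas without feedback the activation simply decays.

```python
# A single unit with optional self-excitation. With feedback, activation
# persists after the stimulus ends (a stimulus-free "idea"); without
# feedback, it decays back toward zero like a pure stimulus-response unit.

def run(stimulus_steps, total_steps, feedback=1.2, decay=0.5, threshold=0.5):
    a = 0.0
    history = []
    for t in range(total_steps):
        stimulus = 1.0 if t < stimulus_steps else 0.0
        # self-feedback keeps activation alive once it is above threshold
        drive = stimulus + (feedback * a if a > threshold else 0.0)
        a = min(1.0, decay * a + (1 - decay) * drive)
        history.append(a)
    return history

with_feedback = run(stimulus_steps=5, total_steps=20)
no_feedback = run(stimulus_steps=5, total_steps=20, feedback=0.0)
# with_feedback stays saturated long after the stimulus ends at t=5;
# no_feedback decays away almost immediately.
```

It is a cartoon, of course, but it marks the line the comment is drawing: a behaviorist unit is silent without input, while even one recurrent connection gives the system state of its own.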
Re:Two questions (Score:1)
Brooks and Braitenberg (Score:3, Informative)
> probably deserves more recognition for this
> train of thought than the much more publicized Brooks.
Brooks [mit.edu] teaches the Embodied Intelligence [mit.edu] course at MIT (which I took two years ago). One of the first things the course covers is Braitenberg's creatures (see the syllabus [mit.edu]). So while Brooks may certainly get more air-time than Braitenberg, he certainly gives credit where credit is due... but then, remember that Braitenberg focused on astoundingly simple circuits that lead to interesting-looking behavior, whereas Brooks has used his approach to build working autonomous robots...
Get it used (Score:1)
Self Reproducing Robots Do exist. (Score:3, Informative)
Robot learns to reproduce
http://news.bbc.co.uk/1/hi/sci/tech/90
Re:Self Reproducing Robots Do exist. (Score:2)
Gosh, who says robotics research isn't exciting?
Re:Self Reproducing Robots Do exist. (Score:1)
Content Free Book (Score:4, Interesting)
Imagine a philosopher with no practical experience of anything vaguely robotic writing a book on robotics. This is what he would write. Braitenberg talks about vague concepts like memory, foresight, logic, and trains of thought, but these discussions are completely sophomoric, jumping from systems with one or two neurons to imaginary systems with the above properties. I don't need a book to point out that an intelligent machine needs foresight, and I don't need a book to point out that a simple neuronal system with persistence might have something to do with memory. Unless you're going to say something about the details between neurons and full-blown brains, you're just armchair philosophizing, and any sophomore can do that without the help of a book. Maybe if he had written the book in the forties it would have been interesting, but by the eighties every science fiction writer and his dog had written about these subjects in far more detail.
But I do love the pictures by Ladina Ribi and Claudia-Martin-Schubert. They are quite special.
Re:Content Free Book (Score:2)
In 1987 they had a Scientific American article on this... was that the old "Computer Recreations" column? And what I took away from it was how many surprisingly nuanced behaviors can be caused by such amazingly simple wiring. That's not something I see "every sci-fi writer and his dog" covering in any kind of detail, bub. You're right that the question then becomes "but how does it scale?", and I don't know how well it covers that, but I think you're being a little too dismissive of the work as a whole.
analogy (Score:2)
Bye Bye Blow-Up Dolls (Score:1, Funny)
OB Simpsons Reference (Score:1)
Ralph Wiggum: My cat is a Robot.
Not "Vehicles" again (Score:4, Insightful)
The first "behavior-based robots" along those lines go back to 1948, with the work of W. Grey Walter. [uwe.ac.uk] Those little wheeled robots did much of what Braitenberg talks about with his earlier models. And, since Walter actually built them, he discovered behaviors that weren't obvious from just thinking about it. If you're into this at all, read everything you can find about Walter's "Turtles". They were shown, working, in a museum for a year in the 1940s, and modern replicas have been built. Walter was decades ahead of his time.
There was considerable thinking along those lines in the 1950s, most of which didn't go anywhere. I have some old AI books that contain similar speculations, although they're far less readable than "Vehicles".
The basic problem with model-less behavior-based robotics, as Brooks and his followers have discovered, is that the ceiling is low. You can get some simple insect-like behaviors without much trouble, but then progress stalls. That's why Brooks' best work was back in the 1980s. The robot insects were great; the humanoid torso Cog is an embarrassment. This is typical of AI: somebody has a good idea and then thinks that strong AI is right around the corner.
If you have a Lego Mindstorms set, you can build many of the "Vehicles". They're kind of cute, but don't do much.
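For readers without a Mindstorms set, the early Vehicles are also easy to simulate. The following is a minimal toy sketch of my own (sensor geometry, gains, and the light-intensity model are all made-up parameters, not from the book) showing Vehicle 2's two wirings: uncrossed (2a) makes the vehicle veer away from a light ("fear"), crossed (2b) makes it home in ("aggression").

```python
import math

# Toy simulation of Braitenberg's Vehicle 2: two light sensors, two motors.
# 2a (uncrossed): left sensor -> left motor  => turns AWAY from light ("fear")
# 2b (crossed):   left sensor -> right motor => turns TOWARD light ("aggression")

def step(x, y, heading, light, crossed, dt=0.1):
    offsets = (0.3, -0.3)  # sensors mounted slightly left/right of heading

    def intensity(offset):
        sx = x + math.cos(heading + offset)
        sy = y + math.sin(heading + offset)
        d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
        return 1.0 / (1.0 + d2)  # brighter when closer to the light

    left, right = intensity(offsets[0]), intensity(offsets[1])
    if crossed:
        left_motor, right_motor = right, left   # Vehicle 2b
    else:
        left_motor, right_motor = left, right   # Vehicle 2a
    speed = (left_motor + right_motor) / 2
    turn = right_motor - left_motor             # differential drive
    heading += turn * dt * 10
    x += math.cos(heading) * speed * dt
    y += math.sin(heading) * speed * dt
    return x, y, heading

def distance_after(crossed, steps=300):
    x, y, heading = 0.0, 0.0, 0.0
    light = (5.0, 2.0)
    for _ in range(steps):
        x, y, heading = step(x, y, heading, light, crossed)
    return math.hypot(x - light[0], y - light[1])

print(distance_after(crossed=True))   # 2b ends up nearer the light...
print(distance_after(crossed=False))  # ...than 2a, which steers away
```

The only difference between "fear" and "aggression" is which sensor feeds which motor, which is precisely Braitenberg's point about how little wiring the observer-ascribed emotion requires.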
There's always prior work. (Score:1)
In other words, Braitenberg was gazumped by some electrical hobbyist over 60 years before he wrote about Vehicle 2.
And of course you can always point to Descartes as the originator of the view of animals as simple machines. Or probably to someone even earlier than that.
Loeb, J. (1918). Forced Movements, Tropisms, and Animal Conduct. -- widely available in several reprints.
Software / Review Links (Score:1, Informative)
http://people.cs.uchicago.edu/~wiseman/vehicles
or try out some vehicles yourself:
http://www.lcdf.org/~eddietwo/xbraiten
http://www.ifi.unizh.ch/groups/ailab/people
And here are links to notes and a review I like:
http://www.bcp.psych.ualberta.ca/~mike/Pea
http://www.santafe.edu/~shalizi/reviews/vehicle
One key insight (Score:3, Insightful)
This is a book I enjoyed greatly, and one that gave me some insight into many problems, most notably debugging software...
modelling vehicles in BEAM (Score:1)
real life vehicles (Score:1)
Phenomenology (Score:3, Interesting)
My take on all this stuff is that it's a contrast to Kant. For Kant, the world around us is a bunch of unknowable abstract objects, which we 'know' through our flawed senses. ("Ah, the abstract 'pen' probably exists, but I can only know what my imperfect senses tell me about it.") This is more like the robotic systems that create an abstract construct of the environment and then internally work with that abstracted construct.
As I read Heidegger, he's saying that, yeah, Kant has a point, but it's not very useful in day to day life. When you walk through a door, you don't think about the doorknob, you just turn it, open the door and walk through. It's all what he calls "taken for granted." You don't stand there thinking "Hmm, maybe my perception of the doorknob is flawed, and there is no knob. I can never be sure" (well, some of us have thought thoughts like that, but only after consuming certain molecules).
Essentially, Heidegger's take is much more practical: how do we do the useful everyday stuff? This is a lot more like robotic systems that are based on more reflexive responses.
Yes, Heidegger deals with lots and lots of other stuff ("Language is the house of being," "Death fractures the taken for granted," and the scary stuff about how when you are speaking old German you are more truly in touch with existence!). But the underpinnings of phenomenology are potentially really useful for understanding the "nuts and bolts" of interacting with the world. Oh, and he's the "Velvet Underground" of twentieth-century thought (Sartre, Derrida, etc., cite him as a critical influence).
Re:Phenomenology (Score:1)
Heh (Score:1)
In a related story, Valentino Braitenberg has been elected to the Berkeley City Council.
Great fun, but hard to make it work (Score:1)
(I still love the book though; it's really worth reading, just for the points it makes about what fear, love, and hate are).
BYO-bot (Score:2)
For kit hardware that implements the ideas of Braitenberg et al., check out the BYO-bot [kipr.org]. I've had one for several years, and they make a great classroom demo.
MindRover (Score:1)