Flesh and Machines: How Robots Will Change Us
author | Rod Brooks |
pages | 260 |
publisher | Pantheon Books |
rating | 8 |
reviewer | Peter Wayner |
ISBN | 0375420797 |
summary | A charming look at an unconventional (and powerful) way to think about and design robots. |
In a way, robots are portraits of humans. Machines are just machines and assembly lines are just assembly lines. The buckets of bolts don't become robots until they start to take on some of the characteristics and a few of the jobs of humans. A drill for tightening a bolt may replace a biceps, but it's just a motor until it's on the end of a fancy mechanical arm that positions it automatically. Then it's a robot ready for a call from central casting.
Defining just what is and is not a robot is not an easy job for technologists, because the replicants and androids are a touchstone and a benchmark for measuring our progress toward the future. It's 2002 and everyone is asking: Where's mad HAL steering a spacecraft to oblivion? Or more importantly: Why am I still vacuuming the floors and mowing the lawn by myself?
If you are asking these questions, then you might want to read the answers Rod Brooks, the director of MIT's Artificial Intelligence Laboratory, offers in his charming book, Flesh and Machines: How Robots Will Change Us. The book is half a thoughtful biography of the various robots created by his graduate students and half a philosophical explanation of what to expect from the gradual emergence of robot butlers.
The biographical part is probably the most enjoyable. He and his students have produced more than a dozen memorable robots who've crawled, rolled and paced their way around MIT. One searched for Coke cans to recycle, one tried to give tours to visitors, and another just tried to hold a conversation. Brooks spends time outlining how and why each machine came into being. The successes and, more importantly, the failures become the basis for creating a new benchmark for what machines can and can't do.
An ideal version of this book would include a DVD or a video cassette with pictures of the robots in action, because the movement is surprisingly lifelike. Brooks is something of a celebrity because a filmmaker named Errol Morris made a droll, deadpan documentary that cut between four eccentric geniuses talking about their work. One guy sculpted topiary, one tamed lions, one studied naked mole rats, and the fourth was Rod Brooks, the man who made robots. Brooks coined the title for the film, Fast, Cheap and Out of Control, a phrase he uses to describe his philosophy for creating robots. The movie tried to suss out the essence of genius, and it makes a perfect counterpoint to the book by providing some visual evidence of Brooks' success.
One of the stars of the movie was a six-legged robot called Genghis, a collection of high-torque RC airplane servo motors that Brooks feels is the best or most fully realized embodiment of this fast and cheap approach. The robot marches along with a surprisingly lifelike gait, chasing after the right kind of radiation to tickle the IR and pyro-electric sensors mounted on whiskers. If you've seen the film, it's hard to forget that gait.
Brooks says that the secret to the success of Genghis is that there is no secret. The book's appendix provides an essential exploration of the design, which is short and very simple. The soul of the machine has 57 neuron-like subroutines, or "augmented finite state machines" in academic speak. For instance, one of the AFSMs responsible for balance constantly checks the force on a motor. If it is less than 7, the AFSM does nothing, and if it is greater than 11, the AFSM reduces the force by three. That doesn't seem like much intelligence, be it artificial or real, but 57 neuron-like subroutines like this are all it takes to create a fairly good imitation of a cockroach.
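For the programmers in the audience, here is roughly what one of those balance rules amounts to, sketched in Python with a made-up motor interface (the real AFSMs ran on Genghis' own tiny onboard hardware, and the handling of the middle of the band is my guess, since the book only spells out the two thresholds):

```python
# A minimal sketch of the balance rule described above, with a stub motor so it
# runs stand-alone. The interface names are invented for illustration.
class StubMotor:
    def __init__(self, force):
        self.force = force
    def read_force(self):
        return self.force
    def set_force(self, value):
        self.force = value

class BalanceAFSM:
    """Keep the load on one leg motor inside a fixed band."""
    LOW, HIGH, STEP = 7, 11, 3
    def step(self, motor):
        force = motor.read_force()
        if force > self.HIGH:          # overloaded: ease off by a fixed amount
            motor.set_force(force - self.STEP)
        elif force < self.LOW:         # per the book, the AFSM does nothing here
            pass

motor, afsm = StubMotor(force=15), BalanceAFSM()
for _ in range(3):
    afsm.step(motor)
    print(motor.force)                 # 12, 9, 9 -- settles inside the band
```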
Brooks calls this a "subsumption architecture" and the book is most successful describing the days that he spent with his graduate students building robots and seeing what the architecture and a handful of AFSMs could do. He half mocks the roboticists who load up their machines with big computers trying to compute complex models of the world and all that is in it. In his eyes, the lumbering old-school machines just move a few inches and then devote a gazillion cycles to creating a detailed, digital description of every plant, brick or wayward child in the field of view. After a few more gazillion cycles, the machine chooses a path and moves a few more inches. Even when they find their way, time passes them by.
There are no complex control mechanisms sucking down cycles on the machines from Brooks' lab, which is the source of the claim that they're "out of control." It's just AFSMs wired together. One of the robots fakes human interaction by tracking fast motion and flesh-colored pixels. Brooks marvels at how a few simple rules can produce a machine that is remarkably lifelike. If you're not sure, there are video tapes of lab visitors holding conversations with the machine, which apparently takes part in the conversation with the patient interest of a well-bred host. As if by magic, the AFSMs create enough human-like movement that the visitor in the tape begins treating the robot like a human!
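The wiring is the other half of the trick. A crude sketch of the subsumption idea, with behaviors and sensor names invented for illustration rather than lifted from Brooks' code: each layer minds its own business, and a higher layer simply overrides the ones below it when it has something to say.

```python
# Independent behaviours wired in priority layers; the first layer with an
# opinion wins. All names here are made up for illustration.
def wander(sensors):
    return "turn-a-little-and-go"              # lowest layer: always has an output

def avoid(sensors):
    if sensors.get("bumper"):
        return "back-up-and-turn"              # subsumes wander when triggered
    return None

def follow_face(sensors):
    if sensors.get("flesh_colored_motion"):
        return "orient-toward-motion"          # subsumes everything below it
    return None

LAYERS = [follow_face, avoid, wander]          # highest priority first

def act(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(act({"bumper": True}))                                   # back-up-and-turn
print(act({"flesh_colored_motion": True, "bumper": True}))     # orient-toward-motion
```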
If you're still not sure, you might buy a "My Real Baby" doll designed by Brooks with the help of the adept mechanical geniuses in Taiwan. The story of taking a highbrow concept from MIT to the local toy store is a great part of the book. The so-called toy is filled with AFSMs that tell it when to gurgle, when to pout, when to sleep, and when to demand sustenance. Alas, the toy makers tell Brooks that the market can't stomach so much innovation. One new thing at a time.
So are these machines truly successful simulacra? Are they infused with enough of the human condition to qualify as the science-fiction-grade robots or are they just cute parlor tricks? Some readers will probably point to the AFSMs and scoff. Seeing the code is like learning the secret to a magic trick.
Brooks, on the other hand, is sure that these machines are on the right track. In a sense, he makes it easier for his robots to catch up with humans by lowering the bar. On the back of the book, Brooks ladles out the schmaltz and proclaims, "We are machines, as are our spouses, our children and our dogs... I believe myself and my children all to be mere machines." That is, we're all just a slightly more involved collection of simple neurons that don't do much more than the balance mechanism of Genghis. You may think that you're deeply in love with the City of Florence, the ideal of democratic discourse, that raven-haired beauty three rows up, puppy dogs, or rainy nights cuddled under warm blankets, but according to the Brooks paradigm, you're just a bunch of AFSMs passing numbers back and forth.
If you think this extreme position means he's a few AFSMs short of a robot professor though, don't worry. Brooks backs away from this characterization when he takes on some of the bigger questions of what it means to be a human and what it means to be a machine. The latter part of the book focuses on what we can and can't do with artificial intelligence. He is very much a realist with the ability to admit what is working and what is failing. His machines definitely capture a spark, he notes, but they also fall short.
He notes with some chagrin that his robot lawnmower leaves behind tufts of uncut grass. Why? It uses a subsumption-like algorithm that doesn't bother creating a model of the yard. The robot just bounces around until the battery runs out. Eventually the laws of random chance mean that every blade should be snipped, but the batteries aren't strong enough to reach that point at infinity. A model might help prevent random lapses, but that still won't solve the problem. Alas, the machines themselves are limited by the lack of precision. One degree of error quickly turns into several feet by the other end of the yard. A robot wouldn't be able to follow a plan, even if it could compute one.
What's missing, Brooks decides, is some secret sauce he calls "the juice". Computation and AFSMs may work with cockroaches, but we need something more to get to the next level. Faster computers can do much more, but eventually we see through the mechanism. Genghis looks cool, but learning about the 57 AFSMs spoils the trick.
The standard criticism of Brooks' machines is that they don't scale. There is no superglue juice that can save a scaffolding built of toothpicks. The AFSM may produce good cockroaches, but that's just the beginning of the game. Humans are more than that. Eventually, the AFSMs become too unwieldy to be a stable programming paradigm. In fact, Brooks sort of agrees with this premise when he suggests that Genghis is his "most satisfying robot." It was also one of the first. The later models with more AFSMs just don't rank.
But humans and other living creatures don't scale either. We may be able to run 20 miles per hour, but only for 100 yards. We may be able to troll for flames on five bulletin boards, but eventually we get our pseudonyms confused. Limits are part of life and we only survive by forgiving them. To some extent, the lifelike qualities of his robots are direct results of the self-imposed limits of the AFSMs.
Your reaction to these machines will largely depend upon how many of the limits you are willing to forgive. Stern taskmasters may never be happy with a so-called robot, but a relaxed fellow traveller may ignore enough of the glitches to interface successfully. Some will see enough of themselves to be happy with the whirring gizmos as a portrait of humanity, and others may never find what they're looking for. That's just the nature of portraits. For me, this book is an excellent portrait of a research program and the collection of questions it tried to answer. You may look in the mirror and want something different, but it's worth taking a look at these machines.
Peter Wayner is the author of two books appearing this spring: the second edition of Disappearing Cryptography, a book about steganography, and Translucent Databases, a book about adding extra security to databases. You can purchase Flesh and Machines from Barnes & Noble. Want to see your own review here? Just read the book review guidelines, then use Slashdot's handy submission form.
Robots (Score:1)
Cost is WAAAAYY too high. (Score:5, Insightful)
Re:Cost is WAAAAYY too high. (Score:1)
It does rather depend on how useful the gadget is. I believe that luxury cars have always been priced near the price of a luxury car, but they seem to sell a few million of them per year.
I can certainly see some consumer robotics applications that people would pay that kind of money for. Even some that aren't sex-related.
A Washer/Dryer IS a robot.... (Score:3, Insightful)
Instead you drop your clothes and soap into a box and give some instructions (turn indicator knob). No human labor involved. Sounds like just as much of a robot as the other items mentioned above.
Re:A Washer/Dryer IS a robot.... (Score:3, Insightful)
A robot washer/dryer would grab my clothes from the hamper (we'll assume the hamper is on top of the robot; I won't require it to walk around the house), empty the pockets, sort the whites and coloreds, wash, dry, and fold. It would also remove from the process any clothes that I've indicated require decisions on my part, and keep the load balanced.
Only then will I call it a robot. Until then, it's just two tools sitting next to each other.
There are things out there I would almost call a robot. Some of the high-end copiers, the ones that can fold, staple, sort, etc. That's the cheapest thing I can think of that I would call a robot. And it still can't handle documents that start stapled.
In other words, the main difference between a robot and a simple tool is that a robot doesn't need you to hold its hand. You give it a task and it can do it without you needing to make sure everything is set up correctly every step of the way, just like a person. And if it can't handle something, it has to be able to realize that and stop. Otherwise it's a complicated hammer.
Re:A Washer/Dryer IS a robot.... (Score:3, Funny)
Basically, you're saying that if you can teach it to make and serve coffee you can marry it and call yourself lucky...:)
Re:Cost is WAAAAYY too high. (Score:1)
The "just plain" variety are all over the place: manufacturing, sewing, blending, cooking... something with a programmable motor is more or less a robot, no?
In French, a "Kitchen robot" is a variable speed multifunction blender... they only cost a few bucks.
AI is a whole other ball game...
...with sharp appendages and weaponry (Score:4, Funny)
Re:...with sharp appendages and weaponry (Score:2)
Technically, he's right. (Score:3, Insightful)
Oh, and humans run the single most complicated OS ever.
LV
Re:Technically, he's right. (Score:2)
I'll believe that when somebody can upgrade or replace it.
Re:Technically, he's right. (Score:3, Insightful)
That we can't yet remove it as a whole and take apart its raw code to rewrite portions directly doesn't mean it isn't true. We've just worked around our limitation (lack of source code) through interaction...
Re:Technically, he's right. (Score:1)
" Oh, and humans run the single most complicated OS ever.
And look what happens if you get a crash... Just go to your nearest mental hospital. And there's no rebooting.
Re:Technically, he's right. (Score:1)
Re:Technically, he's right. (Score:1)
While many of us might agree with this statement, it is in no way a proven fact. And there are lots of people out there who'll tell you that a human is more than just a body.
Re:Technically, he's right. (Score:2)
If it wasn't for nucleic acids, tiny machine-like molecules, we wouldn't be here.
Scientifically, all life on this planet started out competing to consume organic molecules. In turn, it developed some pretty cool tools such as chloroplasts, mitochondria and other machinery to deal with the environment around it.
We are machines, but we can study that fact.
Re:Technically, he's right. (Score:2)
All living things are subject to data encoded in their DNA. While two exact asexually reproduced humans may not think and feel the same, they would be the same nonetheless.
Thinking and feelings are assumed to be developed with experience. DNA provides no such experience.
Measuring who we are by what we can measure, I can safely say that we exist, and so do the complex molecular actions that define us.
Re:Technically, he's right. (Score:2)
A system is composed of multiple machines. A machine can be replaced, but only if it still fits within the context of the greater system.
Re:Technically, he's right. (Score:1)
I don't know what people you live with, but I think "bugs" in humans have been visible for quite some time now. Think of all of the phobias that people suffer from. Think about all of the times that you've said, "Why'd I do that, I should've known better." Just turn on your TV and watch the Jerry Springer show. I mean, come _on_, we're so full of bugs it's ridiculous.
Re:Technically, he's right. (Score:2)
"So... what does the thinking?"
"You're not understanding, are you? The brain does the thinking. The meat."
"Thinking meat! You're asking me to believe in thinking meat!"
"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you getting the picture?"
"Omigod. You're serious then. They're made out of meat." [setileague.org]
Robots in the future (Score:2)
the little cockroach and fly robots with tiny cameras that peek in on people.
The time to really worry is when these show up as radio shack kits in about 10 to 20 years.
No one get all paranoid now.
Re:Robots in the future (Score:1)
Re:Robots in the future (Score:3, Interesting)
Not so fast.
There are some cockroaches, you step on them, and all they do is get mad. You have to splat them with a hammer [aol.com]. Of course, you could always get some as pets [angelfire.com]. Never mind the ones in Florida that fly [mit.edu], imported from Asia [pestproducts.com].
Re:Robots in the future (Score:1)
It costs too much to build a house these days and most of it is not the raw materials... so I'm really looking forward to this!!
Re:Robots in the future (Score:1)
Would the surveillance drones in use in Afghanistan right now fall into this category? I haven't heard much about them as yet, but it seems at first glance that they'd fall into such a category.
You are Late. (Score:2)
Maybe not at the insect size range, but perhaps at the dog/dolphin/avian level.
Re:Robots in the future (Score:2)
Grab.
What would Roger Penrose say! (Score:1)
Re:What would Roger Penrose say! (Score:1)
Re:What would Roger Penrose say! (Score:3, Informative)
Re:What would Roger Penrose say! (Score:2)
I think humans are capable of something fundamentally impossible for deterministic computers, but at the same time I think that most of us barely use these faculties. Most of what we do is mundane, and perfectly possible to mimic on a computer. We may not be able to mimic conscious life, but we may be able to prove that most of us spend most of our lives in a zombified state.
Two things to emphasize: all Turing machines are equivalent; speed and intelligence are independent. If it is ever possible to produce consciousness on a deterministic computer, then it is possible on today's hardware. If you had a radio conversation with an intelligent alien who lived 100 light years away, it would take 200 years to get each response - that doesn't mean he is stupid. Similarly, if it is possible to mimic humanity on tomorrow's hardware, we should be able to do it, slowly, on today's.
Re:What would Roger Penrose say! (Score:3, Insightful)
Aah, the old paradox. It's based on a false premise though.
To climb a pole, the monkey must move. To move, it must displace molecules of one substance (say air) with that of another (say a monkey hair molecule). In other words, although movement appears to be constant, it is actually a series of discrete steps.
The monkey will reach the top of the pole when its next step cannot be broken down any further - ie. when it has only one molecule of another substance left to displace with its own.
Unless you're into nuclear monkey of course, where it could start splitting up the molecule, then the atoms underneath it and then have a crack at the sub-atomic particles beneath that...
Cheers,
Ian
Re:What would Roger Penrose say! (Score:1)
Re:What would Roger Penrose say! (Score:2)
You know, that's a hell of a lot easier than the way I learnt.
Cheers,
Ian
Re:What would Roger Penrose say! (Score:2)
If I'm following the person who replied to me correctly, then the paradox can be disproved like this:
Suppose the pole is 10m high. Suppose the Monkey climbs at 1m/s (and starts from zero altitude relative to the base of the pole). How long before the monkey reaches the top?
time = distance/speed
time = 10m/1m/s
time = 10s
Following the paradox, the monkey climbs 5m of the pole in 5 seconds, leaving 5m more to climb in 5 further seconds. It then climbs 2.5m in 2.5s, then 1.25m in 1.25s etc.
In other words, no matter how little distance there is to move, there's always enough time left to do it in.
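If you want to see the series converge numerically, a few lines of Python (halving the remaining 10 m at 1 m/s, using the same numbers as above) will do it:

```python
# Summing the halving steps: each step covers half the remaining distance,
# and the step times converge to the plain distance/speed answer.
remaining, total_time = 10.0, 0.0
for _ in range(60):            # 60 halvings is already far below float precision
    step = remaining / 2.0
    total_time += step / 1.0   # time for this step at 1 m/s
    remaining -= step
print(total_time)              # ~10.0 s: 5 + 2.5 + 1.25 + ... converges to 10
```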
Cheers,
Ian
Re:What would Roger Penrose say! (Score:2)
> i.e. There comes a point when you can't subdivide them.
Which brings us back to Penrose.
As far as I can tell, The Emperor's New Mind rants and raves about how "hard AI" is impossible, and then devolves into some mumbo-jumbo about microtubules and quantum effects.
Fine, Penrose. The brain can be modelled as a Turing machine with a random number generator as one of its inputs.
And even worse for Penrose, what if I take his wacky quantum microtubules at face value? So the brain's a quantum computer (which, admittedly, didn't exist, as far as I was aware, when the book was written). It's massively parallel and sometimes gets the wrong answer.
I'll grant that a quantum computer running AI software is no longer a Turing machine per se, but I fail to see how any of Penrose's arguments preclude meat-brained humans from building something out of non-meat that does the same thing.
Re:What would Roger Penrose say! (Score:1)
Asymptotes vs. the Turing Test (Score:3, Insightful)
hmmm. (Score:2)
No, the monkey will fall on Xeno's head and kill him. Now what will we do with all our Thetans? His courage was to change the world!
graspee
classical allusion: 1
classical allusion is not funny: -1
bizarro segue: 2
lame Xena reference: -1
total: 1
graspee
Beautiful... This is what I've been thinking about (Score:2, Insightful)
Well, biological creatures don't scale well at all, right? We have access to the code that Rod Brooks made, right? Well, using other technology, let's evolve the code. If for some reason that doesn't work, we have most (all?) of the human genome done. How about other DNA strings?
We either "evolve" the creature or we model it after the DNA it came from. Anyways, score 1 for robotics.
Why Human? (Score:5, Insightful)
Re:Why Human? (Score:1)
Re:Why Human? (Score:2)
You'll note that George Lucas failed to conceptualize anything closely resembling steps (at least that droids are shown traversing).
Re:Why Human? (Score:1)
One of the later "Foundation" books has nonhumanoid robots, too, and there is a brief discussion of what it is that makes a machine a robot.
Hmmm, probably the only facet of robots-in-society not explored by Asimov is the possibility that people never would really come to fear them. The two attitudes I've seen toward robots are basically (a) "so what?" and (b) "cool!"
Re:Why Human? (Score:2)
Thing is, we don't fear them bcos they're locked away. I think if an 8 foot humanoid robot came stomping down the road tomorrow, ppl *would* be scared. And then there's the whole slavery thing which Asimov was into (and which Pratchett also covered in Feet of Clay) - if they're sentient, can we force them to work? And there's a third attitude you've missed which came along in the 1970s and 1980s, which is "f***ing robots stole my job", and Asimov made this one of the underlying causes of the anti-robot movement.
Asimov certainly has covered a lot of this area. The Caliban/Inferno series covered a bit more by hypothesizing robots which *weren't* forced to work but could choose their actions. It's a shame all these books are quite bad fiction - damn good ideas, but bad novels. Ho hum.
Grab.
Re:Why Human? (Score:1)
Re:Why Human? (Score:3, Insightful)
That's a pretty damn big "pro". I don't care if the robot is a freakin genius...if it can't open a door or walk up stairs it's not going to be able to do much.
Re:Why Human? (Score:2)
But, sex will be one of the "killer apps" for robots. If you have any doubts, check out www.realdoll.com (NOT AT WORK - those robots look very very much like naked people). Even Asimov wrote in his books about people using robots for sex.
Re:Why Human? (Score:2)
They should go and see what Sybian has done as far as objectifying men. This sex robot doesn't even have a face.
Plenty of uses for small (mostly) brainless robots (Score:1)
Zorg: Take this glass for instance. Sterile. Pristine. Boring. But if it's broken...
[glass breaks and many small robots come zooming out to clean up the mess]
Now look at that, a veritable ballet ensues, so full of form and color!
AI Hopes Killed by Recursion Issues (Score:2, Interesting)
Biological neurons have been shown in the laboratory to grow new connections based on information learned. In a robot, what possible mechanism could guide such growth? Programming is the only answer, but keep in mind that "programming" is just shorthand for "the intelligence of the programmer". In other words, the AI itself isn't self-contained, as it were.
There is no other way for "mental" activity to be guided, thus AI will always be as unattainable as the Philosopher's Stone.
Don't forget about the random issue (Score:1)
Possibly this point is moot if pseudo-random based on some external element that is more or less random in nature is an acceptable alternative to internally generating a random number.
I choose 156 -- why? Dunno. It sounded good at the time
Re:Don't forget about the random issue (Score:1)
Re:Don't forget about the random issue (Score:2)
If a computer cycled through numbers and chose one of them the next time disc I/O was requested, this would be as random as anything you like. Sure, it can be repeated, but so can anything that goes on in my head.
What I'm saying is, just because you don't understand why you choose a seemingly random number doesn't mean it's actually random. You know all those tricks where people are made to pick a random number (David Blaine style) and it's known to other people -- well, they would have thought their choice was random.
Re:Don't forget about the random issue (Score:2)
The problem with this statement is there is more than one definition of random. Human-generated numbers would be non-random in that they are compressible: if you ran a perfect WinZip on a 1,000-number sequence generated by a human, it would find repeating patterns and uneven distributions, and it would reduce the data size by a statistically significant amount, on average. But because humans receive a rich input from the world around them, including indirect inputs from truly random sources such as cosmic rays, their numbers are completely unpredictable, on average.
Most computers are non-random in that they have an invariant algorithm they use to generate their random data. Their data is irreducible and statistically random, yet predictable. If you made a computer like a human, giving it input that comes from truly random sources, then it would give you truly random numbers. And of course, people have done this, but it is simpler just to use a truly random source as directly as possible.
Machines, random numbers (Score:2)
There's no machine I've ever heard of nor seen that could generate a truly random number
It's true that no computational algorithm can generate truly random numbers, without input from some random physical process. The real test would be whether you could look at the history of numbers generated and predict the next number. This would mean inferring the state bits of the algorithm and deducing its inputs, if any. Cryptographic hashes are algorithms specifically designed to make that difficult.
In physics, you don't get real randomness without quantum effects, but statistical processes can give you highly unpredictable numbers, unless you're prepared to do faster-than-real-time molecular dynamics on 10^23 particles.
Here's a random bit generator suitable for use with a crypto hash algorithm to make good random bits: http://willware.net:8080/hw-rng.html [willware.net]
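The "hash the raw samples" idea is simple enough to sketch in a few lines of Python; os.urandom stands in here for a sampled physical noise source (the generator at the link uses dedicated hardware), and SHA-256 does the whitening:

```python
# A bare-bones illustration of whitening a noisy source with a crypto hash.
import hashlib, os

def random_bytes(n):
    out = b""
    counter = 0
    while len(out) < n:
        raw = os.urandom(64)                     # stand-in for raw noise samples
        out += hashlib.sha256(raw + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

print(random_bytes(16).hex())
```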
Crawl before we walk... (Score:2)
I'll bet it's possible to create a cybernetic "animal" that functions on 95% instinct and 5% learning. The recursion problem could therefore be contained and studied.
Such a thing won't be "HAL" or "C3PO" by any stretch of the imagination, but it'll be a start.
Re:AI Hopes Killed by Recursion Issues (Score:5, Interesting)
Crikey, you figured that out after two semesters. I guess I wasted 4 years of my life doing a degree in it all then... I must never have cottoned on to how well expert systems such as Mycin [surrey.ac.uk] and Dendral [mit.edu] actually perform.
You think programming is just the "intelligence of the programmer"? Guess again -- many people have AI systems running which program themselves, coming out with emergent behaviour which the programmer never expected.
Do you really think that a person can simplify circuit boards to their simplest form by themselves? I thought not. I know that Julian Miller [bham.ac.uk] can't, but that using his Cartesian Genetic Programming [bham.ac.uk] he's managed to write programs that do just that. Thus proving that a computer program can ultimately be more than the sum of its external inputs.
Re:AI Hopes Killed by Recursion Issues (Score:2)
The issue is "simply" to expand the domain in which an AI can work to beyond known-I/O systems. When someone comes up with this, it gets more interesting; the *same* controller can fly a 747, run a traffic light system *and* route your PCBs, it just needs to spend some time thinking about the problem to work out the I/O to the decision-making process. Which is exactly what a human does - we work out what we need to keep an eye on when we're doing something (for flying a plane, maybe airspeed, altitude and angle of bank) and work out how to get the system to behave whilst controlling that I/O set. And when something unexpected happens (eg. an engine fails), then the robot needs to work out that suddenly its expected I/O set isn't having the right effect, so it needs to expand its scope to try other things it had previously disregarded or taken as basic assumptions (eg. there are 4 working engines).
And right there you start getting onto some interesting philosophical problems, when the AI is behaving in the same way as a human...
Grab.
Re:AI Hopes Killed by Recursion Issues (Score:1)
People seem to think that there is something "magical" about the human brain but this need not be so.
Critics of AI point to the most complex program and say, "see, it's mechanical -- given the same input it produces the same output." The problem with this argument is that we cannot make the same test on a human brain. I'd bet that if we could save the state of a human brain, run a series of tests, then reload the old state and run the tests again, we would begin to see the mechanism underlying our program.
Re:AI Hopes Killed by Recursion Issues (Score:2)
Self-reprogramming FPGAs perhaps? Dedicated genetic algorithm circuits which evaluate the behavior of the rest of a chip and reprogram it? Why do you think this is so impossible? We may end up actually using biological processes for this "growth" anyway if/when we arrive at biological computing (DNA/molecular computing) etc. I fail to see what is so darn impossible about the process. It took evolution billions of years to produce us through random change... knowing this, I think we can definitely speed that process up a bit to create AI.
Re:AI Hopes Killed by Recursion Issues (Score:2)
There are people who use this as an argument to prove that intelligent biological life must have been designed. So all we need for working AI is to play god.
Alternatively, we just accept that the programmers' guiding is a more effective equivalent of the natural selection that led to biological life, and that the AI will be just as self-contained as biological life.
After all, your brain wouldn't exist without your parents, and wouldn't work the same way without years of training. That doesn't make human intelligence unattainable.
Re:AI Hopes Killed by Recursion Issues (Score:5, Interesting)
Problems:
O(n^2)-structure
Learning (Growing)
Current learning algorithms include (among others):
Various backpropagation algorithms, AFAIK not observed in biological systems. A fairly mathematical approach.
Self Organising Maps (SOM), especially Kohonen networks: a similar structure has been observed in the visual cortex (a toy sketch of the update rule appears further down).
Both algorithms lack a temporal component, although biological neurons rely heavily on temporal information; IIRC there are some neural networks out there that employ a temporal encoding.
Of course, all existing networks rely heavily on the knowledge of the programmer, who tailors the system to the problems (and partly the other way around). Partly this is due to the prohibitively expensive cost of large neural networks, and partly nature does the same.
Humans are pre-wired; so may AIs be.
Furthermore, it is quite interesting that an "AI" programmed to learn to articulate words made errors similar to those of a baby learning to speak.
Have a look at Genghis; AFAIK the only programmed knowledge is: "contact with ground -> bad", "moving forward -> good", and how to learn.
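As an aside on the SOM/Kohonen item above, a toy version of the update rule looks like this (the map size, learning rate and 1-D neighbourhood are arbitrary choices of mine, not from any particular paper):

```python
# Kohonen-style update: the winning unit and its neighbours are pulled toward
# each input sample.
import random

UNITS, DIM, RATE, RADIUS = 10, 2, 0.1, 1
weights = [[random.random() for _ in range(DIM)] for _ in range(UNITS)]

def train(sample):
    # winner = unit whose weight vector is closest to the input
    winner = min(range(UNITS),
                 key=lambda i: sum((weights[i][d] - sample[d]) ** 2 for d in range(DIM)))
    for i in range(max(0, winner - RADIUS), min(UNITS, winner + RADIUS + 1)):
        for d in range(DIM):   # pull the winner and its neighbours toward the sample
            weights[i][d] += RATE * (sample[d] - weights[i][d])

for _ in range(1000):
    train([random.random(), random.random()])
print(weights[0])
```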
> In other words, the AI itself isn't self-contained, as it were.
This somehow reminds me of an AI koan:
Help me out here. (Score:3, Interesting)
"So that the room will be empty."
At that moment, Sussman was enlightened.
I may seem a bit foolish here for asking, but what does this mean? I don't understand. Is it that Sussman learned to start with all 0s instead of random inputs? Or that cutting out all preconceptions is only counterproductive?
Re:Help me out here. (Score:3, Informative)
That is, it is impossible to free a system of preconceptions. By making parameters random rather than hand-picked, I am simply trading one set of preconceptions for another.
Of course, if it is a true koan, it will probably evoke as many different thought-paths as it has readers. Hope the above helped, though.
Wondering about the scope (Score:3, Insightful)
The human element can't be ignored in favor of fully robotic solutions. People enjoy feeling involved in what it is they're doing. Personally I'm all for having an entire race of robot slaves that do all the work for everyone, leaving people free to create Art, Science, and Music (and giving *me* time to finish Final Fantasy 10).. but I don't see it happening any time soon.
Flying cars would rock. Talking cars that remember your favorite radio stations, seat settings, A/C settings, and possibly directions to drive to your parent's house are far more likely.
Re:Wondering about the scope (Score:1)
The only way around that would be to automate the guidance, and the first thing people would do is hack the systems so THE MAN wouldn't be able to tell them where to go.
The rest of that sentence, remembering seat/AC settings and so on, is already here.
Sirius Cybernetics Corporation (Score:2, Funny)
Your plastic pal who's fun to be with!
sometimes words just aren't enough (Score:2, Insightful)
As for the 'non-scaling' criticism: to quote Dogbert, 'Pah!' They do what they're supposed to do. I never criticised my Spectrum because it didn't have dolby sound; I wouldn't criticise my roaches because they don't write operas.
Re:sometimes words just aren't enough (Score:2)
It is not a criticism of the robots in themselves, but of the methodology. Humans become capable of manipulating new ideas when they develop a symbology for modeling those ideas mentally. The various finite state automata that Rod Brooks has developed are useful in themselves, but unless the reviewer failed to mention it, they do not portend a new way of designing complex life-like systems. This is not because there is no merit to his ideas, but because his ideas are not new; FSAs are widely recognized as an excellent way of implementing real-time, limited-adaptability behavior. I used multiple software-implemented FSAs to control my easter-egg-hunting robot back in college years ago, and I definitely wasn't breaking any ground. He has simply applied them (very!) well to the tasks for which they are best suited: simple machines with limited behaviors.
If he had provided a set of equations or even just a pseudo-algorithm for breaking down complex, adaptive behaviors into multiple interlinked AFSAs, he would have significantly advanced AI, but I saw no such evidence in this book review.
nun (Score:2)
Slashdot and Pr0n in one easy to swallow pill.
I drove to work on autopilot... (Score:4, Insightful)
Much of what you do each and every day occurs in spite of the ability I just asked of you. Your brain is not responsible for thinking about how to walk (at least not after you learn how). Your peripheral nervous system handles such actions.
When humans create a robot in the fashion of Rod Brooks, they are training a system analogous to our own peripheral nervous system. Why force the machine to learn to walk when we can tell it how to walk from our own experience (knowledge of physics, etc.)?
The exact implementation Brooks uses may not scale, but analogous programming options exist that could scale, and IMHO, approaches addressing immediate actions/reactions should be built into robots as described.
From the interview it seems Brooks admits the need for serious processing power to reach the "next level", but shrewdly points to the fact that spending all of your time thinking and not doing is not a good way to get anything done.
If you can't walk and chew gum at the same time...
Re:I drove to work on autopilot... (Score:2)
OK, Brooks' robots are only hunting light. But if he plugs in more processing power to give them other inputs to the decision-making process (e.g. avoid water, seek other robots) then it starts getting pretty complex behaviour. I mean, the actions of a human when we're dying for a piss are pretty damn predictable! Sure it doesn't scale at 50-odd neurons, but up it to 500-odd neurons and it can start doing some interesting stuff.
Grab.
Why Humanoid Robots? (Score:2, Insightful)
Robots ! (Score:1, Informative)
Robots have taken countless assembly/factory jobs
Robots are supposed to kill us all anyway
robots? nay we are the borg. (Score:3, Interesting)
I believe the robots are going to be us; aside from advanced machinery in manufacturing, the "happening" thing will be the integration and interfacing of electronics and biocircuitry with ourselves. You will think, and your interface will retrieve data from storage attached to you.
Electronics can monitor your bloodstream for diseases, lack of resources, and the like, and synthesize whatever is required. Good for anyone with a genetic defect or an illness. Good for your general health & wellbeing.
The advantages are so enormous these technologies will be used in that manner. You will probably want to have it. But you'll also realize that at that moment you are not only vulnerable to hackers who try to access your biosystems; those who create the hardware and software within you are also potentially able to upgrade software and firmware that has essentially become a part of your being.
So who will control that, and thus us, so intimately? Open Source at least ensures that we will have insight into, if not control over, who we are developing into...
Re:robots? nay we are the borg. (Score:2)
The human body is a biological system that responds to the various elements around it; it is an open system, affected by its surroundings, yet still separate from them. Unlike a computer, you can't simply add more memory or storage. The human state doesn't allow for it.
There are instances of people with photographic memory going insane, because they recall every instance of every event perfectly. Their brains are not able to process such a large amount of data, and thus they lose their sanity. People can experience trauma from excessive sound, light, and various other effects. It's called sensory overload.
While having a device attached to you that would allow you to have an extended memory or such would be awesome in theory, think of the implications - it's probably not feasible. Were it to happen, the human body wouldn't be able to power the devices efficiently: our bodies produce only enough energy each sleep cycle for the next day, and not enough for electronics. The addition of those electronics would strain and tire the body, to the point where people would start sleeping more and more each night, thus canceling any perceived benefit of the biotech enhancements. That is, of course, unless an alternative power source were created that would work in harmony with the human body. (Deus Ex comes to mind as an example of this scenario, actually.)
why is it... (Score:4, Interesting)
Intelligence:
The capacity to acquire and apply knowledge.
The faculty of thought and reason.
According to the above, AFSMs are the exact principle behind intelligence. Think about how any analysis of the world happens. We don't consider the entire world when we try to catch a ball; we consider the position of the ball and where it will be. We don't take into account the position of a bird in relation to the ball, or something far away; all that matters is the ball.
Slightly more complex would be hit detection: is there anything close to me? Yes or no... it's that easy. You'd have a range where it's OK for an object to be, a range where we should slow down, and a range where we fire thrusters to stop.
Simple actions put together equal complex life form.
Re:why is it... (Score:2)
These models don't have an internal representation of the world, and for good reason: the world itself is the best representation you could ever want. But it isn't sufficient for conscious thought, because that depends on measuring the world as you imagine it, not as you can perceive it.
It's not so much that the problems associated with consciousness are harder than the problems involving subconscious behavior; the latter turn out to be essentially impossible to solve using general intelligence (either by AI researchers or by humans with specific brain damage). But the problems associated with consciousness are almost certainly equally difficult to solve with AFSMs. It's certainly possible, but it'd be like trying to write software by arranging electrons.
Of course, the interesting stuff happens when both types of systems work together. Read Phantoms in the Brain by VS Ramachandran for a lot of examples, or consider that, when you picture a scene you know well, the visual areas of your brain are actually affected, and your conscious thought can alter your perception of space (like looking at an MC Escher picture).
Consider the non-AI case of graphics. Hardware is great for digital camera processing, and you wouldn't want to write any of that in software. Software is great for photo manipulation, and you wouldn't want to write it in hardware. And there are a lot of really interesting things you can't do with either of them alone.
What are you supposed to do.... (Score:2, Insightful)
What are you supposed to do if you ARE a manically depressed robot?
Brooks... (Score:2)
Sure, life is unfair. Wah wah wah. I just always go nuts when I hear anything by this guy. "One day we'll sell millions of tiny robots in a jar, and they'll clean your TV screen." "Robots are going to change the world." I don't see it, Rod, much as I'd genuinely love to. We need to stay grounded at least a little bit.
Thanks for putting up with my whining. ;) Let the flaming begin.
Re:Brooks... (Score:2)
Re:Brooks... (Score:2)
With no world model at all, you're limited to insect-level behavior. This works for insects because they're small and light. If a feeler hits something, that's OK. Larger creatures need some minimal prediction of the future just to put the feet in reasonable places and not bump into obstacles. Once a creature gets fast enough and large enough that inertia matters, it needs a control system with some predictive power.
What's needed is the "lizard brain", or limbic system, which does that job for lizards, birds and mammals. Instead of trying to crack that problem, Brooks tried to go all the way to human-level AI in one step, with his Cog project. [mit.edu] He didn't claim to know how to solve the problem; he just planned to throw about 30 MIT PhD theses at it and hope. That didn't work.
I once asked him why he didn't try for a robot mouse, which seemed a reachable goal. He said "Because I don't want to go down in history as the person who built the world's greatest robot mouse". That's where he went wrong. This problem is too big to solve in one big step.
I think we'll see a solution to lizard brain AI in the next few years, but it will come from the video game community, not academia.
Re:Brooks... (Score:2)
The Incremental Approach Has Failed (Score:2)
The incremental approach is precisely why we don't yet have a HAL-like intelligent machine. That's the approach that's been used up to now by the GOFAI community and it has failed miserably. If the goal of an AI researcher is to understand human cognition, the problem is indeed too big. The interconnectedness of human cognition is so astronomically complex as to be intractable to formal solutions. This problem is too big for any approach, incremental or otherwise. Therefore the goal of the sensible AI researcher is not to develop a theory of cognition, but to discover the fundamental principles that govern the emergence of intelligence. Let's get the damn thing to learn first. We can worry about what it's thinking later. We need an overarching theory of the brain. We don't need limited, isolated bits of cognition.
Re:The Incremental Approach Has Failed (Score:2)
Enthusiasm for that approach has waned somewhat. Remember "connectionism"? Simply throwing some hill-climbing algorithm at very hard problems doesn't work very well, as the neural net and genetic algorithm people have discovered. The problem isn't lack of CPU time, either; it's not like there are algorithms that are really good, but slow. The real problem with hill-climbing is that much of the answer is encoded into the evaluation function. Where the evaluation function is ill-defined or noisy, hill-climbing gets lost.
It's reasonably clear now that "learning" isn't merely rule acquisition (see the Cyc debacle) or hill-climbing. We need different primitives.
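For what it's worth, the point about the evaluation function is easy to make concrete: the climber below is trivial, and all of the real work is in the scoring function you hand it. Make that score noisy or ill-defined and the loop just wanders (a generic sketch, not anyone's production code):

```python
# Greedy hill climbing: keep any candidate that scores better than the current best.
import random

def hill_climb(evaluate, start, steps=1000):
    best = start
    for _ in range(steps):
        candidate = [x + random.uniform(-0.1, 0.1) for x in best]
        if evaluate(candidate) > evaluate(best):
            best = candidate
    return best

# A clean evaluation function makes the problem easy (optimum near x = 3)...
print(hill_climb(lambda p: -(p[0] - 3) ** 2, [0.0]))
# ...adding random.gauss(0, 10) to the score leaves the same loop lost.
```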
Re:The Incremental Approach Has Failed (Score:2)
The problem with connectionism is that it came from the same GOFAI crowd that gave us symbol manipulation and knowledge representation. Those guys made it a point to ignore every significant advance that happened in neurobiology and psychology over the last 100 years. ANNs are a joke. They have as much to do with animal intelligence as an alpha-beta tree-searching algorithm. Temporal, spiking neural networks are where it's at in the new AI, AKA computational neuroscience. Everything else is a joke. Like I said, we need new blood in AI. The old school has got to go.
Should robots control things like lawn mowing? (Score:3, Insightful)
Whether or not the book actually discusses that, it's a point that kind of disturbs me. Honestly, vacuuming floors and mowing the lawn are not that hard. Having to look after yourself also gives you a sense of responsibility, IMHO. I'm not sure I'd want a robot doing these things for me.
While tools have become more and more comprehensive in helping humans solve tasks (and humans have come to depend more on those tools), humans are still usually the ones directly in control. You push or steer the lawnmower, you move the vacuum where you want to clean, etc. If I had a robot do these things, all of a sudden it's the robot deciding when and how these things are done, and not me. On the other hand, there are also people who may not have the time or ability to take care of chores like these themselves, and having a robot do them might mean the difference between still being able to live at home, and having to live in a nursing home.
Re:Should robots control things like lawn mowing? (Score:2)
> the floors and mowing the lawn by myself?
If taken literally, the wording of the question means "Why aren't I being helped when I do these chores?" The answer: You already are. You're not chopping the lawn with shears, are you? You're not using a hand-crank to operate your self-propelled vacuum, are you?
> Whether or not the book actually discusses that, it's a point that kind of disturbs me. Honestly, vacuuming floors and mowing the lawn are not that hard. Having to look after yourself also gives you a sense of responsibility, IMHO. I'm not sure I'd want a robot doing these things for me.
> While tools have become more and more comprehensive in helping humans solve tasks (and humans have come to depend more on those tools), humans are still usually the ones directly in control. You push or steer the lawnmower, you move the vacuum where you want to clean, etc. If I had a robot do these things, all of a sudden it's the robot deciding when and how these things are done, and not me. On the other hand, there are also people who may not have the time or ability to take care of chores like these themselves, and having a robot do them might mean the difference between still being able to live at home, and having to live in a nursing home.
I see two possible outcomes from sentient robots further easing our workload the same way conventional machinery does today. One, we can devote more of our time to worthwhile activities, such as intellectual pursuits, helping others, getting exercise through sports or nature, etc. The other is where you sit on the sofa and watch cable TV until your brain dribbles out your ears. Might as well do something else; you just lost your job to a machine, right?
Hmm, I just realized I'm wasting my free time right now, and I owe this opportunity to technology. Well, Slashdot reader, how are you spending your life with the free time conventional machinery has already given you? Is there life outside of Slashdot? (It's too late for me, save yourself!)
Notes from his talk at Duke (Score:2, Interesting)
I listened to Brooks present the semi-academic version of his talk at Duke. The really fascinating thing about this robot/experiment is that making the robot react to simple cues from the human makes the robot act much more intelligent than it actually is. It may be easier to make a robot that behaves intelligently around humans than it is to make one that intelligently explores mars.
By giving the robot the ability to recognize eyes and where the human is looking, it can pick up cues as to what aspects of the environment are important. By making it maintain a proper conversational distance from the human, it prevents collisions and makes talking to it much more comfortable.
Because the robot responds to its environment, the environment shapes the robot's behavior. If that environment is alive and intelligent, the robot's behavior becomes more intelligent than it would normally be. We give off hundreds of little cues that allow us to respond intelligently to each other, and Brooks' work has opened the door to letting robots bootstrap themselves to a higher level of interaction.
Learning Lawnmowers, Robotman! (Score:4, Interesting)
Seriously! To properly want something, you need a means to know that that desire is or is not satisfied, and a means to move closer to achieving your desire - just like Genghis' leg muscles.
His mower robot needs a laser scanner to light up stalks that stick up too high, a sensor to detect stalks being lit up within maybe 10 feet, a desire to go to spots where that light is seen, and a desire to wander and seek out lit spots if it doesn't see any nearby.
A bit more is needed to handle edge conditions (literally the edges of the lawn and objects in it). It needs the ability to learn where it can't go, and the ability to slowly forget that learning so if it makes a mistake about not being able to get somewhere it can eventually correct itself.
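Here's a toy simulation of that behaviour, just to show the pieces fit together; the "laser scanner" is simply "uncut cells within 10 squares", and blocked spots are slowly forgotten so a wrong guess about reachability eventually gets retried. Everything in it is invented for illustration:

```python
# Toy mower: head for the nearest lit (uncut) cell in range, wander otherwise,
# and keep a decaying memory of targets it couldn't reach.
import random

SIZE, RANGE = 20, 10
uncut = {(x, y) for x in range(SIZE) for y in range(SIZE)}
obstacles = {(5, 5), (5, 6), (6, 5)}
uncut -= obstacles
blocked = {}                                     # cell -> belief that it's unreachable
pos = (0, 0)

def step_toward(a, b):
    return tuple(p + (1 if q > p else -1 if q < p else 0) for p, q in zip(a, b))

for _ in range(5000):
    for c in list(blocked):                      # slowly forget blocked spots
        blocked[c] *= 0.995
        if blocked[c] < 0.1:
            del blocked[c]
    lit = [c for c in uncut
           if abs(c[0] - pos[0]) + abs(c[1] - pos[1]) <= RANGE
           and blocked.get(c, 0) < 1.0]
    if lit:
        target = min(lit, key=lambda c: abs(c[0] - pos[0]) + abs(c[1] - pos[1]))
        nxt = step_toward(pos, target)
        if nxt in obstacles:                     # can't get there right now: remember it
            blocked[target] = blocked.get(target, 0) + 1.0
        else:
            pos = nxt
    else:                                        # nothing lit nearby: wander
        nxt = (pos[0] + random.choice([-1, 0, 1]), pos[1] + random.choice([-1, 0, 1]))
        if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE and nxt not in obstacles:
            pos = nxt
    uncut.discard(pos)

print(len(uncut), "cells left uncut")
```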
Re:Learning Lawnmowers, Robotman! (Score:1)
Not PORTRAITS of humans... (Score:2, Interesting)
Take an assembly-line robot, for example. It so happens that a human configuration for an arm (a fairly mobile shoulder, a somewhat limited elbow, a fully functional wrist, and some sort of manipulator at the end) is very useful. With a system like that you can reach any part of a design. Could you add another joint and achieve more flexibility? Or perhaps give the elbow more degrees of freedom? Naturally, and people have in fact done these things. However, there are a number of good reasons to mimic human design.
First of all, we are innately familiar with the operation of an arm. We have no trouble visualizing just how an arm like our own would move around something - For those who are good with math, this can translate into an easy understanding of the math involved.
Second, lots more work has gone into human-similar models. This means you can draw upon the accumulated design experience of hundreds and thousands of other people even inside the field of robotics.
Finally: Adding more joints/making more capable joints costs more money. In most systems which need to be versatile, the human-mimic system is the most efficient from a cost:capability standpoint.
Robots are like humans where they need to be. When we can make them identical to humans, no doubt some will, while others will feel that that is some sort of travesty. We all know that the big application in robots is the self-mobile realdoll, though, and that's an attempt to make something as much like a person as possible.
You might as well argue that giving birth is creating a portrait, since there is such variation in humanity - And there is still MORE variation between robots.
The world is its own best model (Score:4, Informative)
I have the feeling that this notion works well for simple robots, including lower life forms such as insects. Like Genghis, they simply do "simple" stuff based on simple neural computers that hardly warrant the name. But where Brooks' work falls short, as you can see in the review, is where neurons are clumped into serious computers that do model the world. The worst offenders, of course, are humans. The problem is that we have no idea how to wire a robot to do that, and a lot of the behaviors we really want from robots rely on it.
AI still has a long, hard road ahead of it. But we will succeed, eventually, simply by virtue of reverse engineering if nothing else.
Robots and AI, very humbling (Score:2, Interesting)
It's pretty striking to me how different an engineer's life can be depending on his area of interest. There are some topics where we are essentially on the "right track." Some genius has made the initial breakthrough in thinking. Steady progress can be made by moderately intelligent people such as myself by following the premise to its logical conclusions. While I was studying robotics, the Web was really taking off. Ideas spread like wildfire and advances are still being made fairly rapidly.
Other areas of study stagnate for years with random dispersed periods of growth and euphoria followed by periods of disappointment and disillusionment. In AI/machine intelligence, we have had several small breakthroughs that allow us to progress a little before hitting the brick wall again. We're all waiting for someone to make the leap in thought that will allow us to progress.
My opinion now is that we have some fairly specialized approaches that work well in specific circumstances but we are all essentially still on the wrong track.
Rodney Brooks caused quite a bit of excitement in the early '90s with Genghis and some of his other robots, but it wasn't the breakthrough that we are all waiting for.
From what I understand, if you have read his papers and publications through the years then this book doesn't offer much new information. If you aren't familiar with his work and are interested in the subject then definitely read the book. Even if Brooks doesn't turn out to be the genius who makes the breakthrough, his work has definitely contributed to the field and brings us a little closer.
In the meantime I guess I'll just have to wait for the big breakthrough by building some more little robots to keep me busy. I've been thinking about a little robot with a single-board Linux computer for a controller and a WiFi adapter. That way I can sit at my desk or laptop, watch what is going on, tune code and develop behaviors from the comfort of my couch, instead of having to track the little bugger down and stick a serial cable in its ass to upload new programs and download data. I was also thinking that I could then give real-time performance feedback and let some genetic algorithms and/or neural networks tune the parameters. That should keep me preoccupied for a while, while the geniuses work on the really heady stuff.
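The tuning loop I have in mind would be something like this in miniature; the fitness function here is only a stand-in for real feedback from the robot over the wireless link:

```python
# A very plain genetic algorithm: keep the best parameter sets, mutate them,
# and repeat. The fitness function is a placeholder for measured robot performance.
import random

def fitness(params):
    return -sum((p - 0.7) ** 2 for p in params)   # stand-in for real feedback

population = [[random.random() for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # keep the best behaviours
    population = parents + [
        [p + random.gauss(0, 0.05) for p in random.choice(parents)]
        for _ in range(15)                         # mutated offspring
    ]
print(max(population, key=fitness))
```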
If you are one of those geniuses, quit screwing around reading
Re:Robots and AI, very humbling (Score:2)
Building physical machines seems to be just avoiding the main problem with robotics: a decent AI. If robotics isn't looking for a decent AI, then I don't see any problems; the mechanical and control issues aren't such difficult problems.
If AI can really be done on your single-board computer, then I figure it can be done by itself in a virtual world on a home PC. I don't see much of a difference. Plus there are certain advantages with virtual environments.
Once you've worked out most of the bugs, you can port it to the physical world.
Personally I don't find the roaches Brooks does very interesting. They're interesting from the control perspective. But not from an AI perspective.
Just the other day I was feeding two geckos (not pets - they just hang around the house). One ran out and grabbed the food. The other just wouldn't come out from its nook. I flicked a piece of food in, and it ate it. Once it finished, it was severely tempted to move out; it moved forward a bit. But then it was still too afraid/cautious to go out. I'm sure it knows there is a big creature out there. And I guess you can imagine what I'm talking about.
As far as I know, the more intelligent creatures know the difference between the feeder and the food - you hand out some food, and they bite the food and not your hand. If they are afraid of you, they try to get as close to the food as they can whilst staying out of your reach. There is quite a degree of intelligence there.
A decent AI has to simulate various futures and choose.
But a good independent AI will have feelings. Some brain-damaged humans with disconnected emotions find it hard to decide what to do, though they are still intelligent. When an AI starts trying to model itself, things could get interesting: in future A I will feel X, in future B I will feel Y, therefore I will do this.
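To caricature what I mean by simulating futures and choosing, something like this toy Python loop - the states, actions, forward model, and "feeling" score are all invented:

    def predict(state, action):
        # Crude forward model: what the world might look like after the action.
        hunger, danger = state["hunger"], state["danger"]
        if action == "grab_food":
            return {"hunger": max(0, hunger - 5), "danger": danger + 3}
        if action == "stay_hidden":
            return {"hunger": hunger + 1, "danger": max(0, danger - 2)}
        return dict(state)  # "wait": nothing changes

    def feeling(state):
        # Stand-in for "in future A I will feel X": less hunger and less danger feel better.
        return -(state["hunger"] + 2 * state["danger"])

    def choose(state, actions=("grab_food", "stay_hidden", "wait")):
        futures = {a: predict(state, a) for a in actions}
        for a, f in futures.items():
            print(f"if I {a}: predicted {f}, I'd feel {feeling(f)}")
        return max(futures, key=lambda a: feeling(futures[a]))

    # Roughly the gecko's dilemma: hungry, but there's a big creature out there.
    print("chosen action:", choose({"hunger": 6, "danger": 4}))

The interesting part isn't the loop, it's where a real creature's predict() and feeling() come from - which is exactly what nobody knows how to build yet.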
Cheerio,
Link.
Fast, Cheap, and Out of Control (Score:3, Informative)
A great movie! I was the web designer who made the official website [sonyclassics.com] for the movie (hey, be nice, it was done a LONG time ago) and so got to see the movie before it came out. I watched it 3 times, and made others come watch it. It's so very random and disconnected, and then you start to just see it all coming together.
Very good movie, and Rodney Brooks is fun to watch. I highly suggest you rent it... just be prepared to be barraged with non sequitur scene after non sequitur scene, with no plot, just four intermixed lives revealed.
My robot won't need a green card. (Score:2)
Becoming Conscious of Our Causes (Score:2)
| right track. In a sense, he makes it easier for his robots to catch up
| with humans by lowering the bar. On the back of the book, Brooks
| ladles out the schmaltz and proclaims, "We are machines, as are our
| spouses, our children and our dogs... I believe myself and my children
| all to be mere machines." That is, we're all just a slightly more
| involved collection of simple neurons that don't do much more than the
| balance mechanism of Genghis. You may think that you're deeply in love
| with the City of Florence, the ideal of democratic discourse, that
| raven-haired beauty three rows up, puppy dogs, or rainy nights cuddled
| under warm blankets, but according to the Brooks paradigm, you're just
| a bunch of AFSMs passing numbers back and forth.
in combating the concept of free will. The germs of all the relevant
arguments are to be found as early as Spinoza. All that he brought forward
in clear and simple language against the idea of freedom has since been
repeated times without number, but as a rule enveloped in the most
hair-splitting theoretical doctrines, so that it is difficult to recognize
the straightforward train of thought which is all that matters. Spinoza
writes in a letter of October or November, 1674:
I call a thing free which exists and acts from the pure necessity
of its nature, and I call that unfree, of which the being and
action are precisely and fixedly determined by something else.
Thus, for example, God, though necessary, is free because he
exists only through the necessity of his own nature. Similarly,
God cognizes himself and all else freely, because it follows
solely from the necessity of his nature that he cognizes all. You
see, therefore, that for me freedom consists not in free decision,
but in free necessity.
But let us come down to created things which are all
determined by external causes to exist and to act in a fixed and
definite manner. To perceive this more clearly, let us imagine
a perfectly simple case. A stone, for example, receives from an
external cause acting upon it a certain quantity of motion, by
reason of which it necessarily continues to move, after the
impact of the external cause has ceased. The continued motion
of the stone is due to compulsion, not to the necessity of its
own nature, because it requires to be defined by the thrust of
an external cause. What is true here for the stone is true also
for every other particular thing, however complicated and
many-sided it may be, namely, that everything is necessarily
determined by external causes to exist and to act in a fixed and
definite manner.
Now, please, suppose that this stone during its motion thinks and
knows that it is striving to the best of its ability to continue in
motion. This stone, which is conscious only of its striving and is
by no means indifferent, will believe that it is absolutely free, and
that it continues in motion for no other reason than its own will to
continue. But this is just the human freedom that everybody claims
to possess and which consists in nothing but this, that men are
conscious of their desires, but ignorant of the causes by which they
are determined. Thus the child believes that he desires milk of
his own free will, the angry boy regards his desire for vengeance
as free, and the coward his desire for flight. Again, the drunken
man believes that he says of his own free will what, sober
again, he would fain have left unsaid, and as this prejudice is
innate in all men, it is difficult to free oneself from it. For,
although experience teaches us often enough that man least of
all can temper his desires, and that, moved by conflicting passions,
he sees the better and pursues the worse, yet he considers
himself free because there are some things which he desires
less strongly, and some desires which he can easily inhibit
through the recollection of something else which it is often
possible to recall.
Because this view is so clearly and definitely expressed it is easy to
detect the fundamental error that it contains. The same necessity by which
a stone makes a definite movement as the result of an impact, is said to
compel a man to carry out an action when impelled thereto by any reason.
It is only because man is conscious of his action that he thinks himself
to be its originator. But in doing so he overlooks the fact that he is
driven by a cause which he cannot help obeying. The error in this train of
thought is soon discovered. Spinoza, and all who think like him, overlook
the fact that man not only is conscious of his action, but also may become
conscious of the causes which guide him. Nobody will deny that the child
is unfree when he desires milk, or the drunken man when he says things
which he later regrets. Neither knows anything of the causes, working in
the depths of their organisms, which exercise irresistible control over
them. But is it justifiable to lump together actions of this kind with
those in which a man is conscious not only of his actions but also of the
reasons which cause him to act? Are the actions of men really all of one
kind? Should the act of a soldier on the field of battle, of the
scientific researcher in his laboratory, of the statesman in the most
complicated diplomatic negotiations, be placed scientifically on the same
level with that of the child when it desires milk? It is no doubt true
that it is best to seek the solution of a problem where the conditions are
simplest. But inability to discriminate has before now caused endless
confusion. There is, after all, a profound difference between knowing why
I am acting and not knowing it. At first sight this seems a self-evident
truth. And yet the opponents of freedom never ask themselves whether a
motive of action which I recognize and see through, is to be regarded as
compulsory for me in the same sense as the organic process which causes
the child to cry for milk...
(Rudolf Steiner, The Philosophy of Freedom [elib.com], Chapter 1, 1895)
Materialism can never offer a satisfactory explanation of the world. For
every attempt at an explanation must begin with the formation of thoughts
about the phenomena of the world. Materialism thus begins with the thought
of matter or material processes. But, in doing so, it is already
confronted by two different sets of facts: the material world, and the
thoughts about it. The materialist seeks to make these latter intelligible
by regarding them as purely material processes. He believes that thinking
takes place in the brain, much in the same way that digestion takes place
in the animal organs. Just as he attributes mechanical and organic effects
to matter, so he credits matter in certain circumstances with the capacity
to think. He overlooks that, in doing so, he is merely shifting the
problem from one place to another. He ascribes the power of thinking to
matter instead of to himself. And thus he is back again at his starting
point. How does matter come to think about its own nature? Why is it not
simply satisfied with itself and content just to exist? The materialist
has turned his attention away from the definite subject, his own I, and
has arrived at an image of something quite vague and indefinite. Here the
old riddle meets him again. The materialistic conception cannot solve the
problem; it can only shift it from one place to another.
(Ibid, Chapter 2)
Re:Although Peter Weller is legendary... (Score:1, Offtopic)
Great leaps in technology are sometimes followed by falling on your face.