When Things Start to Think
author: Neil Gershenfeld
pages: 225
publisher: Owl Books (paperback)
rating: For Slashdotters: 5 to read, 9 to give your folks
reviewer: EnlightenmentFan
ISBN: 080505880X
summary: Seamless, foolproof mini-computers coming up.
One underlying theme dear to Gershenfeld's heart is the death of traditional academic distinctions between physics and engineering, or between academia and commerce. Applied research is real research.
Another major theme is that older technologies should be treated with respect as we seek to supplement or replace them. For example, a laptop's display is much harder to read in most light than the paper in a book.
The book starts by drawing a contrast between Digital Revolution and Digital Evolution. Digital Revolution is the already-tired metaphor for universal connectivity to infinite information and memory via personal computers, the Internet, etc. Digital Evolution describes a more democratic future, from Gershenfeld's point of view, when computers are so smart, cheap, and ubiquitous that they do many ordinary chores to help ordinary people. When things talk to things, human beings are set free to do work they find more appealing.
"What are things that think?" asks the first section of the book.
Gershenfeld's whizbang examples won't be big news to Slashdot readers. My favorite is the Personal Fabricator ("a printer that outputs working things instead of static objects"), whose relationship to a full machine shop is like that of the Personal Computer to the old-fashioned mainframe. Gershenfeld actually has one of these in his lab (it outputs plastic doohickeys); seeing it was one of the high points of my visit there.
"Why should things think?" asks the second section.
My favorite here is the Bill of Rights for machine users. (In true Baby-Boom style, it's a list of wants arbitrarily declared to be rights.) "You have the right to
Have information available when you want it, where you want it, and in the form you want it
Be protected from sending or receiving information that you don't want
Use technology without attending to its needs"
Under the heading "Bad Words," Gershenfeld offers a snide but useful summary of many high-tech pop-sci buzzwords, showing how they get misused by people who don't understand their real content or context.
"How will things that think be developed?"
By making them small and cheap. By getting industry to pay the bills for targeted, practical research, using the Media Lab model TTT ("Things That Think"). By reorganizing education on the model of the Media Lab, where students learn things as they need them for practical projects, not all at once in a huge, abstract lump.
The book concludes with directions to various websites, including the Physics and Media Group (One of their projects these days is "Intrabody Signaling.") Slashdotters might also be interested in Gershenfeld's textbooks The Nature of Mathematical Modeling and The Physics of Information Technology.
You can purchase When Things Start To Think from bn.com, and Amazon has the paperback discounted to $11.20. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Are we even remotely close? (Score:4, Insightful)
Re:Are we even remotely close? (Score:2, Interesting)
Re:Are we even remotely close? (Score:2)
Re:Are we even remotely close? (Score:2, Interesting)
We don't even understand the human brain (Score:2)
Our ability to mimic and/or produce human intelligence in machines is severely hampered by our poor understanding of how human intelligence works. The problem is so glaring that experts can barely even agree on what human intelligence is.
Until we understand how our own minds work, we're going to have a hard time getting machines to think as we do.
We're not close yet, but we're trying! (Score:2)
My dissertation is available online from my web site [greatmindsworking.com]. I hope to have the (open source) source code posted later today.
One of Todays Big Blunders (Score:5, Insightful)
How is a computer program ever going to develop abstract thinking and creativity? Is a computer program ever going to invent mathematics without previous knowledge of it, just because it finds it to be a useful utility for solving problems?
Heck, if someone could write a decent language translation program I might think there is hope.
Re:One of Todays Big Blunders (Score:2, Interesting)
Re:One of Todays Big Blunders (Score:3, Interesting)
It is naïve for you to suggest that this is understood with certainty. We are a long way from decoding the brain, and there are many theories implying that the brain is actually a magnifier for quantum processes. For example, it is believed that the microtubules in the neuron's cell structure may be chambers that can amplify quantum processes to the point that they impact macroscopic processes in the brain. If this turns out to be the case, then we may never be able to decode the brain. For the past century physics has hit a barrier as far as our being able to understand how and why things work at the quantum level. There could be an ocean of mechanics and means behind this quantum barrier, but we may never have the capability to see it.
Re:One of Todays Big Blunders (Score:2)
~Slashdot post in 1902
Never underestimate the power of an inquisitive human building on the knowledge of all humanity. Human society is the most complex machine in the universe, but I have no doubt in my mind that, with enough study, even a simple human brain is capable of reducing it to symbols.
Re:One of Todays Big Blunders (Score:3, Insightful)
A) Computers can never think like we do. Well, if not, why not? There's no reason why you couldn't simulate the actions of neurons with sufficient numbers of transistors. If computers can never think like we do, it's either because we're insufficiently intelligent to recreate the human brain (unsettling) or because, for intelligent thought, you need something like a soul (unsettling to the average Slashdot atheist).
B) Computers can think like we do. Isn't that unsettling enough as it is? Free will might as well not be real, since it can be simulated. So how do you know that you actually have it, and not a simulacrum?
Really, there's no way that this can work out comfortably.
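(For what it's worth, the mechanics of a single neuron are easy to simulate. Below is a minimal leaky integrate-and-fire sketch in Python; the constants are illustrative toy values, not biologically calibrated ones. The hard part is wiring up a hundred billion of them, not simulating one.)

# Toy leaky integrate-and-fire neuron. All constants are illustrative.
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    """Return spike times (ms) for a constant input current."""
    v, t, spikes = v_rest, 0.0, []
    for _ in range(int(100 / dt)):              # simulate 100 ms
        # membrane potential decays toward rest, driven by the input
        v += ((v_rest - v) + resistance * current) / tau * dt
        if v >= v_thresh:                       # threshold crossed: fire
            spikes.append(round(t, 1))
            v = v_reset                         # and reset
        t += dt
    return spikes

print(simulate_lif(2.0))    # stronger current -> more frequent spikes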
Re:One of Todays Big Blunders (Score:2)
Free will might as well not be real, since it can be simulated.
From Eliezer Yudkowsky's FAQ about the Meaning of Life [sysopmind.com] which is much too Singularity-optimistic and generally raving about AI, but still a good thing to read:
4.5 Do I have free will?
"Free will" is a cognitive element representing the basic game-theoretical unit of moral responsibility. It has nothing whatsoever to do with determinism or quantum randomness. Free will doesn't actually exist in reality, but only in the sense that flowers don't actually exist in reality.
I'll go with your point B) :-)
Re:One of Todays Big Blunders (Score:2)
That's not a given. We don't understand enough about how brains work to know that a whole bunch of transistors will be big and fast enough to simulate the brain. There are physical limits to consider.
If computers can never think like we do, it's...
Why is that so unsettling? Our minds evolved to solve problems such as finding food and shelter, and getting along with other humans. Artificially recreating the human brain has never been a criterion for survival. As it happens, evolution has provided us with a nifty system for generating new minds with natural materials such as food and water - you just have to tolerate some crying and spitting up for a few years.
Re:One of Todays Big Blunders (Score:2)
Uhmmm. Not necessarily. There's also no reason why you couldn't simulate the weather of the earth with a sufficient number of transistors. That doesn't mean that this "sufficient number of transistors" will be able to fit into the known universe. Until we learn a heck of a lot more about how the brain works, both at a low level and at a high level (how do you recognize a smell when you smell it again?), conjecture about the "computability" of the brain is just that, conjecture. This, of course, is discounting the non-deterministic (possibly quantum) nature of our brains, which may be impossible to duplicate with deterministic transistors. Penrose makes some interesting points in his book, "The Emperor's New Mind", which I don't completely agree with, but which seems more reasonable than some other books about the future of computing and AI.
The one thing we are sure of is that Kramnik doesn't process a chess board in the same way that Deep Fritz does. And we really don't have any idea what is happening inside of Kramnik's brain. Yet.
EnkiduEOT
Re:One of Todays Big Blunders (Score:5, Interesting)
I hope not.
Will computers ever out-think humans?
Almost certainly.
How soon?
That depends on your metrics. When you speak of abstract thought, you're automatically applying a set of logical "filters" that have to do with evaluating the intelligence of humans with whom you interact and "opponents" with whom you must contend. In many ways, many machines already out-think humans in creative ways, but they are savants for the most part, only capable of thinking in narrowly pre-determined areas. We are constrained this way too. We cannot think four-dimensionally, for example. But, we do not consider that to be a major limitation. Perhaps someone who could think four-dimensionally would think of a human mind as "unintelligent".
Bottom-line: machines keep getting smarter, but the problem of CONVINCING A HUMAN that you are smart means having some sort of survival and/or communication skills. Those problems are probably still 5-20 years off and involve massive learning simulations that will take years to evolve a suitable program. In the end, we'll probably be able to cut down on the time it took nature to create a human brain by a factor of several million, and improve on it substantially (removing a lot of the archaic reflexive responses, and replacing them with the ability to work in very large groups without breaking down, etc).
Re:One of Todays Big Blunders (Score:4, Insightful)
IME, many are not. This might lead one to the thought that maybe our machines are nearer to our intelligence level than we think.
Soko
Re:One of Todays Big Blunders (Score:2)
And to be even smarter, convincing a human that you're not as smart as you actually are (a much harder communication task, which many smart humans fail at).
Re:One of Todays Big Blunders (Score:2)
This sort of communication is fraught with pitfalls and traps and seeming illogic. It may not even be an interesting problem to solve, as much of the complexity involved has to do with human defence mechanisms, which will not be present in full in any AI we produce (unless we do so by copying the structure of a human brain, which seems to be a technology that is quite a ways off).
A machine should be able to, for example, explain a concept slowly and in ways that can be understood by the listener without feeling that their dominance is in question (thus resorting to sarcasm or being condescending) or that they need to respond to a challenge to their dignity (thus giving up or pushing the person to understand things they aren't ready for).
That covers much of the problem with teaching. Then the reverse has many of the same pitfalls. You have to be able to know when to accept incorrect information or incomplete responses or to give incorrect information.
I remember a time in high school when I realized that people who said "what's up" didn't want an answer, but just an acknowledgement. The problem? I could not bring myself to "violate" my own understanding of what it meant to communicate. I understood that saying "s'up" in response would be sufficient, and even appreciated, but I couldn't say it. It seemed alien and wrong. Therein lies the rub!
Re:One of Todays Big Blunders (Score:2)
I don't think it's worthwhile in all situations. However, it's useful:
Teaching environments are almost definitively ones which have a more intelligent/educated/experienced person and one who is less so (note the difference between teaching and education which is much more likely to be a shared discovery). If you're in a teaching role, you do need to balance your greater whatever (which is explicit in your role) in the subject with a bit of humility that that subject is not the entirety of human wisdom. But that's different from hiding your intelligence.
Re:One of Todays Big Blunders (Score:2)
Machines aren't getting Smarter - they're getting Faster.
Quantitative change does not imply Qualitative change.
Re:One of Todays Big Blunders (Score:1)
The difference between human intelligence and computer "intelligence" is much more subtle and obvious than what you're looking for. It has more to do with concepts, linguistics, how we define intelligence, and perhaps even consciousness. Then again, who says intelligence is not a fundamentally irreducible concept? I've never heard a satisfactory definition of "intelligence" (most failed attempts actually use "intelligence" in the definition, or simply state a tautology).
Machines are not getting smarter. While I was doing my graduate work in AI at UIUC, I slowly started to realize that AI is nothing but a farce, which is why I eventually switched my studies over to comp architecture. Sure, there are "good" algorithms written by intelligent people, but we've only shown (through Deep Blue and similar projects) that computers seem intelligent when we pair these algorithms with brute force methods, and come up with a satisfactory result. Is this intelligence?
There are many examples of complex processes performed in nature that seem to be the result of intelligence, but when they're dissected further, turn out to be simple tasks performed over and over again, perhaps millions of times, with an impressive outcome. Is this the way that the human brain works? Probably. But the difference between our brains and my computer is so severe that at the current rate of "progress," artificial intelligence is perhaps millions of years away, if it ever happens at all. There just isn't very much to work with, and we don't even know what we're looking for.
I once heard it said, "If the human brain were simple enough for us to understand, we would be so simple that we couldn't."
Re:One of Todays Big Blunders (Score:1)
No, the distinction between human intelligence and computer intelligence IS abstract thought. It separates self-reference from self-awareness, syntax from semantics and referencing from understanding. Without abstract thought, programs will only "compute" things. They will not think like we do. On what basis are you suggesting that computers keep getting smarter? I admit they keep getting faster, but they are only as "smart" as their algorithm. If you run a program on a faster computer, it comes up with the same stupid answer, only faster. Algorithms may be improving, but not at the rate the hardware is improving. Neural nets and fuzzy logic have been around for a long time.
Re:One of Todays Big Blunders (Score:4, Interesting)
Is a computer ever going to invent mathematics without previous knowledge of it just because it finds it to be a useful utility for solving problems?
No, we'll tell it about math. Note that I didn't think of math by myself, nor did you. It took humanity thousands of years to invent and perfect it, with millions of people using the state of the art of their time because that's what they were taught to do.
It's conceivable that an AI could figure out some things like this from scratch, but in practice we won't do that (since we can teach it math, or hard code it). It's enough if it can sometimes think of some new method to solve a problem to be considered as intelligent as us, in my opinion.
Your comment is like "how can a computer ever print a text? Is it going to invent writing, and an alphabet by itself?" :-). We're "allowed" to teach it the same things we teach our kids, and hardwire stuff that needs to be hardwired (like a lot of things are hardwired in our brain, vision, language structure, etc).
And as for language translation, in my personal opinion, you need general AI before you can have human-language understanding, and you need that for translation.
Re:One of Todays Big Blunders (Score:3, Informative)
One website for you: babelfish.altavista.com. While it might goof up occasionally, it generally translates well enough for me to get a good idea of the contents. Also, computers *have* been getting better at Turing tests (though only for limited-domain interactions). I see no reason that computers cannot recreate some "abstract" (or at least seemingly so) patterns. Hell, if a computer can play chess, that's abstract enough "thinking" for me.
Re:One of Todays Big Blunders (Score:1)
It is not really much better than using an English-to-Whatever dictionary to translate something. The program is computing the translation--it does not understand the contents of what it is translating. What is the formal distinction between something being conscious and not being conscious? If consciousness can be formalized (modeled with an algorithm), then this distinction must be formalizable. The Turing test is about as fuzzy a distinction as you can get. Computer chess algorithms are not even close to what is needed for human intelligence. As far as game algorithms go, with chess there is a fairly limited set of possible outcomes to traverse and there is no hidden information. This type of "thinking" is right up a computational system's alley.
Re:One of Todays Big Blunders (Score:4, Interesting)
Take a game of Go (aka, Baduk). You have a 19x19 grid. One player gets white stones, the other gets black. The players alternate playing stones on the intersections of the board (not in the boxes). This very, VERY simple setup leads to amazingly complex results such that no existing Go program can even come close to challenging a mid-level player much less a master.
The point I'm trying to make is that extremely simple beginnings can lead to extremely complex behavior. Just because we seem complex does not mean that we are more than just a lot of very simple bits working together, in other words. I'm with Kurzweil in the sense that the brain is nothing more than matter operating under physical constraints. Mimic the parts and understand the constraints and you have, for all intents and purposes, a brain. And by extension a thinking thing.
The question then becomes "have we captured the bits that matter?" ie, is there a soul?
I'm an atheist. I'm not the guy you want to answer this question. And I'll refrain from touching on Wolfram's A New Kind of Science at this point... =)
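(To put rough numbers on the Go point: each of the 361 intersections is empty, black, or white, so 3^361 bounds the number of board configurations. Not all of those are legal positions, but the legal count is of the same astronomical order. A few lines of Python, with commonly cited rough figures for comparison:)

import math

# 19x19 intersections, three states each, bounds Go's state space.
go_bound = 3 ** (19 * 19)
print(f"Go upper bound: ~10^{int(math.log10(go_bound))}")  # ~10^172
print("Chess state space: ~10^47 (commonly cited estimate)")
print("Atoms in the observable universe: ~10^80 (rough figure)")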
Re:One of Todays Big Blunders (Score:3, Insightful)
How do people do it? Until we can answer that question, you certainly can't rule out that computers can achieve the same.
Is a computer program ever going to invent mathematics without previous knowledge of it just because it finds it to be a useful utility for solving problems?
Yes. Herb Simon (a Nobel Prize/Turing Award-winning professor) always gave the example of BACON, a program that discovered Kepler's 3rd Law of Planetary Motion. Not bad. He always believed computers can and will think [omnimag.com], and I agree with him.
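(The flavor of BACON's rediscovery is easy to sketch: given the planets' periods and distances, search for a simple power law relating them. A toy Python version follows; the data are standard textbook values, but the brute-force search is just an illustration of the idea, not BACON's actual algorithm.)

# Toy rediscovery of Kepler's third law: find the exponent k that makes
# T / a^k constant across the planets. (Illustration only, not BACON.)
data = {                      # period T in years, semi-major axis a in AU
    "Mercury": (0.241, 0.387), "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000), "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203), "Saturn":  (29.46, 9.537),
}

best_k, best_spread = None, float("inf")
for tenths in range(10, 31):                  # try k = 1.0 .. 3.0
    k = tenths / 10
    ratios = [T / a ** k for T, a in data.values()]
    spread = max(ratios) - min(ratios)        # a "law" makes this tiny
    if spread < best_spread:
        best_k, best_spread = k, spread

print(f"T / a^{best_k} is nearly constant")   # k = 1.5, i.e. T^2 ~ a^3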
Re:One of Todays Big Blunders (Score:2)
Which is at this point an irrational, unfounded belief. Even a basic understanding of AI can show how incredibly difficult it would be: computers can perform rote mathematical computations billions of times per second, far greater than any human, but have an incredibly difficult time with language and abstract thought, which young children can learn easily.
Re:One of Todays Big Blunders (Score:2)
I have several objections to this, but first and easiest is; why would anyone want a machine to be as stupid as a human? When we're talking about thinking machines, I daresay we're talking about machines that work far better than we do.
Re:One of Todays Big Blunders (Score:2)
For the life of me, I can't figure out why everybody's so obsessed with the idea of human-like AI when we could be focussing on optimizing the behaviors that computers already excel at.
Re:One of Todays Big Blunders (Score:2)
The most fascinating and horrifying thing about computers designing computers is just how fast technology will evolve once that point is reached. Theoretically, software would have no bugs, and hardware tolerances would be incredible. We'll laugh at the slowness of Moore's law.
My question is: in a society where everything is designed and built by thinking machines with perfect memory and infinite endurance, what will we humans do? How will the economy work if nobody "works"? I guess we'll just be left to making art and writing fiction, as I doubt that even thinking machines would become fully proficient at that for quite some time.
Re:One of Todays Big Blunders (Score:2)
OMG (Score:1)
khl
So this is better? (Score:5, Insightful)
For me, I'd rather spend a little more time outside and with real people instead of wiring myself more than I already am.
Technology has its place... serving me, not usurping me.
Re:So this is better? (Score:4, Insightful)
If somebody enjoys Jerry Springer and the WWF, and they're perfectly happy to sit around eating junk food and getting fat, then who are you to stop them? They probably find it just as baffling that somebody would want to go walking through the woods and just look at plants.
It's difficult to see extra free time as a bad thing (unless you think about more abstract effects, like motivation and the value of unhappiness (necessity is the mother of invention, after all)). You use yours how you choose, as will I. Is it really better for a human to spend all of their time working, than to have a machine do it for them, so that human can at least "piss away" their time in a way that brings them pleasure?
It's tough to spend time outside, when you're stuck in a factory all day long.
Re:So this is better? (Score:1)
JER-RY! JER-RY! JER-RY! JER-RY!
FIGHT! FIGHT! FIGHT! FIGHT!
Re:So this is better? (Score:1)
I'm not asking anyone to do or see things my way. I've just had enough of technology. Been working in it for longer than I care to think about. Is the world a better place because of it? Maybe. Maybe not.
Keep your perspective is all I'm saying.
As far as questioning motives, I'm not questioning anyone's motives.
A Point or Two (Score:4, Interesting)
"older technologies should be treated with respect as we seek to supplement or replace them"
This is something that most launches of new and amazing gadgets fail to see. An ebook is not better if it cannot offer more than an ordinary book. An ordinary book is usually the best book there is.
In the why section: "Be protected from sending or receiving information that you don't want "
Like "bug reports" to M$ with so much irrelevant info in 'em that they aught to pay the poor sucker's [who send them in] internet bill.
In the last section it looks like he is trying to get more funding: "By getting industry to pay the bills for targeted, practical research, using the Media Lab model TTT"
That is so true... (Score:3, Interesting)
But that's only really useful for reference texts. For fiction, the space savings is the only real benefit, and it's overwhelmed by all of the other complications ebooks offer (like needing power to read, or having to deal with an interface to change pages).
I think the most successful eBook will be when they make a "real" book with pages out of electronic paper, and let books "flow" in and out of the eBook. Then you still have a paperback that doesn't require power to read, but you can carry hundreds or thousands of books with you in the space of one physical book.
Re:That is so true... (Score:2)
The Diamond Age (Score:4, Interesting)
This bears resemblance to "Molecular Compilers" as imagined by Neal Stephenson [well.com] in everyone's favourite nanotechnology novel, The Diamond Age [amazon.com]: a device where you simply insert the program describing the object you want, plus payment, and return in an hour or so to retrieve your newly formed item.
Gives a whole new meaning to Internet Shopping...
Re:The Diamond Age (Score:1)
Re:The Diamond Age (Score:1)
Stereolithography (Score:1)
There's also this [slashdot.org] if you fancy a model that melts...
I've worked with Gershenfeld (Score:4, Interesting)
to charm tech companies into donating to the Media Lab. He's been spouting this stuff for so long he's starting to believe it.
I also read several of his books: beware the typos and far-reaching statements. Although, "The Physics of Information Technology" is something I believe most [...] use any of the formulas in that book, look them up elsewhere... they're always slightly wrong.)
Re:I've worked with Gershenfeld (Score:3, Informative)
I found that there was a mix of pure BS and interesting if not necessarily useful work being done in the Physics and Media Group. Honestly, though some was BS, this was still better than most of what is done in the Media Lab, where most work is 90% BS. Go look through the current publications list here [mit.edu]. While not much of this is what I would consider "basic research", a lot of it is potentially interesting - physical one-way functions (have been discussed on
Re:I've worked with Gershenfeld (Score:2, Insightful)
"By reorganizing education on the model of the Media Lab, where students learn things as they need them for practical projects, not all at once in a huge, abstract lump."
What a joke! Either the Media Lab is getting a little nervous about Olin College, whose focus is exactly that which is described, or his definition of "practical projects" is a little different from mine.
A week? (Score:5, Funny)
10:27am up 46 days, 18:02, 19 users, load average: 0.69, 0.35, 0.23
I must be late.
-JPJ
Uptime (Score:4, Funny)
Maybe you need a different PC?
Re:Uptime on W2k (Score:3, Funny)
'uptime' is not recognized as an internal or external command,
operable program or batch file.
C:\>Windows has found unknown command and is executing command for it.
C:\>Don't try to save your work because I'm rebooting now.
C:\>Warning, could not upload pirated software registry to Microsoft
Re:Uptime on W2k (Score:1)
Here's a sample from a few of our servers:
Re:Uptime on W2k (Score:2)
Could be, but it's not unbelievable. W2K can be quite stable, as long as you load it with only a couple of stable applications and let it just sit there and run, like any server installation should. I've seen server installations that didn't do that, of course, but not everyone running Windows is stupid. ;)
At the same time, test it under conditions more common for a home user (or a server with a poor admin) with a dozen or two random applications being started and stopped fairly frequently, and it crashes just like all of its predecessors. That's why I've been very impressed with my Mac, I use it fairly heavily, dozens of odd programs, games and all sorts of other strange stuff... unstable alpha software all over it. Never seen it crash yet. Corrupted the file system once, but that still didn't crash it, kept right on running while I repaired it. A friend of mine managed to crash his, but he won't tell me how, just that it took a lot of work. :)
Mine hasn't crashed once... (Score:2)
But my uptime output isn't nearly so impressive, because I shut it down to save batteries sometimes.
My x86 box had an uptime of 28 days once in Linux... I rebooted to play an old game in Windows. Even that box never crashed except when either running Windows or when critical hardware failed... I agree, the poster needs a different PC, or maybe just a different OS.
Re:Speaking Of Uptime (Score:2)
Re:Speaking Of Uptime (Score:2)
It doesn't mean others can't stay online for as long as they want. I am pretty sure most unixes can stay up for as long as they want, or until hacked.
And Then... (Score:2, Funny)
When the smart machine logically concludes that the human infestation is harmful to the planet...
Good review , questionable future (Score:4, Interesting)
Of course, if one is talking about the workplace, then there's an entirely different issue: that of unemployment. (I'm not saying whether it's good or bad to introduce technology that can do another's job. I'm only saying it *is* an issue, especially if you're someone whose job is at risk.)
Re:Good review , questionable future (Score:3, Insightful)
Funny Anecdote about optimism... (Score:2)
My mother tells me a story about all of the wonderful optimistic products that she used to see right before movies. "The Chrysler Jet Car of the Future," or the "Push Button Kitchen."
The most outrageous claim was that with all of those labor-saving devices, people would have a work week of about 22 hours, leaving ample time for family - which never materialized. Matter of fact, we are more efficient than ever, and have no free time at all. No one just pulls a 40 anymore... unless their company is in financial trouble.
So my family made up this statement, that serves us well, and keeps us sane.
"Increased performance in anything creates even more increased expectation, complication, and increased harassment."
Speeding toward meaninglessness (Score:4, Insightful)
This is the same old nonsense that's been touted ever since the age of the washing machine. Considering the thousands of labor-saving devices we've acquired throughout the 20th century, by this logic we ought to be living lives of perfect leisure now. But this isn't what happens. In industrial societies, "labor-saving" devices don't. Work expands to fill the time available. When things think, I'm sure you and I will be freed from the tedious chores of cooking, driving, cleaning, and living. We can become machines ourselves, consumed with work until we burn out or die.
(More at Talbot's Netfuture [oreilly.com], if you're interested.)
Re:Speeding toward meaninglessness (Score:4, Insightful)
Those labor saving devices do save labor, and I'm thankful for them. Just start washing your family's clothes by hand for a while and you'll see what they mean by labor saving.
If I had to do all the chores that need to be done the way they were done in 1900, I'd sure as hell have a lot less leisure time. It ain't perfect leisure, but it's more leisure, and that's pretty good considering the alternatives.
Re:Speeding toward meaninglessness (Score:2)
Unions in the early part of the twentieth century had to agitate for a reduction in the 100-hour workweek. Think about that for a while.
Research (Score:5, Interesting)
How would he know? The MIT Media Lab, under Nicholas Negroponte, doesn't do anything that any academic or industry practitioner would consider to be "research". You see, in the words of Negroponte, they live in a world not of "atoms" but of "bits". In the world of atoms, researchers have to produce such things as peer-reviewed papers and working prototypes. In the world of "bits", researchers are measured by the number of column inches they get in Wired magazine. The MIT Media Lab churns out books and articles by the tonne, but most of it is little better than scifi, and very little of it is even original.
You would think that the hard-headed engineers at MIT would have seen that the Emperor has no clothes and would have cut off their funding by now, but mysteriously the Media Lab clings to life. They are an embarrassment to real futurists everywhere. Contrast them with the work done at IBM's labs, or BT's, or even Nokia's, where stuff is made that actually makes an impact on the real world a decade or two later.
Re:Research (Score:1, Interesting)
There's this funny misconception about the Media Lab because it has gotten tons of publicity in Wired-type futurist magazines, but if you actually stopped and tried to back up your statements, you'd find that there is an amazing amount of peer-reviewed research that comes out of most groups there. Just like any other good school. But I can see how most people would be blinded by their darling status.
Re:Research (Score:1)
It's actually a hypnotic psychic antenna, broadcasting cool waves and attracting the impressionable to write big checks.
Re:Research (Score:1)
Crashing computers (Score:1)
What ? That means that you actually try to run it for several days without reboots ? You don't compile and try a new kernel twice a day ? What the hell do you do on /. ?
Our Disposable Society (Score:4, Insightful)
By making them small and cheap.
The invisible addendum to this sentence is expendable. Small, cheap, and expendable - the mantra of the Japanese economy. Someday we'll be so deep in silicon poisoning [svtc.org] that it will be a worldwide crisis, and we'll have to have a resolution like the Kyoto Protocol so that our president can ignore it. But like our automobile industry fifty years ago, we should march relentlessly ahead with abandon until we reach a crisis point, rather than attempt to head it off now.
If machines could truly think they would be screaming at us: "Don't Throw Us Out!!!".
Re:Our Disposable Society (Score:2)
Doctor, stay away from the beach!
See it as an overview of the possibilities of AI (Score:4, Interesting)
My grandfather once gave me a copy of this book. Being interested in what I do (studying Artificial Intelligence), he read it too. He found that it clarified the possibilities of AI and IT in general a lot. Since he doesn't have the slightest experience with computers, that would generally suggest it's not so interesting for someone deeper into the subject.
But while it's true that the book doesn't get really technical and left me wondering about a lot of the details, the enthusiastic way it's written and the really original projects that are described make it a really nice read. It's really motivating, and can help with the known problem of having learned a programming language and not having the slightest clue what to program in it.
I think that if you see it not as a computer book but as reading material for a holiday, the book deserves more than a 5. Borrow it from someone and read it; it's not like it'll take a lot of time.
Smart OS (Score:1, Funny)
Hmmm... EnlightenmentFan? I wonder what unstable OS he's running.
Do we want machines to think? (Score:2, Funny)
my 2 cents (Score:1)
2. i can make your computer never, ever need rebooting if you promise to get chipped. you should be accounted for at all times.
Re:my 2 cents (Score:2)
Toaster philosophy:
'If God is infinite, and the Universe is also infinite... would you like some toast?'
Lister ended up taking the thing out with an axe, IIRC.
Cogito Ergo Sum (Score:2, Interesting)
This is going to be one of those situations where technology outpaces our ability to deal with the philosophical issues involved.
I know what you're thinking: "Enough with the philosophy bullshit."
And, of course, that response demonstrates exactly why we need to consider the "philosophy bullshit."
Medical advances have burst on the scene so suddenly that we've had to quickly come up with a new area called bio-ethics to deal with all the ramifications of our new abilities.
What happens when washing machines [lifeseller.com] become self-aware?
We need new definitions and new delimiters to help us cope with the new technology. Even the technologists have to create new semantics to help them create the new technologies.
Of course, we could just keep it all to ourselves and say, "To hell with anyone who can't understand our science."
But then we would just be a bunch of assholes who don't deserve the gift of intellect with which we've been endowed.
Things Don't Think - People Do (Score:3, Insightful)
Materialism can never offer a satisfactory explanation of the world. For every attempt at an explanation must begin with the formation of thoughts about the phenomena of the world.

Materialism thus begins with the thought of matter or material processes. But, in doing so, it is already confronted by two different sets of facts: the material world, and the thoughts about it.

The materialist seeks to make these latter intelligible by regarding them as purely material processes. He believes that thinking takes place in the brain, much in the same way that digestion takes place in the animal organs. Just as he attributes mechanical and organic effects to matter, so he credits matter in certain circumstances with the capacity to think.

He overlooks that, in doing so, he is merely shifting the problem from one place to another. He ascribes the power of thinking to matter instead of to himself.

And thus he is back again at his starting point. How does matter come to think about its own nature? Why is it not simply satisfied with itself and content just to exist?

The materialist has turned his attention away from the definite subject, his own I, and has arrived at an image of something quite vague and indefinite. Here the old riddle meets him again. The materialistic conception cannot solve the problem; it can only shift it from one place to another.

(Philosophy of Freedom, Chapter 2 [elib.com])
Re:Things Don't Think - People Do (Score:2)
It didn't until just now! THANKS ALOT.
Re:Things Don't Think - People Do (Score:2)
Scooping the loop snooper (Score:5, Funny)
No program can say what another will do.
Now, I won't just assert that, I'll prove it to you:
I will prove that although you might work til you drop,
you can't predict whether a program will stop.
Imagine we have a procedure called P
that will snoop in the source code of programs to see
there aren't infinite loops that go round and around;
and P prints the word "Fine!" if no looping is found.
You feed in your code, and the input it needs,
and then P takes them both and it studies and reads
and computes whether things will all end as they should
(as opposed to going loopy the way that they could).
Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.
Here's the trick I would use - and it's simple to do.
I'd define a procedure - we'll name the thing Q -
that would take any program and call P (of course!)
to tell if it looped, by reading the source;
And if so, Q would simply print "Loop!" and then stop;
but if no, Q would go right back to the top,
and start off again, looping endlessly back,
til the universe dies and is frozen and black.
And this program called Q wouldn't stay on the shelf;
I would run it, and (fiendishly) feed it itself.
What behaviour results when I do this with Q?
When it reads its own source, just what will it do?
If P warns of loops, Q will print "Loop!" and quit;
yet P is supposed to speak truly of it.
So if Q's going to quit, then P should say, "Fine!" -
which will make Q go back to its very first line!
No matter what P would have done, Q will scoop it:
Q uses P's output to make P look stupid.
If P gets things right then it lies in its tooth;
and if it speaks falsely, it's telling the truth!
I've created a paradox, neat as can be -
and simply by using your putative P.
When you assumed P you stepped into a snare;
Your assumptions have led you right into my lair.
So, how to escape from this logical mess?
I don't have to tell you; I'm sure you can guess.
By reductio, there cannot possibly be
a procedure that acts like the mythical P.
You can never discover mechanical means
for predicting the acts of computing machines.
It's something that cannot be done. So we users
must find our own bugs; our computers are losers!
by Geoffrey K. Pullum
Stevenson College
University of California
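(For anyone who'd rather read the construction as code than as verse, here is the same diagonalization in Python. P is the hypothetical halt-checker; the whole point is that no such P can be written.)

def P(program, program_input):
    """Hypothetical: returns True iff program halts on program_input."""
    raise NotImplementedError("the poem's point: no such P can exist")

def Q(program):
    if P(program, program):   # if P says "Fine!" (it halts)...
        while True:           # ...loop endlessly back,
            pass              # til the universe dies and is frozen and black
    else:
        print("Loop!")        # but if P warns of loops, print "Loop!" and quit

# Now run Q on itself: Q(Q). Whatever P predicts about Q(Q), Q does the
# opposite, so P must be wrong. By reductio, P cannot possibly be.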
Re:Scooping the loop snooper (Score:1)
"Here's the trick I would use - and it's simple to do.
I'd define a procedure - we'll name the thing Q -
I'd define a procedure - we'll name the thing Q -
that would take any program and call P (of course!)
to tell if it looped, by reading the source; "
How does the last verse map to calling P(proc,proc)?
Re:Scooping the loop snooper (Score:1)
And you're getting the blues:
Just remember to always
Watch your Ps and Qs.
I'm still waiting (Score:5, Insightful)
To quote Joe vs. the Volcano: '99% of people go through life asleep; the remaining 1% walk around in a state of constant amazement.'
To add to that I'd say: 99% of people *think* they're awake; the remaining 1% know they've got some waking up to do.
There you have it, your Zen moment of the day.
To be quite honest, if I'm still waiting for a Photoshop render, or a level to load in RTCW, our machines aren't ready to think.
Re:I'm still waiting (Score:1)
Stop. Now, picture EVERY detail of a RTCW map. The textures, the physics, and the exact (not estimated) dimensions of all of the objects.
Took you more than five seconds, right? And if I were to look at a reference map of the same map you imagined, you'd have to "think" again to get the details I asked for--assuming that you have perfect memory.
Contrast this with the behavior of the bots in RTCW. IANAFPSD (I Am Not A First Person Shooter Developer), but I suspect that the "bot code" and the "map code" reside in two different spots of the program, and talk no more than Word and Winamp do.
Computers will think when admining them is "idiotproof"--and when we can make an idiot-bot. I say take Clippy, teach him how Windows works, and make him an administrator!
Hmm....
Bad words (Score:1)
Can be a scary thought (Score:3, Insightful)
To quote the [bad] movie Runaway:
"Humans aren't perfect so why should machines be perfect?"
Honestly, I see engineers and developers walking down the hall with their shirt half-tucked in and their shoes untied. A sign that either
I dunno. Maybe I'd feel better about all this if every time I turn around I didn't see Yet Another stack-overflow or buffer-overrun bug (yes, the quality of code is getting better but there is still too much of this crap.) Maybe I'm just a pessimistic pisser. Perhaps I enjoy laughing at an engineer when they fall flat on their face after tripping over their untied shoelace.
Wait for RedWolves2 to post a link (Score:2)
If you don't get the joke, you should look through his previous posts. About half of them are shills for amazon using his referrer tag.
Undergrad "futurist" literature (Score:2)
The examples aren't all that well-chosen, for one thing. The eBook isn't at a price point where people are going to adopt it -- and are there stable standards for files and so on yet? -- but it's not a great example of new technology that didn't "respect" the one it was trying to replace (or be an adjunct to, more like). The displays on those things got a ton of attention, because the designers knew they needed to be as easy on the eye as paper and ink. There are lots of tradeoffs between the two -- which is more "portable", if the one that can run out of batteries can also carry a large number of books in one small package? -- and the eBook just hasn't hit that sweet spot yet. But the companies behind its development were all big publishing companies, weren't they? They know books, they "respect" them. It's an okay point, but a shaky example. Anyway, the question of why and when things will think isn't nearly as interesting as the question of why and when people don't think... ;)
Promises, promises ... (Score:1)
I gave up on futurists (Score:4, Funny)
I mean, maybe he's right. But who cares?
We do not have a clue about AI (Score:4, Insightful)
"Thinking" has been ascribed to mechanical devices for quite some time. Watt's flyball governor for steam engines yielded such comments in its day. Railroad switch and signal interlocking systems were said to "think" early in the 20th century. At that level, we can do "things that think".
But strong AI seems further away than ever. After years in the AI field, and having met most of the big names, I'm now convinced that we don't have a clue. Logic-based AI hit a wall decades ago; mapping the world into the right formalism is the hard part, not crunching on the formalism. Hill-climbing in spaces dominated by local minima (which includes neural nets, genetic algorithms, and simulated annealing) works for a while, but doesn't self-improve indefinitely. Reactive, bottom-up systems without world models (i.e. Brooks) can do insect-level stuff, but don't progress beyond that point.
I personally think that we now know enough to start developing something with a good "lizard brain", with balance, coordination, and a local world model. That may be useful, but it's still a long way from strong AI. And even that's very hard. But we're seeing the beginnings of it from game developers and from a very few good robot groups.
Related to this is that we don't really understand how evolution works, either. We seem to understand how variation and selection result in minor changes, but we don't understand the mechanism that produces major improvements. If we did, genetic algorithm systems would work a lot better. (Koza's been working on systems that evolve "subroutines" for a while now, trying to crack this, but hasn't made a breakthrough.)
It's very frustrating.
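(The hill-climbing complaint is easy to demonstrate in a few lines of Python. The objective function and step size below are made up for illustration: start the climber in the wrong basin and it stalls on a minor bump, exactly the local-optimum trap described above.)

import math

def objective(x):
    # global peak near x = 4 (height ~3); deceptive local peak near x = 0
    return 3 * math.exp(-(x - 4) ** 2) + math.exp(-x ** 2)

def hill_climb(x, step=0.1, iterations=1000):
    for _ in range(iterations):
        here = objective(x)
        left, right = objective(x - step), objective(x + step)
        if left <= here and right <= here:
            break                             # no improving neighbor: stuck
        x = x - step if left > right else x + step
    return x

print(hill_climb(0.5))   # stalls at the local peak near x = 0
print(hill_climb(2.5))   # starts in the right basin, climbs to x = 4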
Re: "Strong AI", definition of (Score:2)
singing furniture in Beauty and the Beast (Score:2)
More wishful thinking from the AI establishment (Score:1, Insightful)
To the AI practitioners: You guys are no closer to understanding how human-level intelligence works today than you were thirty years ago, when the spectacular results that you got on very specific, well-defined problems made your head swell up.
In my view, the guy who takes a large chunk of the blame is Marvin Minsky who, after having seen not many (if any) of his extravagant forecasts realized, still refuses to adopt a more circumspect attitude. I am sure he was an AI guru during the 60s, but he has shown little capability to adapt and learn - and to stop making silly public announcements.
Having read the book... (Score:4, Interesting)
A week, huh? (Score:2, Funny)
$ uptime
1:31pm up 27 days, 14:04, 2 users, load average: 5.44, 6.23, 6.58
Re:A week, huh? (Score:2)
Plenty of stupid thinking creatures already. (Score:2)
There are already billions upon billions of "thinking" beings, most smarter than any existing man-made thinking machine, but many costing less (go to your pet store/SPCA etc. for examples). And when I last checked, the world isn't anything like the utopia they are talking about.
Sure, when there were human slaves, things were reasonably good most of the time for the slave owners, but slaves didn't and couldn't always do what you wanted either. There were plenty of other problems too.
As for humans being free to do things they find appealing, do you think we would easily be allowed to live as parasites on the thinking machines? I doubt it. Do we all get the same quota of blood to suck? Would other humans or the machines themselves allow it?
Why I doubt it: we have more than enough food in the world to feed everyone, yet masses are still starving.
Now if they are talking about some of us being able to have more toys and entities to play with then that's different.
Somebody Mod this RedWolves Scumbag Down (Score:1, Insightful)
So he's copping a buck. . . (Score:2, Insightful)
Personally, I don't see why RedWolves2 shouldn't post a link to Amazon and make a dollar if you follow that link.
If you don't like it, don't click. If he were offering free porn and you went to his site from which he makes advertising dollars, would you feel the same?
RedWolves2's post is on-topic and for some /.'ers a service.
Re:What about MS (Score:1)