When Things Start to Think

EnlightenmentFan writes "In When Things Start to Think, MIT Media Lab whiz Neil Gershenfeld predicts an appealing future of seamless, foolproof computers. User alert: Relentless optimism ahead. (I am ready to let MIT graft smart chips into my skin some day after my PC goes a week without crashing.) This is the book to buy for your folks to get them excited about nerds. It does also have some interesting stuff for nerds themselves." Read on for Enlightenment Fan's review.
When Things Start to Think
author: Neil Gershenfeld
pages: 225
publisher: Owl Books (paperback)
rating: For Slashdotters: 5 to read, 9 to give your folks
reviewer: EnlightenmentFan
ISBN: 080505880X
summary: Seamless, foolproof mini-computers coming up.

One underlying theme dear to Gershenfeld's heart is the death of traditional academic distinctions between physics and engineering, or between academia and commerce. Applied research is real research.

Another major theme is that older technologies should be treated with respect as we seek to supplement or replace them. For example, a laptop's display is much harder to read in most light than the paper in a book.

The book starts by drawing a contrast between Digital Revolution and Digital Evolution. Digital Revolution is the already-tired metaphor for universal connectivity to infinite information and memory via personal computers, the Internet, etc. Digital Evolution describes a more democratic future, from Gershenfeld's point of view, when computers are so smart, cheap, and ubiquitous that they do many ordinary chores to help ordinary people. When things talk to things, human beings are set free to do work they find more appealing.

"What are things that think?" asks the first section of the book.

Gershenfeld's whizbang examples won't be big news to Slashdot readers. My favorite is the Personal Fabricator ("a printer that outputs working things instead of static objects"), whose relationship to a full machine shop is like that of the Personal Computer to the old-fashioned mainframe. Gershenfeld actually has one of these in his lab (it outputs plastic doohickeys); seeing it was one of the high points of my visit there.

"Why should things think?" asks the second section.

My favorite here is the Bill of Rights for machine users. (In true Baby-Boom style, it's a list of wants arbitrarily declared to be rights.) "You have the right to

  • Have information available when you want it, where you want it, and in the form you want it
  • Be protected from sending or receiving information that you don't want
  • Use technology without attending to its needs"

Under the heading "Bad Words," Gershenfeld offers a snide but useful summary of many high-tech pop-sci buzzwords, showing how they get misused by people who don't understand their real content or context.

"How will things that think be developed?"

By making them small and cheap. By getting industry to pay the bills for targeted, practical research, using the Media Lab model TTT ("Things That Think.") By reorganizing education on the model of the Media Lab, where students learn things as they need them for practical projects, not all at once in a huge, abstract lump.

The book concludes with directions to various websites, including the Physics and Media Group (One of their projects these days is "Intrabody Signaling.") Slashdotters might also be interested in Gershenfeld's textbooks The Nature of Mathematical Modeling and The Physics of Information Technology.


You can purchase When Things Start To Think from bn.com, and Amazon has the paperback discounted to $11.20. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

  • by Anonymous Coward on Monday October 28, 2002 @10:19AM (#4547594)
    I mean, come on. We have pattern recognition, and bots that have huge libraries of information. We aren't anywhere near true AI, and won't be for several decades, unless some huge breakthrough occurs in learning algorithms.
    • I agree completely. I like the Media Lab folks' optimism, but the stuff they come up with seems so far-out that it's just not worth getting too excited about. Did you ever read "Being Digital?" It's a pleasant read, but it's so spaced-out that I have to wonder what sort of world Nicholas Negroponte lives in. It seems to be nothing like mine (and I live in a fairly high tech environment [CMU student, etc.]).
      • It doesn't sound like they care about "true AI" at all. I get the impression that they're focused on breaking simple tasks down so that simple electronic devices can make useful decisions about them. Manufacture millions of these cheap, problem-specific devices, and hand over an increasing amount of the "busywork" of living to them. Not true AI, but also already do-able.
    • That's right, AI hasn't started out the way you might imagine from watching sci-fi movies. Anyway, I think it will be something like a boom when it does start. I mean, lots of research is still in the pipeline, but interesting results are coming up every day, showing that intelligence evolution looks more like a log(n) than anything else (just look at us - human beings - 200 years ago). When it will start, we'll see, but it is certainly not too early to discuss it.
    • Our ability to mimic and/or produce human intelligence in machines is severely hampered by our poor understanding of how human intelligence works. The problem is so glaring that experts can barely even agree as to what human intelligence is.

      Until we understand how our own minds work, we're going to have a hard time getting machines to think as we do.

    • I've been working on a system that acquires a language of nouns and verbs in a bottom-up fashion based on visual perception. It is by no means intelligent, but I truly believe that grounded, bottom-up learning is the key to A.I.

      My dissertation is available online from my web site [greatmindsworking.com]. I hope to have the (open source) source code posted later today.

  • by leodegan ( 144137 ) on Monday October 28, 2002 @10:24AM (#4547637)
    I think we are going to look back a hundred years from now and say how silly we were to ever believe computers could think like we do.

    How is a computer program ever going to adopt abstract thinking and creativity? Is a computer program ever going to invent mathematics without previous knowledge of it just because it finds it to be a useful utility for solving problems?

    Heck, if someone could write a decent language translation program I might think there is a hope.
    • If you knew your biology well enough, you would realize that people are nothing more than a series of extremely complex chemical reactions set into motion by enzymes, unless by some chance we all have a "soul". This can and will be modeled by software someday.

      • If you knew your biology well enough, you would realize that people are nothing more than a series of extremely complex chemical reactions set into motion by enzymes, unless by some chance we all have a "soul". This can and will be modeled by software someday.
        It is naïve for you to suggest that this is understood with certainty. We are a long ways away from decoding the brain, and there are many theories that imply that the brain is actually a magnifier for quantum processes. For example, it is believed that the microtubules in the neuron's cell structure may be chambers that can amplify quantum processes to the point that they impact macroscopic processes in the brain. If this turns out to be the case, then we may never be able to decode the brain. For the past century physics has hit a barrier as far as our being able to understand how and why things work at the quantum level. There could be an ocean of mechanics and means behind this quantum barrier, but we may never have the capability to see it.
        • It is naive for you to suggest this is understood with certainty. We are a long ways from decoding the interaction of matter and energy, and there are many theories that imply energy and matter might actually be one. For example, it is believed that radioactive substances are really converting matter into pure energy in a way we cannot yet comprehend. If it turns out to be the case, then we may never be able to decode the nature of the universe. For the past century, this new field of physics has hit a barrier as far as our being able to understand how and why things work at the atomic level. There could be an ocean of mechanics and means behind this quantum barrier, but we may never have the capability to see it.

          ~Slashdot post in 1902

          Never underestimate the power of an inquisitive human building on the knowledge of all humanity. Human society is the most complex machine in the universe, but I have no doubt in my mind that with enough study even a simple human brain is capable of reducing it to symbols.
    • Well, here's the rub. It can only work out two ways, both of which the average slashdot user will find vaguely unsettling... Either:

      A) Computers can never think like we do. Well, if not, why not? There's no reason why you couldn't simulate the actions of neurons with sufficient numbers of transistors. If computers can never think like we do, it's either because we're insufficiently intelligent to recreate the human brain (unsettling) or because, for intelligent thought, maybe you need something like a soul (unsettling to the average slashdot atheist).

      B) Computers can think like we do. Isn't that unsettling enough as it is? Free will might as well not be real, since it can be simulated. So how do you know that you actually have it, and not a simulacrum?

      Really, there's no way that this can work out comfortably.
      • Free will might as well not be real, since it can be simulated.

        From Eliezer Yudkowsky's FAQ about the Meaning of Life [sysopmind.com] which is much too Singularity-optimistic and generally raving about AI, but still a good thing to read:

        4.5 Do I have free will?

        "Free will" is a cognitive element representing the basic game-theoretical unit of moral responsibility. It has nothing whatsoever to do with determinism or quantum randomness. Free will doesn't actually exist in reality, but only in the sense that flowers don't actually exist in reality.

        I'll go with your point B) :-)

      • There's no reason why you couldn't simulate the actions of neurons with sufficient numbers of transistors.


        That's not a given. We don't understand enough about how brains work to know that a whole bunch of transistors will be big and fast enough to simulate the brain. There are physical limits to consider.


        If computers can never think like we do, it's. . .because we're insufficiently intelligent to recreate the human brain (unsettling)


        Why is that so unsettling? Our minds evolved to solve problems such as finding food and shelter, and getting along with other humans. Artificially recreating the human brain has never been a criterion for survival. As it happens, evolution has provided us with a nifty system for generating new minds with natural materials such as food and water - you just have to tolerate some crying and spitting up for a few years.

      • There's no reason why you couldn't simulate the actions of neurons with sufficient numbers of transistors.

        Uhmmm. Not necessarily. There's also no reason why you couldn't simulate the weather of the earth with a sufficient number of transistors. That doesn't mean that this "sufficient number of transistors" will be able to fit into the known universe. Until we learn a heck of a lot more about how the brain works, both at a low level and at a high level (how do you recognize a smell when you smell it again?), conjecture about the "computability" of the brain is just that, conjecture. This, of course, is discounting the non-deterministic (possibly quantum) nature of our brains, which may be impossible to duplicate with deterministic transistors. Penrose makes some interesting points in his book, "The Emperor's New Mind", which I don't completely agree with, but it seems more reasonable than some other books about the future of computing and AI.

        The one thing we are sure of is that Kramnik doesn't process a chess board in the same way that Deep Fritz does. And we really don't have any idea what is happening inside of Kramnik's brain. Yet.

        EnkiduEOT

    • by ajs ( 35943 ) <ajs@ajs . c om> on Monday October 28, 2002 @10:37AM (#4547764) Homepage Journal
      Will computers ever think like we do?

      I hope not.

      Will computers ever out-think humans?

      Almost certainly.

      How soon?

      That depends on your metrics. When you speak of abstract thought, you're automatically applying a set of logical "filters" that have to do with evaluating the intelligence of humans with whom you interact and "opponents" with whom you must contend. In many ways, many machines already out-think humans in creative ways, but they are savants for the most part, only capable of thinking in narrowly pre-determined areas. We are constrained this way too. We cannot think four-dimensionally, for example. But, we do not consider that to be a major limitation. Perhaps someone who could think four-dimensionally would think of a human mind as "unintelligent".

      Bottom-line: machines keep getting smarter, but the problem of CONVINCING A HUMAN that you are smart means having some sort of survival and/or communication skills. Those problems are probably still 5-20 years off and involve massive learning simulations that will take years to evolve a suitable program. In the end, we'll probably be able to cut down on the time it took nature to create a human brain by a factor of several million, and improve on it substantially (removing a lot of the archaic reflexive responses, and replacing them with the ability to work in very large groups without breaking down, etc).
      • by Soko ( 17987 ) on Monday October 28, 2002 @10:57AM (#4547943) Homepage
        Nice post, but you assume that any human is capable of basic intelligent thought.

        IME, many are not. This might lead one to the thought that maybe our machines are nearer to our intelligence level than we think. ;^)

        Soko
      • machines keep getting smarter, but the problem of CONVINCING A HUMAN that you are smart means having some sort of survival and/or communication skills.

        And to be even smarter, convincing a human that you're not as smart as you actually are (a much harder communication task, which many smart humans fail at).

        • I disagree that hiding intelligence is a worthwhile goal. I know of many people that do not (and probably could not) hide their intelligence, and yet fit into social groups that are not as intelligent. What's hard for many people who can reason faster than most of their peers is that they never achieve the ability to communicate with them.

          This sort of communication is fraught with pitfalls and traps and seeming illogic. It may not even be an interesting problem to solve, as much of the complexity involved has to do with human defence mechanisms, which will not be present in full in any AI we produce (unless we do so by copying the structure of a human brain, which seems to be a technology that is quite a ways off).

          A machine should be able to, for example, explain a concept slowly and in ways that can be understood by the listener without feeling that their dominance is in question (thus resorting to sarcasm or being condescending) or that they need to respond to a challenge to their dignity (thus giving up or pushing the person to understand things they aren't ready for).

          That covers much of the problem with teaching. Then the reverse has many of the same pitfalls. You have to be able to know when to accept incorrect information or incomplete responses or to give incorrect information.

          I remember a time in High School when I realized that people who said "what's up" didn't want an answer, but just an acknowledgement. The problem? I could not bring myself to "violate" my own understanding of what it meant to communicate. I understood that saying "s'up" in response would be sufficient, and even appreciated, but I couldn't say it. It seemed alien and wrong. Therein lies the rub!
          • I disagree that hiding intelligence is a worthwhile goal. I know of many people that do not (and probably could not) hide their intelligence, and yet fit into social groups that are not as intelligent.

            I don't think it's worthwhile in all situations. However, it's useful:

            1. In adversarial or negotiation situations (games, battles, courtrooms, dealmaking) where being underestimated can have substantial impacts on your opponent's level of alertness and quality of assumptions.
            2. Because not all less intelligent groups are willing to integrate those who are obviously more intelligent, or have the cultural markers of being more intelligent.

            Teaching environments are almost definitively ones which have a more intelligent/educated/experienced person and one who is less so (note the difference between teaching and education which is much more likely to be a shared discovery). If you're in a teaching role, you do need to balance your greater whatever (which is explicit in your role) in the subject with a bit of humility that that subject is not the entirety of human wisdom. But that's different from hiding your intelligence.


      • Machines aren't getting Smarter - they're getting Faster.

        Quantitative change does not imply Qualitative change. ...and that is the biggest blunder of AI types like Kurzweil and his mindless drones.

      • Can't think fourth dimensionally? That's very easy. If you use time as the fourth dimension, it's extremely simple to think of a 4th dimensional line, "sphere", or any other 4th dimensional object. Thinking fifth dimensionally, however? I haven't tried that yet.

        The difference between human intelligence and computer "intelligence" is much more subtle and obvious than what you're looking for. It has more to do with concepts, linguistics, how we define intelligence, and perhaps even consciousness. Then again, who says intelligence is not a fundamentally irreducible concept? I've never heard a satisfactory definition of "intelligence" (most failed attempts actually use "intelligence" in the definition, or simply state a tautology).

        Machines are not getting smarter. While I was doing my graduate work in AI at UIUC, I slowly started to realize that AI is nothing but a farce, which is why I eventually switched my studies over to comp architecture. Sure, there are "good" algorithms written by intelligent people, but we've only shown (through Deep Blue and similar projects) that computers seem intelligent when we pair these algorithms with brute force methods, and come up with a satisfactory result. Is this intelligence?

        There are many examples of complex processes performed in nature that seem to be the result of intelligence, but when they're dissected further, turn out to be simple tasks that are performed over and over again, perhaps millions of times, with an impressive outcome. Is this the way that the human brain works? Probably. But the difference between our brains and my computer is so severe that at the current rate of "progress," artificial intelligence is perhaps millions of years away, if it ever happens at all. There just isn't very much to work with, and we don't even know what we're looking for.

        I once heard it said, "If the human brain were simple enough for us to understand, we would be so simple that we couldn't."


      • When you speak of abstract thought, you're automatically applying a set of logical "filters" that have to do with evaluating the intelligence of humans with whom you interact and "opponents" with whom you must contend.
        No, the distinction between human intelligence and computer intelligence IS abstract thought. It separates self-reference from self-awareness, syntax from semantics and referencing from understanding. Without abstract thought, programs will only "compute" things. They will not think like we do.
        Bottom-line: machines keep getting smarter, but the problem of CONVINCING A HUMAN that you are smart means having some sort of survival and/or communication skills. Those problems are probably still 5-20 years off and involve massive learning simulations that will take years to evolve a suitable program. In the end, we'll probably be able to cut down on the time it took nature to create a human brain by a factor of several million, and improve on it substantially (removing a lot of the archaic reflexive responses, and replacing them with the ability to work in very large groups without breaking down, etc).
        On what basis are you suggesting that computers keep getting smarter? I admit they keep getting faster, but they are only as "smart" as their algorithm. If you run a program on a faster computer, it comes up with the same stupid answer, only faster. Algorithms may be improving, but not at the rate the hardware is improving. Neural nets and fuzzy logic have been around for a long time.
    • by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Monday October 28, 2002 @10:43AM (#4547816) Homepage

      Is a computer ever going to invent mathematics without previous knowledge of it just because it finds it to be a useful utility for solving problems?

      No, we'll tell it about math. Note that I didn't think of math by myself, nor did you. It took humanity thousands of years to invent and perfect it, with millions of people using the state of the art of their time because that's what they were taught to do.

      It's conceivable that an AI could figure out some things like this from scratch, but in practice we won't do that (since we can teach it math, or hard code it). It's enough if it can sometimes think of some new method to solve a problem to be considered as intelligent as us, in my opinion.

      Your comment is like "how can a computer ever print a text? Is it going to invent writing, and an alphabet by itself?" :-). We're "allowed" to teach it the same things we teach our kids, and hardwire stuff that needs to be hardwired (like a lot of things are hardwired in our brain, vision, language structure, etc).

      And as for language translation, in my personal opinion, you need general AI before you can have human-language understanding, and you need that for translation.

    • Heck, if someone could write a decent language translation program I might think there is a hope

      One website for you: babelfish.altavista.com. While it might goof up occasionally, it generally translates well enough for me to get a good idea of the contents. Also, computers *have* been getting better at Turing tests (though only for limited domain interactions). I see no reason that computers cannot recreate some "abstract" (or at least seemingly so) patterns. Hell, if a computer can play chess, that's abstract enough "thinking" for me.


      • One website for you: babelfish.altavista.com. While it might goof up occasionally, it generally translates well enough for me to get a good idea of the contents.
        It is not really much better than using an English-to-Whatever dictionary to translate something. The program is computing the translation--it does not understand the contents of what it is translating.
        Also, computers *have* been getting better at Turing tests (though only for limited domain interactions).
        What is the formal distinction between something being conscious and not being conscious? If consciousness can be formalized (modeled with an algorithm), then this distinction must be formalizable. The Turing test is about as fuzzy of a distinction as you can get.
        Hell, if a computer can play chess, that's abstract enough "thinking" for me.
        Computer chess algorithms are not even close to what is needed for human intelligence. As far as game algorithms go, with chess there is a fairly limited set of possible outcomes to traverse and there is no hidden information. This type of "thinking" is right up a computational system's alley.
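        A minimal sketch of that kind of exhaustive, perfect-information search: generic minimax over a toy game tree in Python. This is only an illustration of the idea, not Deep Fritz's actual code.

        def minimax(node, maximizing=True):
            # Leaves are numeric scores from the maximizing player's point of view;
            # internal nodes are lists of child positions. With no hidden information,
            # the program can simply enumerate every line of play.
            if not isinstance(node, list):
                return node
            scores = [minimax(child, not maximizing) for child in node]
            return max(scores) if maximizing else min(scores)

        # A toy two-ply game: we pick a branch, the opponent picks the reply.
        print(minimax([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))   # prints 3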
    • by limekiller4 ( 451497 ) on Monday October 28, 2002 @10:47AM (#4547840) Homepage
      I think that what appears to be overly complex and, if you'd like to call it this, "subtle," is really nothing more than the illusion of complexity. Let me explain...

      Take a game of Go (aka, Baduk). You have a 19x19 grid. One player gets white stones, the other gets black. The players alternate playing stones on the intersections of the board (not in the boxes). This very, VERY simple setup leads to amazingly complex results such that no existing Go program can even come close to challenging a mid-level player much less a master.

      The point I'm trying to make is that extremely simple beginnings can lead to extremely complex behavior (a back-of-the-envelope count of Go's state space is sketched after this comment). Just because we seem complex does not mean that we are more than just a lot of very simple bits working together, in other words. I'm with Kurzweil in the sense that the brain is nothing more than matter operating under physical constraints. Mimic the parts and understand the constraints and you have, for all intents and purposes, a brain. And by extension a thinking thing.

      The question then becomes "have we captured the bits that matter?" ie, is there a soul?

      I'm an atheist. I'm not the guy you want to answer this question. And I'll refrain from touching on Wolfram's A New Kind of Science at this point... =)
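      The rough count promised above, in Python. This is only a loose upper bound on board configurations (most of these colourings are not legal positions), but it shows how a very simple 19x19 setup swamps brute-force search:

      points = 19 * 19                 # 361 intersections
      upper_bound = 3 ** points        # each point is empty, black, or white
      print(f"{upper_bound:.3e}")      # about 1.7e172 board colourings
      # For comparison, the number of atoms in the observable universe is
      # usually estimated at around 1e80.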
    • How is a computer program ever going to adopt abstract thinking and creativity?

      How do people do it? Until we can answer that question, you certainly can't rule out that computers can achieve the same.

      Is a computer program ever going to invent mathematics without previous knowledge of it just because it finds it to be a useful utility for solving problems?

      Yes. Herb Simon (a Nobel Prize/Turing Award-winning professor) always gave the example of BACON, a program that discovered Kepler's 3rd Law of Planetary Motion. Not bad. He always believed computers can and will think [omnimag.com], and I agree with him.

      • Herb Simon (a Nobel Prize/Turing Award-winning professor) always gave the example of BACON, a program that discovered Kepler's 3rd Law of Planetary Motion. Not bad.
        Kepler (and others before him) had to observe the phenomena of the planets' orbits, infer there was a relationship, then use trial and error until he found a formula that worked in each case. Simon had the computer assume the first two, and have the last as a goal. That didn't take any creativity or original thought, only testing possible formulas until it found one that worked (a toy version of that kind of search is sketched below).

        He always believed computers can and will think, and I agree with him.
        Which is at this point an irrational, unfounded belief. Even a basic understanding of AI can show how incredibly difficult it would be: computers can perform rote mathematical computations billions of times per second, far greater than any human, but have an incredibly difficult time with language and abstract thought, which young children can learn easily.
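        The toy version mentioned above, in Python: a brute-force hunt for a power law T = a^k over well-known planetary data, with the relationship and the goal assumed up front, exactly as the parent describes. This is not BACON itself, just a sketch of the "test formulas until one works" idea.

        # Semi-major axis in AU, orbital period in years (rounded textbook values).
        planets = {"Mercury": (0.387, 0.241), "Venus": (0.723, 0.615),
                   "Earth": (1.000, 1.000), "Mars": (1.524, 1.881),
                   "Jupiter": (5.203, 11.862), "Saturn": (9.537, 29.457)}

        def worst_error(k):
            # How badly does T = a**k fit, over all the planets?
            return max(abs(a ** k - t) / t for a, t in planets.values())

        # "Testing possible formulas until one works": try exponents 0.0 .. 3.0.
        best_k = min((round(0.1 * i, 1) for i in range(31)), key=worst_error)
        print(best_k)   # prints 1.5, i.e. T^2 proportional to a^3 (Kepler's third law)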
    • I have several objections to this, but first and easiest is; why would anyone want a machine to be as stupid as a human? When we're talking about thinking machines, I daresay we're talking about machines that work far better than we do.

    • Valid, but totally irrelevant. These guys are talking about breaking simple tasks down into steps that simple machines can perform, and even make useful decisions about. Leave creative thinking to the humans, if you like, but by all means shove off all the busywork on cheap, specialized EPROMs!

      For the life of me, I can't figure out why everybody's so obsessed with the idea of human-like AI when we could be focussing on optimizing the behaviors that computers already excel at.
    • Our current methods of invoking technological self-sustained thought may be flawed or dead ends, but that's not to say that someone may begin to take a completely new and elemental approach to the problem.

      The most fascinating and horrifying thing about computers designing computers is just how fast technology will evolve once that point is reached. Theoretically, software would have no bugs, hardware tolerances would be incredible. We'll laugh at the slowness of Moore's law.

      My question is, in a society where everything is designed and built by thinking machines with perfect memory and infinite endurance, what will we humans do? How will the economy work if nobody "works"? I guess we'll just be left to making art and writing fiction, as I doubt that even thinking machines would become fully proficient at that for quite some time.

    • Frankly, I have serious doubt about the ability of members of the human race to "think"...
  • So this is better? (Score:5, Insightful)

    by rimcrazy ( 146022 ) on Monday October 28, 2002 @10:24AM (#4547639)
    Humans already have loads of free time now and what do we do? We piss it away watching Jerry Springer and WWF, eating cheezy poofs on the sofa, turning into fat slobs.

    For me, I'd rather spend a little more time outside and with real people instead of wiring myself more than I already am.

    Technology has its place... serving me, not usurping me.

    • by SlightlyMadman ( 161529 ) <slightlymadman.slightlymad@net> on Monday October 28, 2002 @10:52AM (#4547892) Homepage
      I dislike the American tradition of television and cheezy poofs as much as you do, but I really don't think it's your place to judge whether or not that's a worthwhile way for someone else to spend their time.

      If somebody enjoys Jerry Springer and the WWF, and they're perfectly happy to sit around eating junk food and getting fat, then who are you to stop them? They probably find it just as baffling that somebody would want to go walking through the woods and just look at plants.

      It's difficult to see extra free time as a bad thing (unless you think about more abstract effects, like motivation and the value of unhappiness (necessity is the mother of invention, after all)). You use yours how you choose, as will I. Is it really better for a human to spend all of their time working, than to have a machine do it for them, so that human can at least "piss away" their time in a way that brings them pleasure?

      It's tough to spend time outside, when you're stuck in a factory all day long.
    • Humans already have loads of free time now and what do we do? We piss it away watching Jerry Springer and WWF, eating cheezy poofs on the sofa, turning into fat slobs.

      JER-RY! JER-RY! JER-RY! JER-RY!

      FIGHT! FIGHT! FIGHT! FIGHT!
  • A Point or Two (Score:4, Interesting)

    by e8johan ( 605347 ) on Monday October 28, 2002 @10:27AM (#4547664) Homepage Journal
    Sounds like he makes a point or two:

    "older technologies should be treated with respect as we seek to supplement or replace them"
    This is something that most launches of new and amazing gadgets fail to see. An ebook is not better if it cannot offer more than an ordinary book. An ordinary book is usually the best book there is.

    In the why section: "Be protected from sending or receiving information that you don't want "
    Like "bug reports" to M$ with so much irrelevant info in 'em that they aught to pay the poor sucker's [who send them in] internet bill.

    In the last section it looks like he is trying to get more funding: "By getting industry to pay the bills for targeted, practical research, using the Media Lab model TTT"
    • That is so true... (Score:3, Interesting)

      by SuperKendall ( 25149 )
      The primary advantage of ebooks is pretty much the ability to search text, and the fact that they take up little physical space.

      But that's only really useful for reference texts. For fiction, only the space savings is much of a benefit, and it's overwhelmed by all of the other complications ebooks offer (like needing power to read or having to deal with an interface to change pages).

      I think the most successful eBook will be when they make a "real" book with pages out of electronic paper, and let books "flow" in and out of the eBook. Then you still have a paperback that doesn't require power to read, but you can carry hundreds or thousands of books with you in the space of one physical book.
      • As I understand it, even electronic paper needs power to change appearance. However, you can probably integrate some sort of solar cells into the cover of the book by the time that you can make electronic paper.
  • The Diamond Age (Score:4, Interesting)

    by djkitsch ( 576853 ) on Monday October 28, 2002 @10:28AM (#4547680)
    My favorite, the Personal Fabricator, ("a printer that outputs working things instead of static objects")

    This bears resemblance to the "Molecular Compilers" imagined by Neal Stephenson [well.com] in everyone's favourite nanotechnology novel, The Diamond Age [amazon.com]: a device where you simply insert the program describing the object you want, plus payment, and return in an hour or so to retrieve your newly formed item.

    Gives a whole new meaning to Internet Shopping...
    • The Personal Fabricator really just sounds like a jumped-up 3-D printing device, of which there are many in use today (mainly for making molds and models in computer-integrated manufacturing). Some use various polymers to make shapes, some older ones use hundreds of sheets of paper and glue (the excess paper is cut away as the sheets are glued together), and still others (my personal favorites) use cornstarch and glue. They are all really interesting to use. It is amazing to create an object in AutoCAD, SolidWorks, or whatever, and then "print" it off. Of course, the objects created really can't be used for anything except seeing how they would turn out or making molds.
      • You say "jumped up", but there'd be a world of difference between something that could make a solid plastic model of a CAD drawing and a device that could build you a fully working digital camera from scratch.
  • by Anonymous Coward on Monday October 28, 2002 @10:28AM (#4547681)
    Don't buy into the same hype that he uses to charm tech companies into donating to the Media Lab. He's been spouting this stuff for so long he's starting to believe it.

    I also read several of his books: beware the typos and far-reaching statements. Although, "The Physics of Information Technology" is something I believe most /. readers would love. (If you ever actually use any of the formulas in that book, look them up elsewhere... they're always slightly wrong.)
    • by Anonymous Coward
      LOL. I have worked for Gershenfeld as well. Agreed about his writing skills, and he has definitely gotten so into his hype that he may not be able to discern what's realistic and/or useful from what's not.


      I found that there was a mix of pure BS and interesting if not necessarily useful work being done in the Physics and Media Group. Honestly, though some was BS, this was still better than most of what is done in the Media Lab, where most work is 90% BS. Go look through the current publications list here [mit.edu]. While not much of this is what I would consider "basic research", a lot of it is potentially interesting - physical one-way functions (which have been discussed on /. before), parasitic power harvesting, electric field sensing. Then some is just hokey beyond all belief (Electronic Music Interfaces: New Ways to Play, Instrumented Footwear for Interactive Dance). And some of it is stuff like NMR QC, which may someday pan out, though frankly it doesn't seem like anything terribly innovative has been done with this recently in the Gershenfeld group; that may change now that Isaac Chuang has moved there from IBM Almaden - still 50 years out from usefulness, if ever.

      • I haven't worked with Gershenfeld, but I have followed the Media Lab with some interest. At first, I approached news about the Media Lab with the awe that I believed appropriate to an elite institution, but after comparing what I knew from working in the technology field (in companies that are producing real products) with what Negroponte and others were saying, it became apparent that most of what the Media Lab spins about the future is pure marketing hype at best and total bullshit at worst. The Media Lab should be called the Media Playground. Mostly it's a bunch of talented people who play with technology. Playing with technology is fine, and valuable things can come from it, especially in basic research; however, by the very fact that it is grounded in play (i.e. something without an end or telos), rather than work, it is not going to be a good indicator of where society will be in 10, 20, or 100 years, because society, for the most part, is driven by economics, and economics has a very definite end: profit. Essentially the folks at the Media Lab are parlaying MIT's well-deserved reputation as an excellent engineering school into a claim of credibility in an unrelated field, product marketing, in order to attract funding. How many products developed in the Media Lab actually make money? I don't mean how many products have passed through the Media Lab (they do see a lot of the cool stuff first), but how many products that are based on research that originated in the Media Lab are making money? I am willing to bet fairly few, but I haven't run the numbers myself. That's why this quote is the funniest one in the whole review:
        "By reorganizing education on the model of the Media Lab, where students learn things as they need them for practical projects, not all at once in a huge, abstract lump."
        What a joke! It looks like the Media Lab is getting a little nervous about Olin College, whose focus is exactly that which is described, or his definition of "practical projects" is a little different than mine.
  • A week? (Score:5, Funny)

    by John Paul Jones ( 151355 ) on Monday October 28, 2002 @10:29AM (#4547689)
    [jpj@soul jpj]$ uptime
    10:27am up 46 days, 18:02, 19 users, load average: 0.69, 0.35, 0.23

    I must be late.

    -JPJ

  • Uptime (Score:4, Funny)

    by Arthur Dent ( 76567 ) on Monday October 28, 2002 @10:29AM (#4547692)
    My PC does run for more than a week without crashing:

    % uname -a
    SunOS <hostname-deleted> 5.7 Generic_106541-12 sun4u sparc SUNW,Ultra-2
    % uptime
    7:21am up 160 day(s), 19:11, 2 users, load average: 4.95, 4.40, 4.33

    Maybe you need a different PC?

    :)

    • C:\>uptime
      'uptime' is not recognized as an internal or external command,
      operable program or batch file.

      C:\>Windows has found unknown command and is executing command for it.

      C:\>Don't try to save your work because I'm rebooting now.

      C:\>Warning, could not upload pirated software registry to Microsoft
      • uptime is a Resource Kit application for WinNT/2k/XP. The average machine does not have it.
        Here's a sample from a few of our servers:

        [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        D:\Documents and Settings\jpitts>uptime [W2kAdvSrv1]
        \\[W2kAdvSrv1] has been up for: 62 day(s), 19 hour(s), 51 minute(s), 37 second(s)

        D:\Documents and Settings\jpitts>uptime [W2kAdvSrv2]
        \\[W2kAdvSrv2] has been up for: 62 day(s), 19 hour(s), 44 minute(s), 15 second(s)

        D:\Documents and Settings\jpitts>uptime [W2kAdvSrv3]
        \\[W2kAdvSrv3] has been up for: 61 day(s), 1 hour(s), 6 minute(s), 24 second(s)


        ...obviously, this could be a fake....
        • ...obviously, this could be a fake....

          Could be, but it's not unbelievable. W2K can be quite stable, as long as you load it with only a couple of stable applications and let it just sit there and run, like any server installation should. I've seen server installations that didn't do that, of course, but not everyone running Windows is stupid. ;)

          At the same time, test it under conditions more common for a home user (or a server with a poor admin) with a dozen or two random applications being started and stopped fairly frequently, and it crashes just like all of its predecessors. That's why I've been very impressed with my Mac, I use it fairly heavily, dozens of odd programs, games and all sorts of other strange stuff... unstable alpha software all over it. Never seen it crash yet. Corrupted the file system once, but that still didn't crash it, kept right on running while I repaired it. A friend of mine managed to crash his, but he won't tell me how, just that it took a lot of work. :)

    • But my uptime output isn't nearly so impressive, because I shut it down to save batteries sometimes.

      % uname -a
      Darwin <hostname-deleted> 6.1 Darwin Kernel Version 6.1: Fri Sep 6 23:24:34 PDT 2002; root:xnu/xnu-344.2.obj~2/RELEASE_PPC Power Macintosh powerpc
      %uptime
      5:04PM up 9 days, 13:28, 3 users, load averages: 0.72, 0.64, 0.60


      My x86 box had an uptime of 28 days once in linux... I rebooted to play an old game in windows. Even that box never crashed except when either running windows or when critical hardware failed... I agree, the poster needs a different PC, or maybe just a different OS.
  • And Then... (Score:2, Funny)

    by Anonymous Coward

    When the smart machine logically concludes that the human infestation is harmful to the planet.....

  • by Ted_Green ( 205549 ) on Monday October 28, 2002 @10:34AM (#4547740)
    As the header says, it does seem a bit overly optimistic. Esp: "When things talk to things, human beings are set free to do work they find more appealing." It just seems to scream utopian socialism, but more to the point, throughout our history with all the great time-saving inventions and methods, many "ordinary people" still spend as much time doing "chores" as they did 50 or even 100 years ago.

    Of course, if one is talking about the workplace then there's an entirely different issue: that of unemployment. (I'm not saying whether it's good or bad to introduce technology that can do another's job. I'm only saying it *is* an issue, esp. if you're someone whose job is at risk.)
    • The difference is that 100 years ago, you might have worked 10-12 hours a day to earn enough money to feed your family, and your wife would work at home all day doing laundry, mending clothes, cooking, etc... Now with many chores automated we get to own TVs, A/C, etc. It's not the elimination of work, it's removing some work so that we can focus on other things. History has shown that people don't use the extra free time machines give them to loaf around; they use it to produce more, and make their lives better, cleaner, and healthier.

    • My mother tells me a story about all of the wonderful optimistic products that she used to see right before movies. "The Chrysler Jet Car of the Future," or the "Push Button Kitchen."

      The most outrageous claim was that with all of those labor-saving devices, people would have a work week of about 22 hours, leaving ample time for family - which never materialized. Matter of fact, we are more efficient than ever, and have no free time at all. No one just pulls a 40 anymore... unless their company is in financial trouble.

      So my family made up this statement, that serves us well, and keeps us sane.

      "Increased performance in anything creates even more increased expectation, complication, and increased harassment."
  • by Dan Crash ( 22904 ) on Monday October 28, 2002 @10:37AM (#4547765) Journal
    ...when computers are so smart, cheap, and ubiquitous that they do many ordinary chores to help ordinary people. When things talk to things, human beings are set free to do work they find more appealing.

    This is the same old nonsense that's been touted ever since the age of the washing machine. Considering the thousands of labor-saving devices we've acquired throughout the 20th century, by this logic we ought to be living lives of perfect leisure now. But this isn't what happens. In industrial societies, "labor-saving" devices don't. Work expands to fill the time available. When things think, I'm sure you and I will be freed from the tedious chores of cooking, driving, cleaning, and living. We can become machines ourselves, consumed with work until we burn out or die.

    (More at Talbot's Netfuture [oreilly.com], if you're interested.)

    • by FeloniousPunk ( 591389 ) on Monday October 28, 2002 @10:48AM (#4547848)
      All work is not the same. I much prefer the sort of work where I can sit at my computer, and from time to time visit Slashdot, than being out in the elements digging ditches.
      Those labor saving devices do save labor, and I'm thankful for them. Just start washing your family's clothes by hand for a while and you'll see what they mean by labor saving.
      If I had to do all the chores that need to be done the way they were done in 1900, I'd sure as hell have a lot less leisure time. It ain't perfect leisure, but it's more leisure, and that's pretty good considering the alternatives.
  • Research (Score:5, Interesting)

    by sql*kitten ( 1359 ) on Monday October 28, 2002 @10:41AM (#4547791)
    One underlying theme dear to Gershenfeld's heart is the death of traditional academic distinctions between physics and engineering, or between academia and commerce. Applied research is real research.

    How would he know? The MIT Media Lab, under Nicholas Negroponte, doesn't do anything that any academic or industry practitioner would consider to be "research". You see, in the words of Negroponte, they live in a world not of "atoms" but of "bits". In the world of atoms, researchers have to produce such things as peer-reviewed papers and working prototypes. In the world of "bits", researchers are measured by the number of column inches they get in Wired magazine. The MIT Media Lab churns out books and articles by the tonne, but most of it is little better than scifi, and very little of it is even original.

    You would think that the hard-headed engineers at MIT would have seen that the Emperor has no clothes and would have cut off their funding by now, but mysteriously the Media Lab clings to life. They are an embarrassment to real futurists everywhere. Contrast them with the work done at IBM's labs, or BT's, or even Nokia's, where stuff is made that actually makes an impact on the real world a decade or two later.
    • Re:Research (Score:1, Interesting)

      by Anonymous Coward
      The Media Lab is not funded by MIT-- it's funded by corporate sponsors, and I believe all three of the companies you mention actually pay the Media Lab for rights to their inventions. (I'm not sure, it's been a while.)

      There's this funny misconception about the Media Lab because it has gotten tons of publicity in Wired-type futurist magazines, but if you actually stopped and tried to back up your statements, you'd find that there is an amazing amount of peer-reviewed research that comes out of most groups there. Just like any other good school. But I can see how most people would be blinded by their darling status.

    • The reason they cling to life is actually the bizarre concrete sculpture sticking out one side of the lab building.

      It's actually a hypnotic psychic antenna, broadcasting cool waves and attracting the impressionable to write big checks.
    • They don't cut off the funding because the Media Lab brings in a lot of money. Think of it as the Marketing division of MIT (see my other response).
  • I am ready to let MIT graft smart chips into my skin some day after my PC goes a week without crashing

    What? That means that you actually try to run it for several days without rebooting? You don't compile and try a new kernel twice a day? What the hell are you doing on /.?

  • by drhairston ( 611491 ) on Monday October 28, 2002 @10:43AM (#4547809) Homepage
    "How will things that think be developed?"

    By making them small and cheap.


    The invisible addendum to this sentence is expendable. Small, cheap, and expendable - the mantra of the Japanese economy. Someday we'll be so deep in silicon poisoning [svtc.org] that it will be a worldwide crisis, and we'll have to have a resolution like the Kyoto Protocol so that our president can ignore it. But like our automobile industry fifty years ago, we should march relentlessly ahead with abandon until we reach a crisis point, rather than attempt to head it off now.

    If machines could truly think they would be screaming at us: "Don't Throw Us Out!!!".

  • My grandfather once gave me a copy of this book. Being interested in what I do (studying Artificial Intelligence), he also read it. He found that it clarified the possibilities of AI and IT in general a lot. Since he doesn't have the slightest experience with computers, you'd generally expect the book not to be so interesting for someone deeper into the subject.

    But while it's true that the book doesn't get really technical and left me wondering about a lot of the details, the enthusiastic way it's written and the really original projects that are described make it a really nice read. It's really motivating and can help with the known problem of having learned a programming language and not having the slightest clue what to program in it.

    I think that when you don't see it as a computer book but as reading material for a holiday the book deserves more than a 5. Borrow it from someone and read it, it's not like it'll take a lot of time.

  • Smart OS (Score:1, Funny)

    by Anonymous Coward
    I am ready to let MIT graft smart chips into my skin some day after my PC goes a week without crashing

    Hmmm... -> EnlightenmentFan, I wonder what unstable OS he's running.
  • After reading Dune: The Butlerian Jihad, I'm not sure I like the idea of thinking machines.
  • 1. if my toaster starts to think I swear to God I'll shoot it. think about it.
    2. i can make your computer never, ever need rebooting if you promise to get chipped. you should be accounted for at all times.
    • if my toaster starts to think I swear to God I'll shoot it. think about it.

      Toaster philosophy:

      'If God is infinite, and the Universe is also infinite... would you like some toast?'

      Lister ended up taking the thing out with an axe, IIRC.

  • Cogito Ergo Sum (Score:2, Interesting)

    by scottennis ( 225462 )


    This is going to be one of those situations where technology outpaces our ability to deal with the philosophical issues involved.

    I know what you're thinking: "Enough with the philosophy bullshit."

    And, of course, that response demonstrates exactly why we need to consider the "philosophy bullshit."

    Medical advances have burst on the scene so suddenly that we've had to quickly come up with a new area called bio-ethics to deal with all the ramifications of our new abilities.

    What happens when washing machines [lifeseller.com] become self-aware?

    We need new definitions and new delimiters to help us cope with the new technology. Even the technologists have to create new semantics to help them create the new technologies.

    Of course, we could just keep it all to ourselves and say, "To hell with anyone who can't understand our science."

    But then we would just be a bunch of assholes who don't deserve the gift of intellect with which we've been endowed.

  • by johnrpenner ( 40054 ) on Monday October 28, 2002 @11:05AM (#4547974) Homepage

    Materialism can never offer a satisfactory explanation of the world. For every attempt at an explanation must begin with the formation of thoughts about the phenomena of the world.

    Materialism thus begins with the thought of matter or material processes. But, in doing so, it is already confronted by two different sets of facts: the material world, and the thoughts about it.

    The materialist seeks to make these latter intelligible by regarding them as purely material processes. He believes that thinking takes place in the brain, much in the same way that digestion takes place in the animal organs. Just as he attributes mechanical and organic effects to matter, so he credits matter in certain circumstances with the capacity to think.

    He overlooks that, in doing so, he is merely shifting the problem from one place to another. He ascribes the power of thinking to matter instead of to himself.

    And thus he is back again at his starting point. How does matter come to think about its own nature? Why is it not simply satisfied with itself and content just to exist?

    The materialist has turned his attention away from the definite subject, his own I, and has arrived at an image of something quite vague and indefinite. Here the old riddle meets him again. The materialistic conception cannot solve the problem; it can only shift it from one place to another.

    (Philosophy of Freedom, Chapter 2 [elib.com])

  • by m0i ( 192134 ) on Monday October 28, 2002 @11:12AM (#4548032) Homepage
    Here is a poem that illustrates the limitations of a computerized brain:

    No program can say what another will do.
    Now, I won't just assert that, I'll prove it to you:
    I will prove that although you might work til you drop,
    you can't predict whether a program will stop.

    Imagine we have a procedure called P
    that will snoop in the source code of programs to see
    there aren't infinite loops that go round and around;
    and P prints the word "Fine!" if no looping is found.

    You feed in your code, and the input it needs,
    and then P takes them both and it studies and reads
    and computes whether things will all end as they should
    (as opposed to going loopy the way that they could).

    Well, the truth is that P cannot possibly be,
    because if you wrote it and gave it to me,
    I could use it to set up a logical bind
    that would shatter your reason and scramble your mind.

    Here's the trick I would use - and it's simple to do.
    I'd define a procedure - we'll name the thing Q -
    that would take any program and call P (of course!)
    to tell if it looped, by reading the source;

    And if so, Q would simply print "Loop!" and then stop;
    but if no, Q would go right back to the top,
    and start off again, looping endlessly back,
    til the universe dies and is frozen and black.

    And this program called Q wouldn't stay on the shelf;
    I would run it, and (fiendishly) feed it itself.
    What behaviour results when I do this with Q?
    When it reads its own source, just what will it do?

    If P warns of loops, Q will print "Loop!" and quit;
    yet P is supposed to speak truly of it.
    So if Q's going to quit, then P should say, "Fine!" -
    which will make Q go back to its very first line!

    No matter what P would have done, Q will scoop it:
    Q uses P's output to make P look stupid.
    If P gets things right then it lies in its tooth;
    and if it speaks falsely, it's telling the truth!

    I've created a paradox, neat as can be -
    and simply by using your putative P.
    When you assumed P you stepped into a snare;
    Your assumptions have led you right into my lair.

    So, how to escape from this logical mess?
    I don't have to tell you; I'm sure you can guess.
    By reductio, there cannot possibly be
    a procedure that acts like the mythical P.

    You can never discover mechanical means
    for predicting the acts of computing machines.
    It's something that cannot be done. So we users
    must find our own bugs; our computers are losers!

    by Geoffrey K. Pullum
    Stevenson College
    University of California
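    A minimal Python sketch of the construction the poem walks through; P is the hypothetical halt-checker (which, as the poem proves, cannot actually be written), and the names just follow the verse:

    def P(prog, data):
        # Hypothetical oracle: returns True if prog(data) eventually halts ("Fine!"),
        # False if it loops forever ("Loop!"). No such procedure can exist.
        raise NotImplementedError

    def Q(prog):
        if P(prog, prog):       # P says prog(prog) will halt...
            while True:         # ...so Q loops "til the universe dies".
                pass
        else:                   # P says prog(prog) will loop...
            print("Loop!")      # ...so Q prints "Loop!" and stops.

    # The bind: feed Q to itself. If P(Q, Q) says "halts", then Q(Q) loops forever;
    # if it says "loops", then Q(Q) halts. Either way P answered wrongly, so no
    # such P can exist.
    # Q(Q)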
    • Nice halting proof. There's a little problem in defining Q:

      "Here's the trick I would use - and it's simple to do.
      I'd define a procedure - we'll name the thing Q -
      that would take any program and call P (of course!)
      to tell if it looped, by reading the source; "

      How does the last verse map to calling P(proc,proc)?
    • If your programs are stupid,
      And you're getting the blues:
      Just remember to always
      Watch your Ps and Qs.
  • I'm still waiting (Score:5, Insightful)

    by teamhasnoi ( 554944 ) <teamhasnoi@yahoo.cLIONom minus cat> on Monday October 28, 2002 @11:16AM (#4548073) Journal
    for the majority of *people* to think.

    To quote Joe vs. the Volcano: '99% of people go through life asleep; the remaining 1% walk around in a state of constant amazement.'

    To add to that I'd say: 99% of people *think* they're awake; the remaining 1% know they've got some waking up to do.

    There you have it, your Zen moment of the day.

    To be quite honest, if I'm still waiting for a Photoshop render, or a level to load in RTCW, our machines aren't ready to think.

    • To be quite honest, if I'm still waiting for a Photoshop render, or a level to load in RTCW, our machines aren't ready to think.

      Stop. Now, picture EVERY detail of an RTCW map. The textures, the physics, and the exact (not estimated) dimensions of all of the objects.

      Took you more than five seconds, right? And if I were to look at a reference version of the same map you imagined, you'd have to "think" again to get the details I asked for--assuming that you have perfect memory.

      Contrast this with the behavior of the bots in RTCW. IANAFPSD (I Am Not A First Person Shooter Developer), but I suspect that the "bot code" and the "map code" reside in two different spots of the program, and talk no more than Word and Winamp do.

      Computers will think when admining them is "idiotproof"--and when we can make an idiot-bot. I say take Clippy, teach him how Windows works, and make him an administrator!

      Hmm....
  • Under the heading "Bad Words," Gershenfeld offers a snide but useful summary of many high-tech pop-sci buzzwords, showing how they get misused by people who don't understand their real content or context.
    Does this list include "hacker"? IMNQHO, that should be at the top of the list.
  • by D0wnsp0ut ( 321316 ) on Monday October 28, 2002 @11:19AM (#4548101) Homepage Journal

    To quote the [bad] movie Runaway:
    "Humans aren't perfect so why should machines be perfect?"

    Honestly, I see engineers and developers walking down the hall with their shirts half-tucked in and their shoes untied. A sign that either

    • they can't think for themselves
    • they don't care enough
    Now, both of those indicators give me serious pause when I consider that they may be designing machines that "think." If a developer can't think for himself/herself, how is his/her "thinking" machine going to think? And if a developer doesn't even care enough to tie his/her shoes, does he/she care enough to engineer a "thinking" machine to the very high degree it requires--and can I trust him/her to care enough?

    I dunno. Maybe I'd feel better about all this if every time I turn around I didn't see Yet Another stack-overflow or buffer-overrun bug (yes, the quality of code is getting better but there is still too much of this crap.) Maybe I'm just a pessimistic pisser. Perhaps I enjoy laughing at an engineer when they fall flat on their face after tripping over their untied shoelace.

  • Oh wait, he already did [slashdot.org].

    If you don't get the joke, you should look through his previous posts. About half of them are shills for Amazon using his referrer tag.
  • Reads like an essay from an undergrad "futurism" class, yeah?

    The examples aren't all that well-chosen, for one thing. The eBook isn't at a price point where people are going to adopt it -- and are there stable standards for files and so on yet? -- but it's not a great example of new technology that didn't "respect" the one it was trying to replace (or be an adjunct to, more like). The displays on those things got a ton of attention, because the designers knew they needed to be as easy on the eye as paper and ink. There are lots of tradeoffs between the two -- which is more "portable" if the one that can run out of batteries can also carry a large number of books in one small package? -- and the eBook just hasn't hit that sweet spot yet. But the companies behind its development, those were all big publishing companies, weren't they? They know books, they "respect" them. It's an okay point, but a shaky example. Anyway, the question of why and when things will think isn't nearly as interesting as the question of why and when people don't think... ;)

  • We've heard all this before. I'm still waiting for my 1) Robot butler, 2) Flying car, 3) Fusion reactor, 4) Moon resort hotel, 5) Slidewalk. Futurists always get the future wrong. Whenever anybody, no matter how knowledgeable, makes a prediction about how things are going to be in more than five years, it always turns out differently. I'll believe in the thinking machines when I talk to one. On the other hand, the things that we do get turn out to be a complete surprise. In the 1960s nobody imagined that so many people would have computers in their homes.
  • by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Monday October 28, 2002 @11:58AM (#4548457) Journal
    I gave up on futurists when Alvin Toffler predicted that "in the future" we'd wear paper clothing.

    I mean, maybe he's right. But who cares?
  • by Animats ( 122034 ) on Monday October 28, 2002 @12:21PM (#4548709) Homepage
    I'm surprised to hear the Media Lab guru talking about "things that think". This is meaningful only for a very low definition of "think".

    "Thinking" has been ascribed to mechanical devices for quite some time. Watt's flyball governor for steam engines yielded such comments in its day. Railroad switch and signal interlocking systems were said to "think" early in the 20th century. At that level, we can do "things that think".

    But strong AI seems further away than ever. After years in the AI field, and having met most of the big names, I'm now convinced that we don't have a clue. Logic-based AI hit a wall decades ago; mapping the world into the right formalism is the hard part, not crunching on the formalism. Hill-climbing in spaces dominated by local minima (which includes neural nets, genetic algorithms, and simulated annealing) works for a while, but doesn't self-improve indefinitely. Reactive, bottom-up systems without world models (i.e. Brooks) can do insect-level stuff, but don't progress beyond that point.
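
    (A toy illustration of that hill-climbing point -- my own sketch, with a
    made-up cost function, not code from the book or from any AI system: a
    naive local search improves quickly at first and then stalls in whatever
    dip it happens to land in.)

    import math
    import random

    def cost(x):
        # Bumpy landscape: global minimum at x = 0, local minima all over.
        return x * x + 10 * math.sin(3 * x) ** 2

    def hill_climb(x, steps=20000, step_size=0.05):
        # Accept only strict improvements -- no way to climb out of a basin.
        for _ in range(steps):
            candidate = x + random.uniform(-step_size, step_size)
            if cost(candidate) < cost(x):
                x = candidate
        return x

    random.seed(1)
    for start in [random.uniform(-10, 10) for _ in range(5)]:
        end = hill_climb(start)
        print("start %+6.2f -> stuck at %+6.2f, cost %6.2f"
              % (start, end, cost(end)))
    # Most runs settle into a nearby local dip rather than the global minimum.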

    I personally think that we now know enough to start developing something with a good "lizard brain", with balance, coordination, and a local world model. That may be useful, but it's still a long way from strong AI. And even that's very hard. But we're seeing the beginnings of it from game developers and from a very few good robot groups.

    Related to this is that we don't really understand how evolution works, either. We seem to understand how variation and selection result in minor changes, but we don't understand the mechanism that produces major improvements. If we did, genetic algorithm systems would work a lot better. (Koza's been working on systems that evolve "subroutines" for a while now, trying to crack this, but hasn't made a breakthrough.)

    It's very frustrating.

  • In B & B the furniture gossiped, sang, and danced. Can't wait for the day when the things in my house sing and dance too, thanks to the Media Lab. :-)
  • by Anonymous Coward
    Is it that these guys can't learn, or won't learn? They have been preaching the same delirious projections for some three decades now, and look where we are. Have you guys tried to interact with ALICE, the most recent Loebner Prize winner? It's really pathetic.

    To the AI practitioners: You guys are no closer to understanding how human-level intelligence works today than you were thirty years ago, when the spectacular results that you got on very specific, well-defined problems made your head swell up.

    In my view, the guy who takes a large chunk of the blame is Marvin Minsky, who, after having seen few (if any) of his extravagant forecasts realized, still refuses to adopt a more circumspect attitude. I am sure he was an AI guru during the 60s, but he has shown little capability to adapt and learn--or to stop making silly public announcements.
  • by dmorin ( 25609 ) <dmorin@gmail . c om> on Monday October 28, 2002 @12:56PM (#4549037) Homepage Journal
    ...three things stay with me (although honestly I think only two of them are explicitly mentioned and I am extracting a third from that).
    1. The throwaway technology example of those little plastic/metal strips that set off the security alarm if you steal clothes. You have to be able to make such things for less than a penny and assume that they will all be thrown away. Years ago when I started talking to people about smart cards, they cost a few dollars apiece and the first question was always "Wait...I have to buy these and then give them to people?" Once you can make smart devices (and by smart I believe he defines it as needing enough memory to have a unique id, or something like that, and maybe transmit it?) then you are well on your way to a level of ubiquitous computing that you can't imagine *without* that. Imagine the audience of people that own a PDA. Now imagine the audience of people that, say, wear clothes. The numbers are staggeringly different. Will everybody eventually own a PDA? Unlikely. Could we potentially embed PDA-like technology in clothes? Sure.
    2. Power. Batteries are a huge problem in their clunkiness, weight, and generally short lives. If I recall, this book talked about things like a power source in your shoe that would recharge throughout the day as you walked. "Ubiquitous recharging", anyone? If we combine this with the first point about throwaway technology, people will no longer think "Damn, time to recharge my coat"; they will expect to just buy a new one. Therefore if the batteries die out too often, this is no good. The batteries need to last as long as the coat lasts, without explicit recharging.
    3. Thinking. (Here's the one I'm not sure was specifically mentioned in the book). A famous quote is attributed to Minsky where he says "My thermostat has opinions. It has three of them. It is too cold in here, it is too hot in here, it is just right in here." By that logic, one could argue that the penny-costing strip "thinks" that it is still in the store, or thinks that it has just been removed from the store. Much like the emergent behavior found in cellular automata and artificial life, there is no rule that says "thinking" must come from higher-level processes. Didn't Minsky's "Society of Mind" deal with a similar concept, that higher-level thought is really just a collection of lower-level ones?
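
    (Since point 3 begs for a concrete toy, here is a throwaway sketch of the
    "thermostat with three opinions" -- my own illustration with made-up
    names, not anything from the book.)

    class Thermostat:
        def __init__(self, set_point=20.0, tolerance=1.0):
            self.set_point = set_point
            self.tolerance = tolerance

        def opinion(self, temperature):
            # Exactly three "opinions", as in the Minsky quote.
            if temperature < self.set_point - self.tolerance:
                return "it is too cold in here"
            if temperature > self.set_point + self.tolerance:
                return "it is too hot in here"
            return "it is just right in here"

    t = Thermostat()
    for reading in (15.0, 20.5, 27.0):
        print(reading, "->", t.opinion(reading))
    # "Thinking" only in the weakest sense -- the same sense in which a
    # penny-priced security tag "thinks" it has just left the store.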
  • (I am ready to let MIT graft smart chips into my skin some day after my PC goes a week without crashing.)

    $ uptime
    1:31pm up 27 days, 14:04, 2 users, load average: 5.44, 6.23, 6.58
  • Why do they believe that when things start thinking and talking to each other autonomously things will get better?

    There are already billions upon billions of "thinking" beings, most of them smarter than any existing man-made thinking machine, and many costing less (go to your pet store, SPCA, etc. for examples). And when I last checked, the world isn't anything like the utopia they are talking about.

    Sure, when there were human slaves, things were reasonably good most of the time for the slave owners, but slaves didn't and couldn't always do what you wanted either. There were plenty of other problems too.

    As for humans being free to do things they find appealing, do you think we would easily be allowed to live off the thinking machines like parasites? I doubt it. Do we all get the same quota of blood to suck? Would other humans or the machines themselves allow it?

    Why I doubt it: we have more than enough food in the world to feed everyone, and yet masses are still starving.

    Now if they are talking about some of us being able to have more toys and entities to play with then that's different.
