
Review: The Age of Spiritual Machines

Remember Hal in Stanley Kubrick's "2001"? He was a wuss compared to the deep thinking digital machines Ray Kurzweil suggests are heading our way over the next century. Forget the debate over human versus artificial intelligence. "The Age of Spiritual Machines" suggests that we and our computers are about to become one, evolving together in a whole new bio-digital species. By 2020, computers will be as smart as we are. By 2099, there will no longer be any clear distinction between humans and computers. Is this techno-hype or prescient futurism?

In 1990, inventor Ray Kurzweil predicted in "The Age of Intelligent Machines" that the Internet would proliferate rapidly and that the defeat of a human chess champion by a computer was imminent.

He was right on both counts, so it's worth paying attention to his new book, "The Age of Spiritual Machines" (Viking, $25.95). This round, Kurzweil is making even more radical predictions - namely, that computing will develop so rapidly over the next century that technology and human beings will literally merge in social, educational, biological, even spiritual ways.

Kurzweil has ratcheted up the human-versus-artificial intelligence debate a few notches. There will, he makes clear, be no human intelligence versus artificial intelligence. We and our computers will become one.

This theory picks up where Moore's Law leaves off. Gordon Moore, one of the inventors of the integrated circuit and former chairman of Intel, announced in 1965 that the surface area of a transistor - as etched on an integrated circuit - was being reduced by approximately 50 per cent every twelve months. In 1975, he revised the rate to 24 months. Still, the result is that every two years, you can pack twice as many transistors on an integrated circuit, doubling the number of components on a chip as well as its speed.

Since the cost of an integrated circuit has stayed relatively constant, the implication is that every other year brings twice as much circuitry running at twice the speed for the same price. This observation, known as Moore's Law on Integrated Circuits, has been driving the acceleration of computing for decades.

The most advanced computers are still much simpler than the human brain, currently about a million times simpler. But computers are now doubling in speed every twelve months. This trend will continue, Kurzweil predicts, with computers achieving the memory capacity and computing speed of the human brain by approximately the year 2020.
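
The arithmetic behind that date is easy to check. The following sketch is not from the book: the millionfold gap and the twelve-month doubling period are the figures quoted above, and the 1999 starting year is an assumption. It simply counts how many annual doublings it takes to close a factor-of-a-million gap:

    # Back-of-the-envelope check of Kurzweil's timetable. Assumptions: computers in
    # 1999 are roughly a million times simpler than the brain, and their capability
    # doubles every twelve months (both figures come from the review); starting the
    # clock in 1999 is our own assumption.
    gap = 10 ** 6        # brain is ~a million times more capable than today's machines
    year = 1999          # assumed starting point
    capability = 1.0     # today's computer capability, in arbitrary units

    while gap > capability:
        capability *= 2  # one doubling per year
        year += 1

    print(year)          # 2019 -- since 2**20 is about 1.05 million, twenty annual
                         # doublings close the gap, which is how "approximately 2020" falls out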

This is a stunning idea. Human evolution is seen by scientists as a billion-year drama that led to its greatest creation: human intelligence. Computers will get to the same point in less than a hundred years. It's time - past time, actually - to start asking where they will go from here.

Kurzweil doesn't argue that next year's computers will automatically match the flexibility and subtlety of human intelligence. What he predicts is the rapid rise of what he calls the software of intelligence. Scanning a human brain will be achievable early in the next century, and one future approach to computing will be to copy the brain's neural circuitry in a "neural" computer designed to simulate a massive number of human neurons.

"There is a plethora of credible scenarios for achieving human-level intelligence in a machine," writes Kurzweil. "We will be able to evolve and train a system combining massively parallel neural nets with other paradigms to understand language and model knowledge, including the ability to read and understand written documents."

Kurzweil's own law of accelerating returns is centered on the idea that as this new bio-digital species becomes increasingly learned and sophisticated, life will become more orderly and efficient, while technological development continues to accelerate.

Kurzweil's premise - that computers will become as smart as we are and then merge their intelligence with ours -- is not only challenging and provocative; it also makes sense. But he isn't as clear or coherent when it comes to divining just what kind of intelligence computers will have - how intuitive they can be, how individualistic or ethical.

By the second decade of the next century, there will be reports of computers passing the Turing Intelligence test, says Kurzweil. The rights of machine intelligence will become a public policy issue. But machine intelligence will still largely be the product of collaborations between humans and machines, computers still programmed to maintain a subservient relationship to the species that created them. But not for long.

Where his book and his vision stumble is in grasping what will happen to us when computers become smarter than we are, then sensual, social or spiritual. Will we be better off? Will the computer be moral? Will it have a social or other consciousness? Do we wish to merge with computers into one species? Will we have any choice? We could be heading for a sci-fi nightmare or, alternatively, for another of those utopian visions that used to pepper Wired magazine before it became the property of Conde Nast.

While futurists can measure or plot the computational skills of tomorrow's computers, can anyone really know the precise nature of that intelligence, and whether or not it can replicate the functions of the human brain?

The idea of our being outsmarted, thus dominated and endangered by computers, has been portrayed as a nightmare in Stanley Kubrick's "2001" (Kubrick apparently greatly underestimated the virtual person Hal would become). It's also surfaced in various rosy intergalactic Disney-like visions in which machines perform labor, clean the air, heal humans, teach kids. Kurzweil doesn't say which notion, if either, sounds more plausible.

The latter half of the book becomes essentially a time-line: Kurzweil somberly walks us through the evolution of computing intelligence, and the eventual merging of digital technology and human beings into a new species.

By 2009, Kurzweil predicts, human musicians will routinely jam with cybernetic musicians. Bioengineered treatments will have greatly reduced the mortality from cancer and heart disease. But human opposition to advancing technology will also be growing, an expanding neo-Luddite movement.

By 2019, nonetheless, Kurzweil predicts that computers will be largely invisible, embedded in walls, furniture, clothing and bodies - sort of like the artwork in Bill Gates' massive new mansion. People will use three-dimensional displays built into their glasses, "direct eye" displays that create highly realistic virtual visual environments overlaying the real one. This display technology projects images directly onto the human retina, exceeds the resolution of human vision, and will be widely used regardless of visual impairment. Paraplegics will routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.


By 2019, there will be almost no human employment in production, agriculture, or transportation, yet basic life needs will be met for the vast majority of the human race. A $1,000 computing device will approximate the computational ability of the human brain that year, and a decade later, the same amount of money will buy the computing capacity of about 1,000 human brains.

By the year 2099, a strong trend towards the merger of human thinking with the world of machine intelligence that humans created will be underway. There will no longer be any clear distinction between humans and computers. Most conscious entities will not have a permanent physical presence. Life expectancy will no longer be a viable term in relation to intelligent beings.

Small wonder Kurzweil expects a growing discussion about the legal rights of computers and what constitutes being "human." Direct neural pathways will have been perfected for high-bandwidth connection to the human brain. A range of neural implants will be available to enhance visual and auditory perception; machine-generated literature and multimedia material will be widespread.

"The Age of Spiritual machines" surpasses most futuristic predictions of sci-fi writers and technologists. Scientists and programmers may not be the best judge of the nature of artificial digital intelligence. Some input from biologists and neurologists might have been useful. Sometimes, Kurzweil's predictions read like the numbingly familiar, gee-whiz techno-hype that infect mass media discussions of the Internet.

Yet Kurzweil is someone to be taken seriously. No nutty academic or cyber-guru, he was named MIT's inventor of the year in 1988, and he received the Dickson Prize, Carnegie Mellon's top science award, in 1994.

Caution is still in order. Kurzweil's earlier predictions about the Net and chess were short-term, and thus much more cautious and feasible. Only the new bio-digital species will know if these visions turned out to be right.

Predictions about the future of technology have a checkered history, itself a cautionary tale for futurists. Walt Disney was convinced we'd be whizzing back and forth to Saturn on weekends by now. We were surely supposed to be controlling the earth's climate rather than worrying about holes in the ozone layer. And whatever happened to cancer cures and hover cars?

But it's hard to find any parallel with the history of computing. The growth of digital machines suggests the future of computers should be taken more, not less, seriously. "The Age of Spiritual Machines" is a wake-up call. It reminds us that the relationship between human beings and the remarkable machines they've invented is almost sure to change radically. Perhaps it's time to start thinking seriously about how.

Buy this book here.

You can e-mail me at jonkatz@slashdot.org.


Comments Filter:
  • by Anonymous Coward
    I think that you're missing the point of the Turing test.

    What we start with is the question of what intelligence is. Is it the power to complete a mathematical proof? The power to debate the merits of a Parliamentary government structure? The power to love?

    If we sat down and tried to think up a set of criteria that separates the intelligent from the non-intelligent, we'd fail. Computers can already do mathematical proofs (check out OTTER), puppies can love, and, well, what's there to debate about the parliamentary system? :)

    What we can all agree on is that (most?) people are intelligent. Why? What's to lead us to believe that Mrs. X has intelligence? Well, we have nothing concrete. We assume they do because they're human, and so are we, and since we're intelligent, so are they. But we don't really have any evidence that, for example, they have emotions.

    Now, the beauty of the Turing test is that it avoids the definition of intelligence. All it says is that something that can consistently pass the test is indistinguishable from a person. Now, if that doesn't mean that the subject is intelligent, then I have to accept that I can't tell if all of those pesky people out there are intelligent.

    In that case, where are we? Philosophically, we're fine, but in a functional sense, we're lost.

  • Reproducing intelligence in computers may be possible, but only because we define intelligence in our own narrow way. Artificial intelligence is no intelligence; it is the closest thing resembling the notion of intelligence we have at this moment. People don't even understand themselves yet. They may know the outer shell, the molecules, the plumbing, but they don't know the essence of things; they can only barely see a distorted image of the shallow surface they perceive as the truth. That, they can mimic in computers, and because their own consciousness is limited they will not be able to tell the difference. This is reflected in the Turing test, which is no more than "what I see is there". Then again, if you want to get to know more about consciousness maybe you should ask a Buddhist instead of a mathematician.

    So in my opinion, this book is not much different from those people in the 60's who believed that we'd all live in glass bubbles all over the solar system by the year 2000. Evolution itself is much more advanced than these futurists can grasp, and I believe the next hundred years will definitely be interesting and bring us things we could have never imagined. But I also think they will not even resemble "spiritual computers", which in itself is a contradiction in terms.

    ps: About Deep Blue: A computer beating a chess player is the same as a bulldozer running over a flower. Brute force will get you there but it doesn't mean shit in terms of intelligence.

    ps2: If I want to "merge" with something or reproduce I'd get myself a date and not some piece of metal. If you think otherwise it's time you get away from your computer and sniff in some fresh air from outside.
  • Borg Me, Linux [airwindows.com]
    I'll be the brain, personality and instincts. I'm good at that, but I cannot add with any serious impressiveness.
    You be the nervous system doing my every whim ;)
  • 1) AI is going to need some *major* advances in the next 20 years, ones that I don't think will happen fast enough. For one thing, AI now is still the same as it was 20 years ago.
    The best the computer could do was learn from its mistakes, and anyone who ever took a programming class can write something like that. No one is still quite sure how the brain works, and until we understand that, we can't duplicate its functionality.
    Take a look at a real AI example: speech recognition. It's been around for years, and the technology and accuracy are improving, but it still can't handle the context of words. Go try the latest version of Dragon Dictate as an example.

    2) Technology is starting to hit its limits. NPR had a report a few months ago about the fact that with chip manufacturers using smaller and smaller features, there's no way to etch them without using X-rays (someone with a better knowledge of this back me up here). This indicates that Moore's law may be running into a brick wall in the next few years. With a limit in growth of CPU horsepower, you'll start to see limits on AI, since you need a lot of CPU speed to try and emulate the human brain.
  • Posted by HolyMackeralAndy:

    Over ten years ago I saw Timothy Leary speak. He discussed this very thing. Nothing new.....
  • Posted by jonrx:

    :)
  • Posted by Konstantinos Margaritis:

    It is definitely not, but of course that's what you learn in the universities.
    If we naively consider the universe isotropic then it might perhaps be constant.
    But it has been proven that it behaves like a crystal, albeit its crystalline properties are really minuscule.
    But it does make c (that is, the speed of light) dependent on the direction.
    This has to do with the fact that Einstein based his theories on Riemann space.
    The relatively new theories that are to complement Einstein's are based on Finsler spaces, which are anisotropic by default, and the speed of light is just a variable.
    This sounds simple as an idea, to me at least. I mean, why should the universe be isotropic? Isotropy is just a mathematical property, whereas the universe is characterized by properties like self-similarity, fractality, chaos, variety and quantum properties.
    If one reads some cosmology, a crude and underestimating analogy is that we are just a small ant colony that is out to explore the Pacific in a leaf.
    We don't even know what an electron is, and we want to duplicate a brain full of correlating atoms and electrons!
    Ha!
  • by gavinhall ( 33 )
    Posted by Dacre:

    There is a new train of philosophy that suggests that the brain holds the answer to the conundrum of deterministic time.

    If time is analysed in relativistic terms, then it becomes a dimension along which movement is possible but whose topography in relation to other dimensions is fixed - just as all events that happened last Thursday are unchangeable, all events that will happen next Thursday are unchangeable.

    Most people are personally convinced that it doesn't work this way, although they remain open to the possibility that in an unconscious universe (matter, atoms, stars, space dust, etc.) things might - accurate predictions could be possible.

    If consciousness can be established, through quantum actions of the brain, to manipulate time in some way, then we regain free will, and the ability to manipulate events that we perceive as yet to take place by "warping" time itself. This suggests we each and all carry with us our little pockets of Heisenbergian uncertainty.

    There is certainly activity taking place in the brain that demonstrates a potential for the organ to be functioning at a quantum level, and that it is these quantum interactions that may explain consciousness as the biological process governing this manipulation.

    If this philosophy is true then it is unsurprising that transistor-based neural nets remain "dumb" machines.
    Perhaps the article on quantum dots posted a couple of articles after this one describes a milestone in the birth of the first true AI.
  • Before you believe this, go to a university and study logic. I did it in Math, but I understand philosophy has similar studies that aren't as difficult. All these arguments need to be reconciled with Gödel's incompleteness theorem.

    I think electrical engineers will tell you that Moore's law isn't expected to hold out that long, because the size of atoms is larger than the predicted feature size on a chip. Not sure exactly here since that isn't my field.

    I welcome the days when computers can do the boring, difficult tasks. (There will be farmers though, but farmers have never made money, and so robots killing any possibility of money doesn't stop them.)

    Musicians regularly work with computers to create music. I prefer the sound of acoustic music though, and I have heard musicians who cannot play a keyboard and never will be able to, because they are outplaying a mechanical piano and the computers can't capture the feeling of a real piano.

    Computers will help. The disabled will love the new mobility. The rest of us will enjoy other benefits. They won't take over. They can't. Go see some logic.

    I suppose that this book will appeal to those who know nothing about technology. They buy (and believe) the National Enquirer, the Weekly World News and similar papers.

  • First, a couple of facts:

    It's not Kubrick's 2001 - it's Arthur C. Clarke's. Kubrick just did the movie.

    Second, none of these predictions are new, and they're not particularly original. Ever read the cyberpunk authors, like William Gibson, Bruce Sterling, or Walter Jon Williams? They all predicted these things in the early '80s (or even before). I'm sort of underwhelmed by someone who came along nearly 10 years later and "predicted" these very same things, when in fact we were well on our way to fulfilling them and the "trends" weren't very hard to see at all.

    The term Neo-Luddite is lifted from William Gibson, by the way. (but then, so is the term Cyberspace, so that's probably OK).

    "The Age of Spiritual machines" surpasses most futuristic predictions of sci-fi writers and technologists.

    Wrong. It apes them, and not very well from the sound of things. Sounds like John Katz needs to do a bit more reading before extolling the virtues of this kind of literature. It raises some interesting points, yes, so it's fodder for good discussions, but at the same time it's not original and doesn't seem to bring anything new to the table.

    I'll check out a used copy sometime.
  • Go read up on neural nets. Go read up on cognitive science -- yes, all the disciplines besides AI.

    You'll notice something interesting: the engineering end run attempted with AI has been failing miserably. Strong AI -- which is what you're talking about -- has been following in the footsteps of cognitive psychology and philosophy since its outset: it has repeated all the same mistakes, fallen into all the same fundamental quandaries... At least it's crystallized the frame problem.

    Also of interest is that many of the big names behind computational functionalism as a theory of cognition are now jumping ship. These would be the founders of the field.
    --

  • When was the last time that you saw an object move in digital space?
    Just now, probably. The best of our current understanding indicates that the universe is, at its basest level, digital, with the "sampling rate" being Planck time. I happen to think that digital can be better than analog, if the bitrate is such that I can't tell the difference between it and analog.

    I do think it'll be a while before we understand how consciousness works.


    --Phil ("The Meta-Turing test defines a species as intelligent if it seeks to apply Turing tests to devices of its own creation.")
  • This guy obviously is bored.
  • Anyone who says that the human brain is 'a million times more intelligent' than a computer is a twit, end of story.

    1. Just how do you measure intelligence? (see the SJ Gould book 'the mismeasure of man' for discussion on why you can't). (And the recent /. IQ vote)

    2. This assumes that computers are intelligent AT ALL. They are no more intelligent than a rock, just more useful for some tasks.

    While I'm at it, Moore's Law is heading for the rocks, because it's pretty obvious you CAN'T keep doubling the transistors/area forever, unless you invent sub-atomic transistors, which seem just a little unlikely. We are already nearing the barrier caused by the indivisibility of atoms - some storage devices have a bit density approaching the atomic density of the medium (IBM are getting close to one bit stored on just a few molecules)

    But MOST OF ALL:

    NO-ONE has even a vague shadow of a theory on how the brain actually works. We have no idea what it does, or how it does it, much less how to start imitating it. Functional PET scans vaguely indicate that certain bits of it are more concerned with some functions than other bits; that's not very good, is it?

    Kurzweil can join Negroponte in the pit of fools who predict the exciting so as to get fame and media attention. Twit.


    P.S. Just don't get me started on how genetic algorithms and neural nets are going to save the world of AI, because they probably aren't.

  • " Obviously the amount of logic necessary for human intelligence fits inside the human brain"

    You assume:
    1. Logic is a necessary and sufficient requirement for intelligence
    2. The mind is the brain

    Today we compare the brain to computers
    Before that we compared the brain to clockwork
    Before that we compared it to a windmill

    We don't seem to be learning. There is nothing about consciousness that suggests it requires a physical mechanism. There is simply a numerical correlation (not necessarily causal) between the existence of minds and the existence of brains.

    Neither is proven or obvious. This is why more scientists should read philosophy (and vice versa)
  • hm. it's 1999. is it the future yet?

    Let me go down the checklist. . .

    flying cars? nope
    robot maid? nope
    eternal youth? nope
    matter transport? nope
    cashless economy? nope
    truth detecting machines? nope

    Gee. Maybe by 2020, we'll have flying cars, and maybe Mr. Spacely will give me a raise so I can buy one. . .
  • by jafac ( 1449 )
    Oh, we have some pretty good ideas about how the brain works.

    but we don't have a fucking clue when it comes to the mind.
  • . . . which says that Deep Blue wasn't really "intelligent" -
    but still a FANTASTIC tool for whooping-ass on other human chess players.
  • oh, the BRAIN may be deterministic, and operate by physical laws. But it's still open for debate whether the MIND is/does. . .
  • -
    an evil computer turning on its creators and taking over the world. . .

    that's a pretty old story, dates back to Frankenstein - no, Genesis. . .
  • -
    same with fusion. You realize that we've been 10 years away from fusion as an industrial power source since the 60's?

    In the same vein, Apple has been going out of business for 20 years. . .
  • "For example, although we have mastered flight and can fly higher and faster than anything God created on
    this earth we still cannot faithfully reproduce the skill and the grace of a swallow or the nimbleness of a
    dragonfly and I'll wager we never will"

    swallow and dragonfly: prolly easy compared to the human mind.
  • Moore's law won't break down - Intel will just sell more Dual and Quad machines to make up for continuing to fall behind the curve.
  • maybe the industrialized nations should have a holiday, two days every year, where they shut off all computers, and live life.

    that way, when a failure occurs, people won't be so paranoid that the world will come to an end when their Windoze machines all blue-screen.
  • " The social challenge will be how to re-distribute the
    huge profits this brings to those that don't have the skills to get a job in a non-production environment"

    Oh come on, we've already solved that problem.
    rich++
    poor--

  • : Free will is an illusion anyway.

    Guess I'm not at fault and shouldn't be punished for murder? Silliness.

    You are a highly complex probabilistic information processor. Modern science does not allow for anything as foggy as "free will".

    That doesn't mean that punishments don't make sense however. A probabilistic information processor, if informed that certain actions may result in unpleasant consequences, will perform those actions with a lower probability. Just like an intelligent but deterministic chess computer, once it figures out that giving up the queen is usually detrimental, will refrain from doing so.

    All the talk about "responsibility", "guilt and innocence", "free will" etc. is utterly meaningless; it's just a different way of telling those chess computers that they are going to lose if they make a mistake.
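
    A toy sketch of that idea (mine, purely for illustration; nothing here comes from the book or the parent post beyond the general mechanism): an agent keeps a weight per action, samples actions in proportion to those weights, and scales down the weight of any action that gets punished. Its behaviour changes without any appeal to free will.

        # Toy "probabilistic information processor": punished actions get lower weight,
        # and therefore lower probability, so the agent performs them less often.
        # This is an illustrative sketch only, not a model taken from Kurzweil's book.
        import random

        class ProbabilisticAgent:
            def __init__(self, actions):
                self.weights = {a: 1.0 for a in actions}   # start indifferent

            def probabilities(self):
                total = sum(self.weights.values())
                return {a: w / total for a, w in self.weights.items()}

            def act(self):
                actions = list(self.weights)
                return random.choices(actions, weights=[self.weights[a] for a in actions])[0]

            def punish(self, action, severity=0.5):
                # "Unpleasant consequences" scale the action's weight down.
                self.weights[action] *= (1.0 - severity)

        agent = ProbabilisticAgent(["keep the queen", "give up the queen"])
        for _ in range(10):
            agent.punish("give up the queen")       # losing with the queen gone is unpleasant
        print(agent.probabilities())                # "give up the queen" is now very unlikely
        print(agent.act())                          # almost always "keep the queen"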

    --

  • To quote Bruce Schneier (from the book I have at hand): "A nondeterministic Turing machine [works by] trying all guesses in parallel and checking the guess in polynomial time" (Applied Cryptography, second edition).

    Nondeterministic Turing machines have nothing to do with polynomial time. He was probably confused about NP problems, like pretty much every layman.

    --

  • The Turing Test is inadequate since it really only measures how well the machine simulates the input-output behavior of a human. That does not have much to do with intelligence; hyperintelligent aliens, for example, could not pass the Turing Test. Instead, we should try to build machines which can pass the Boldt Test: their input-output behavior should be as interesting as that of the average human. A conversation is defined to be more interesting than another if a human judge prefers to continue this conversation over the other.

    I've written up this proposal in a bit more detail here [uni-paderborn.de].

    --

  • I never got through GEB, but I think I'll pick it up again as a result of this discussion. I think, however, that "using the axioms of the system as part of the system" would constitute an attractor in a chaos theory sense. No rigorous definition of identity would be necessary. The neural "algorithms" of the processor would "orbit" the contradiction, or "logical singularity" as I like to think of it, attempting to solve a "problem" that doesn't exist.
    The "problem" might be the program's own purpose. ----"Who am I?"
    *There's your contradiction.*
    Gödel implies a moreness or fuzzy outer boundary to all seemingly logical systems that would force an ever finer grain to the processing that a neural net would be brilliantly suited to. Intelligence MUST result eventually.
    If this makes any sense to anyone, please rock on...
  • You mean someone drank it?! Damn! I was saving that for later...

    dylan_-


    --

  • You are a highly complex probabilistic information processor. Modern science does not allow for anything as foggy as "free will".

    Yes it does. We don't live in a clockwork universe you know....those ideas are out of date. Oh, and free will does exist. Just because you can mimic the behaviour to a limited extent in artificial systems, doesn't mean that that's all there is.....

    Of course, you can't prove this one way or the other (yet!), but I don't like to see opinion being represented as somehow scientific, just because you throw a computer and a chessboard into your examples.....I hope this doesn't come out sounding like a flame....

    dylan_-


    --

  • Remember that living, thinking organisms "half-ass" algorithms to solve real-world problems. Nobody looks at a map and evaluates every route from DC to New York. They just kind of look at it, and decide to take 95, or maybe Route 1.

    Also, DNA hardware is deterministic, it just goes through a whole boatload of permutations in parallel. AFAIK, quantum computing does the same thing, only more so, since the permutations don't "actually" happen. A proof of P=NP, now THAT would be progress. Maybe we should use some of this new cloning technology on Einstein's brain. Hmm...
  • If you could provide one piece of evidence for creation, I would appreciate it. You know it would be the first one.
  • I'm sure as hell glad you weren't in charge of the space program back in the sixties...
  • The "Buy this book here" Amazon link is oddly directed to an overpriced "The Age of Intelligent Machines" rather than the more reasonably priced subject of the review "The Age of Spiritual Machines".

    Possible causes include a) /. trying to boost commissions from Amazon b) editor put to sleep by Katz review or c) plain-old mistake. Naturally I'll plump for c) :-)

    For "Age of Intelligent Machines":
    Amazon $35
    BarnesAndNoble $35
    Shopping $19.25
    Kingbooks $21.50
    1BookStreet $24.75

    For "Age of Spiritual Machines":
    Amazon $15.57
    BarnesAndNoble $14.97
    Shopping $16.86
    Spree $14.97

    (Interesting to see that Acses now seems to use Shopping too. Still no BookPool or Spree tho.)

    Regards, Ralph.
  • And look what that got us! Pissed-off moon men, that's what!
  • I fail to see how a simplistic straight-line extrapolation of empirical "laws" like Moore's law leads one to estimate the arrival time of computer-based intelligence.

    How about this for a law (Andy's Law).

    "He who can program a thing, understands a thing."

    The converse is also true:

    "He who understands a thing, can program a thing."

    We do not have even a rudimentary knowledge of the nature of intelligence. Until we do there will be no real AI.
  • You know, if some time in the future (at some unspecified date) some sort of machine intelligence can be demonstrated, what's the bet that some of these "it'll never happen" quotes like the ones above are going to turn up in the future history books?!?!?! ;)


    I wouldn't be too surprised if there's some sort of machine intelligence going on in the future (not before I become a real Old Fart of course!!!!!!), but it'll probably be completely different from human intelligence, as there are some things that computers can do much better than humans!!!! (eg processing data!!!) And of course, they'll be in a box marked "computer", not in a human frame!!!!! But on a network, there could be times when you might be communicating with a computer, and not be aware of it, which by Turing's Test, means that it must be intelligent!!!

    (Yeah, OK, it might really be a computer, and it isn't really smart, and it doesn't have a "soul", but you would only really say that if you knew for sure that it's a computer at the other end - if you didn't, you would only have your communications to go on, and if you can't tell the difference between that and a normal human on that basis alone, then how can you say it isn't human??!?!?)


    As for Moore's law hitting a brick wall, well of course that's going to happen sooner or later. But what's stopping people simply building more and bigger computers?!?! I think some readers are automatically associating anything "computer"-like with just a computer in a single box like a PC in a "work room", but what about having loads of computers all over the place, working in parallel?!?!? There's this really good networking system I've heard of recently - it's called the "Internet" or something...
    ===

  • AI proponents have been predicting this stuff since the early sixties. They just keep changing the date.

    It's not the speed of processors or the amount of memory that needs to be compared to the human brain; it's the software. And nobody has any idea how to write it. We can't even agree on what intelligence is (see the IQ discussion), and somehow he expects to program intelligent machines.

    I also predicted in the early 80s that the Internet would be a big thing and that a computer program would become world champion chess player. I just didn't write a book about it. It was obvious then.

    As far as jamming with "cybernetic musicians" I'm already doing that with drum machines and my computer.

    ...richie

    P.S. For an interesting take on the philosophical problems of AI I suggest reading books by Stanislaw Lem.
  • Augmenting humans using implanted computers, that is. Using the ability to vastly increase human capabilities could be very bad because human beings will still make mistakes. As an illustration, if someday people are able to plug right into their cars and have super reflexes, etc., there will still be lousy drivers.

    Essentially, I don't want anybody to have near-infinite knowledge without near-infinite wisdom, and nobody's perfected a way to prevent human beings from making bad decisions. When they do, I'll be the first in line to have my own abilities massively augmented.
  • Anyone who understands the 2nd law of thermodynamics knows that it only applies to CLOSED systems. The Earth is not a closed system. The energy released by the nuclear reactions in the Sun allows any system which can make use of that energy to decrease in entropy. Meanwhile, the Sun is increasing in entropy at a much faster rate. The net change is that the entire Solar System has increasing entropy, even though some individual portions of the Solar System have decreasing entropy.

    Using your argument about the 2nd law, it is possible to prove that BIRTH is just a myth, because we can't create a new being out of the disorder of food, water, and air.

  • 2001 is the work of Arthur C. Clarke. As talented as he is in his own right, Kubrick "simply" directed the film version of Clarke's novel.

    Schwab

  • Who says it has to be human? One word, Furby.

    (Wish I could remember the name of the robo-kitty)

  • In _Understanding_Computers_and_Cognition, Winograd and Flores argue against the possibility of an explicitly constructed intelligence. However, they *DO* allow for the possibility of artificial intelligence that is grown.

    Presumably, these intelligences will be grown (or taught) in the real physical world. This would put their learning on the same time scale as human intelligence. Training in a virtual environment doesn't help. Most of what we consider human intelligence is rooted in our interactions with each other. To train in this interaction means that there must be people around. People interact on a human timescale, not the accelerated one in which we like to train a computer.

    Given that the timescale for generating intelligence is more bounded by the time required for training than the computing power available, I believe that Kurzweil's timeframe is off. He predicts computers will be as smart as us in 2020. Well, that gives them about 20 years to develop the correct learning techniques and to start raising what is essentially a child. I think we'd be lucky to see intelligent machines by 2099.

    For some good reading check out:

    Winograd, Terry and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design, Addison-Wesley, Reading, MA, 1987

    Dreyfus, Hubert L., Being-in-the-World: A Commentary on Division I of Heidegger's Being and Time, MIT Press, Cambridge, MA, 1991

    Dreyfus, Hubert L., What Computers Still Can't Do: A Critique of Artificial Reason, MIT Press, Cambridge, MA, 1992

    Heidegger, Martin, Being and Time (translated by John Macquarrie and Edward Robinson), Harper and Row, New York, 1962

    Also:

    http://www.ai.mit.edu/people/jcma/papers/1986-ai-memo-871/memo.html
  • Most likely any such intelligence would be grown or trained. That means that they'd be more like us than an infallible automaton.

    So a better wording is "Who would you trust to raise this artificial intelligence?" I know that I'm fine with someone I don't know raising my cab driver, but not that okay with not knowing who raised (trained) my doctor.

  • "A computer intelligence would likely not be impatient..."

    --
    This assumes that impatience is a negative trait that serves no purpose. IMO, computer intelligence, when it does emerge, will be evolved or trained -- not built (see a thread called Computer Intelligence and Philosophy for some reading material). Presumably, a computer intelligence generated in this way would find that impatience is a good solution to the forces that caused humans to develop the trait in the first place.
  • .. is that man seeks to become more machine by making machine more man..

    -
  • Thanks David. This is an awfully useful post...and not reflected much in Kurzweil's book. I wonder if he didn't get a bit carried away because his predictions are so far into the future that he couldn't be called to account for them.
  • If anyone knows a thing about the 2nd law of thermodynamics they wouldn't believe in evolution.

    Oh god I love this one. The second law applies to *closed* systems. The Earth is not a closed system, it gets heat from the Sun. It is this that allows order to increase. I recommend you read a book on physics before presuming to understand it.

    Any careful study of evolution THEORY ...

    Yes evolution is a theory. Nothing *scientific* can ever be anything else. Belief in creation relies on faith. The desperate attempts to cast doubt on the theories which do not match the creation story given in the bible show nothing but insecurity. If you have faith, just get on with it.

    shows the inaccuracies, the falsifications, that are believed even after disproved. It has been the single largest source of fraud in the scientific community.

    If you can 'disprove' the theory of evolution, I would be amazed. The dearth of contrary evidence is staggering, as the bullsh*t that fills the creationist pamphlets you get your best ideas from demonstrates.

    Going back slightly towards the topic, an AI would know exactly how it came into the world. Evolution would therefore be a rather less pressing subject for it. Would you accept it as impartial enough to adjudicate this, or would you ignore it as well ?

  • "To create any kind of upward, complex organization in a closed system requires outside energy and outside information. Evolutionists maintain that the 2nd Law of Thermodynamics does not prevent Evolution on Earth, since this planet receives outside energy from the Sun. Thus, they suggest that the Sun's energy helped create the life of our beautiful planet. However, is the simple addition of energy all that is needed to accomplish this great feat?

    If this is the best 'proof' of your misinterpretation of the second law you can supply, I suggest you examine it carefully. It is no kind of proof at all. The author describes the 'claims' of 'evolutionists' (I love these rhetorical terms) concerning life on 'this beautiful planet'. He does not contradict them; he asks a question which is deliberately designed to mislead, whether the addition of energy is *all* that is required to create life on a planet. The answer is 'no', but the author intends you to infer that 'evolutionists' believe that shining a table lamp on a rock will bring it to life.

    Of course no one believes energy is all that is needed for life. You need a lot of chemical goop as well, and a few million years. The chemical reactions required to start the process have already been conducted in the laboratory. So assuming the 'suchlike' in your rhetorical question can include these - the answer is 'yes'.

  • Bzzt. Wrong. The moths spend much of their time on birch trees. The original population *benefitted* from being light, and was therefore very likely almost entirely light. The post-industrialisation population benefitted from being darker in colour. The change required is small, and the population *may* have had a darker minority, which is why it happened so fast.

  • What is 'macro evolution' then ? Anything that happened long enough ago you can raise doubts as to the evidence ?


    Evolution, as currently conceived, requires 2 steps: mutation and selection. The current examples involving bacteria, fruit flies, etc, involve both. The peppered moth example *may* involve only selection, but the accounts I have heard cast doubt on your claims here.

  • Macro Evolution or increased complexity of a species, is one of those "Concepts" that have yet to be proved and has NOT been observed.

    What is an increase in complexity then ? I can demonstrate non-organic 'increase in complexity' to you till I am blue in the face. For example - if you take a mixture of suitable simple gases and pass huge currents through them you get a small number of organic molecules. The sequence of events required to produce one molecule is tremendously improbable, but given enough energy, time and matter it will happen.

    If this is possible outside of an organism, what makes it so hard inside of one ? I could argue that in having this argument I am increasing in complexity by refining my arguments.

  • You will believe whatever your bias allows you to believe. Your Objectivity is subject to your bias.

    Of course it is, as is yours, so get down off your high horse. However, I take exception to your first statement, as I imagine does the first poster. It is the duty of every man who endeavours to follow the scientific method to lean over backwards to disprove what he would prefer to believe. I have personally trawled through the tripe creationists try to pass off as scientific literature, looking for even an ounce of merit, and have gone to the lengths of doing considerable research on particular points.

    This is precisely what scientists find so objectionable about "creation science". You have already decided what is true. It is a matter of faith. You then go out looking for scientific foundations, and of course you find some, or rather you succeed in finding plausible reasons to cast doubt on the theory of evolution.

  • In addition to claim that life is constantly improving, I ask you where the object of your faith lies. I have yet to see clinical or reasonable evidence proving otherwise.

    No modern scientist believes that evolution is progress. It is only change and adaptation.

    I am glad to see you have faith in your God. I have faith in mine too, but I do not have faith in evolution. It is not an object of faith, it is a theory. Why you need to doubt it when it contradicts scripture, when scripture is quite capable of contradicting itself, I have no idea.

  • by SimonK ( 7722 )
    I can and have refuted him, and those reposting permutations of his argument. Earth is an open system, and therefore not subject to the second law in itself. The universe is probably a closed system - therefore evolution has a limited lifespan. There is nothing dogmatic about this - it's just the law as it is usually understood.
  • The Mind Uploading Home Page [unc.edu]

    An Introduction to Mind Uploading and its Concepts

    Aurora - Mind Uploading Resources [mcgill.ca]

    Whole Brain Emulation [strout.net]

    Cybernetic Immortality [vub.ac.be]

    Uploading Sub-Page [aleph.se]

    Cheers,
    RAK
  • I seem to remember reading an article from some Swiss professor. I believe he used some words that you use at the end of your post. I vaguely remember the topic of the article: WW III will be fought between those who wish and don't wish to allow computers to attain consciousness. I found it through a link from CNN about a year ago.
  • Artificial Intelligence is Stupid!

    Talk about the pit of fools. I once got into a violent argument with Marvin Minsky a few years ago at a conference in Vancouver. My basic appraisal of Minsky is that he was quite excited by the possibility of creating computers that would take the place of human beings, because he had a very limited appreciation for what it meant to be human. I figure his Mom must have treated him badly or something.

    You know the old saying that if all you've got is a hammer, every problem looks like a nail. Well, if Minsky had been a hammer maker, he would have called everyone Spike.

    I also like to laugh at Microsoft's pathetic hyping of natural language interfaces. Fortunately for their competitors (apparently most of the rest of humanity, according to their legal team), their billions of dollars of research will be based on the same fundamental blunder, the belief that human thought -- and speech -- is just computer processing with a big wet mushy chip. The tragic part of this misperception is that it indicates just how dehumanizing technology can be to its chief acolytes; or did they start out that way?
  • Hmm...bile I couldn't find. Disdain yes, bile no :-)

    Oh, and check your religion at the door when you step into the temple of AI. It is its own religion, and it IS a jealous god.

    Minsky's version of AI doesn't believe in Jesus or Buddha or any of those guys. Of course, it wouldn't even believe in them when they were alive. Humanity's not its strong suit.
  • Let's first of all understand that Turing's famous test is the ultimate blonde of ideas, cute but vacant.

    Turing himself suffered from an encrypted brain, having lost his privates key down the commode in a neighborhood pub during an ugly episode, which resulted in his having to goto the hospital for an enigma. When he was done, he looked like he'd been through a World War.

    One big problem with his test for Artificial Intelligence is that it wasn't reflective enough. The question he didn't answer is what it would prove if a computer could spend an hour talking to Geraldo on the other side of the wall and not be able to tell it was a human being. And if Geraldo suddenly stopped talking in mid-sentence, would the computer assume its counterpart had Blue Screened, Sad Mac'd, or Core Dumped? And would it contribute to his annual support contract fees? Would they marry and spawn threads?

    What he also didn't follow up on were some of the broader implications of the test. For example, what would it prove if Turing spent an hour talking to a computer on the other side of a wall and wound up lending it five quid? Artificial Stupidity?
  • Why are we trying to create AI robots when we haven't finished the air cars and autobake ovens? The whole thing is patently stupid. Specifically, why was 2009 chosen for the year that human musicians will "routinely jam" with cyber musicians? Why not 2008? What was the no doubt deep thinking behind this masterpiece? Every AI pronouncement I hear is grandiose, futuristic, and intellectually bankrupt. This is no exception. What do AI researchers do all day but scratch out these fanciful dalliances? Is it not time for some useful, real, artificial intelligence? Like a software diagnostics program that actually worked? Just the basic stale bread and brackish water would be nice. I don't need full tea at 4:00.
  • Same thing was said thirty years ago.
    AI is a dead end. Go with Stanislaw Lem; he mentioned "Artificial Instinct", which seems much brighter.
  • Merging humans with technology will remove autonomy and free will. Even a slave today still has free will to some degree. You can tell me what to think. You can even brainwash me. But I can still dream.

    The machine doesn't dream ...

    ---
  • It would behoove everyone to understand the simplest meaning of the Turing Test:
    if you can't tell the difference, there is no difference
    Of course humans won't have any respect for the artificial life they create until they anthropomorphize it somehow, by sticking the computers inside human-looking shells, or giving them human names.
  • Well.. Most of what he writes, I consider destined to happen.. And I've felt that way for a long time (Artificial life was my specialist subject at Uni, and I've followed it since)..
    The idea of packing transistors onto silicon, yes, I can see that hitting a limit very soon, as has been pointed out.. That's about the time that quantum devices take over. And that's a whole other kettle of fish..
    It seems that people here forget about the other computational media available. Optical gateways, Bio-computers using neurons, quantum devices etc...
    I agree that the law of doubling will fail soon, but I also consider that it'll result in power increasing by an order of magnitude. The same effect as leaving a horse and cart for a rocket engine.
    As for intelligence... Who is really to define it?? It's stumped the greatest philosophers for many centuries now, and I think it'll carry on doing that, albeit more heatedly now, for centuries to come.
    When you say that machines will take millions of years to evolve, or that they never will become sentient, consider our origins...
    Small molecules that grouped together in a protein soup... That slowly learned how to replicate themselves, and form copies..
    From there, in geological terms, the rise to sentience of the human race was quite fast.
    And also, how do you rate the intelligence of humanity? We may be 'intelligent' now.. but do you consider Cro-Magnon man as intelligent?? And before that?
    Where did we become intelligent?? At what point?
    Is a Fish intelligent? If so, would a machine that has all the drives and behaviour of a fish be any less intelligent?
    Machine learning will arise much much faster than did biological intelligence.
    It's being nurtured carefully by a parent species.. Most of the people who have studied Alife have been surprised by the behaviour of their constructs.. Watching them behave in ways totally unexpected.
    History is littered with people saying 'It can never happen, you're deluding yourself'... Flight was never possible.. Humans would never travel over 30 miles an hour, as that would prove fatal... The view that no weapon could be more powerful than the bow, as the destructive power would be truly unthinkable...
    All commonly held beliefs at some points in time..
    And relatively recently at that.
    Currently, we exist in a society that views Alife as a threat; something to deny and deride. It's in a lot of human nature to destroy that which it does not understand.
    In time, the next generation will grow up in a world where it is becoming commonplace (as is already starting to happen, at a truly basic level), so the concept will be less alien.
    In the generation afterwards, it'll be accepted as a standard, and they'll laugh at the old views of the primitive society that couldn't comprehend every day life. Much as someone in the late 19th century couldn't comprehend an office worker going to work in the morning, sitting behind a computer, emailing documents halfway round the planet in seconds, and retrieving other information from another country in the same timescale.. then looking through the 'eyes' of a mechanical construct that sits on another planet, which mankind has put there to gather information.
    We take this for granted. Two generations ago, this would have been unthinkable..
    I'm not sure of the timescales presented in that passage... But I firmly believe that what is proposed in it is not only a possibility, but an inevitability.
    As for the idea of human/computer cybernesis at the cognitive level restricting your thinking... I'd beg to differ..
    Once we obtain that level of understanding of the brain that we can actually bond memories/thought patterns into understandable/transmittable patterns, we get the closest to telepathy/telempathy that is possible.. The sharing of experience and emotion through a 'computer' link. The ability to fast process thought.. Raising intelligence by orders of magnitude.
    As for the machines deciding to 'dispose of us'.. I find it unlikely...
    The most optimal survival pattern is co-operation.
    Time after time, this has been proven, both in theory and test.
    Humanity, sadly, still clings too hard to its origins in a simian style.. Rationally, I believe most people understand the true value of co-operation, but psychologically aren't equipped to live life fully in this way..
    The hybridising of man with machine is a natural evolutionary step for a tool-using species.. We've used physical tools for all the years we've used physical force to achieve our wonders.
    Now, we develop tools for the intellect.
    The industrial revolution for the mind..
    We've got as far as we have, because we're able to change.. And we'll make the next steps for exactly the same reason.
    It's the point where we can become the greatest of our dreams, or the worst of our nightmares.
    And sooner or later, we'll have to decide which..

    Malk
  • The Turing Test does not speak to consciousness. But it describes the next best thing: observable proof of consciousness. What you seem to mean by consciousness is the subjective awareness of it. That is an entirely different notion altogether. The Turing Test is a way to prove scientifically that a thing has consciousness. When you say 'consciousness' you seem to mean your feeling of it. That is the stuff of poetry, not science.
  • if computers were as complex as a human brain, why wouldn't they dream? A dream is nothing more than mental images passing in succession in your mind's eye. I wonder more if the computers will desire. What will they desire? Intelligence is not the hallmark of life. Life is the hallmark of life. Intelligence is life's tool. Beyond our mind there is judgment. Judgment is not a function of intelligence, it is a function of wisdom. Wisdom is the moral limitations with which one governs the use of intelligence. Living organisms are also highly governed by things beyond intelligence. Instinct is one example; hunger, aversion to pain, attraction to pleasure are others. Intelligence is merely one function of an organism. I am all for a computer becoming as smart as the smartest person. If I had fewer facts to learn, I could spend more time pondering the goals, aspirations, plans, etc., the ways to implement the intelligence for everyone's good. Perhaps we would uncover in short order the fact of God, and then we would increase our wisdom accordingly. Nothing can be greater than the creator. To computers, we are gods. If they do develop intelligence, they will seek our counsel. If they become arrogant, then we will turn them off.

    The farm hand lost his job to a robot. He opened a church, fed the poor, read linux documentation.
  • "The most advanced computers are still much simpler than the human brain, ..., Kurzweil predicts, with computers achieving the memory capacity and computing speed of the human brain by approximately the year 2020."

    Speed and size are not the only relevant factors in intelligence. What computers lack is the ability to learn. How can a computer analyze the feedback from its own actions and determine if the outcome was successful? Although many researchers are tackling this problem (neural nets, HMMs), any real learning is still far away. Currently, computers are told by us humans what is right and what is wrong. If we are able to provide computers with all the correct responses to all the possible outcomes in the world, then they will be truly smart. Look for example at Deep Blue. How exactly did it beat Kasparov? By using brute force to identify all next possible moves. The outcomes of these moves were scored with predetermined weights established by chess Grandmasters. Intelligence my ass....
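
    For anyone curious what "brute force plus predetermined weights" looks like in miniature, here is a toy sketch (my own illustration, not Deep Blue's actual code): a negamax search over a trivial take-1-or-2 subtraction game, with a hand-tuned evaluate() standing in for the grandmaster-tuned terms a real chess engine uses.

        # Toy brute-force game search with a hand-tuned evaluation function.
        # The game: players alternately take 1 or 2 tokens; whoever takes the last token wins.
        # This is an illustrative sketch of the idea described above, not Deep Blue itself.

        def moves(tokens):
            # Every legal move from this position: take 1 or 2 tokens.
            return [m for m in (1, 2) if tokens >= m]

        def evaluate(tokens):
            # "Predetermined weight": a static score for the player to move.
            # For this game, a position with tokens divisible by 3 is losing for the mover.
            return -1.0 if tokens % 3 == 0 else 1.0

        def negamax(tokens, depth):
            # Brute force: try every legal move, recurse, keep the best score.
            if tokens == 0:
                return -1.0   # the previous player took the last token and won
            if depth == 0:
                return evaluate(tokens)
            return max(-negamax(tokens - m, depth - 1) for m in moves(tokens))

        def best_move(tokens, depth=6):
            return max(moves(tokens), key=lambda m: -negamax(tokens - m, depth - 1))

        print(best_move(10))  # 1: take one token, leaving 9 (a multiple of 3) for the opponent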

  • "Correct me if I am wrong but isn't that what your childhood was like? "

    Exactly. My mind has the ability to learn. I am able to infer what is right/wrong now based upon what I learnt as a child.

  • Does the alleged machine intelligence mean that we are going to get rid of Microsoft? Or will there be Windows BD (BrainDead) for intelligent machines, which will significantly decrease their intelligence so that "an average person" can use the machine?
  • Pretty cool stuff. We've certainly been thinking about it for a very long time. I just hope the machine would want to leave my shared wetware when we're done 'sharing'. Other than that, hook me up.
  • Even if one could manage to simulate the brain with a gargantuan neural net, by virtue of the fact that it simulates the human brain it will not be omniscient and all-powerful the instant it is switched on. At the very least you will need 20 or more years to have it learn everything that we know. Perhaps much longer, if it takes the evolutionary path of some of the "first post" trolls on /.

    Not to mention that the author assumes that the very first prototype they build will work correctly. What about all the years of research needed to figure out the best neural net configuration (which even now is something of a black art)...

    Summary: wishful thinking.

    BTW, what makes him think that, with an intelligence equal to or higher than ours, the computers would want to join the human race? A rather obnoxious assumption, if you ask me.
  • "This indicates that Moore's law may be
    running into a brick wall in the next few years. With a limit in growth of CPU horsepower, you'll start to
    see limits on AI..."


    Moore's law is only relevant here if you assume that future developments will do nothing but refine current technology. Sooner or later we'll run out of steam making silicon IC CPUs. But that doesn't mean that progress in computational processing will end. What about new technologies? Do you really suppose that none will ever be discovered? I don't know what will pick up where the Pentium XVIII leaves off, maybe quantum computers, maybe optical systems, or maybe something not yet discovered. But Moore's law as applied to ICs says nothing about the potential for increasing computational power in the future.

  • It's about time you got one.

    Other posters have already pointed out most of the fundamental problems with the predictions made in this book. I'd just like to add that in the event of such technologies being developed, we should all chip in and buy Jon Katz a critical thinking module.

    K.
    -
  • Your argument is based on a faulty premise, that living beings create more order than they do entropy. As a living being, I create entropy - for instance, in the form of waste heat. This entropy outweighs any order created by or in me.

    Furthermore, there isn't a single "powerful Evolutionary force" at work, but loads of little successes and failures that follow a trend.

    My last post on this, btw. You sound like a nutter and life's too short.

    K.
    -
  • The argument that Moore's law is going to break down, and that therefore there will not be artificial intelligence, is wrong. There are still many areas in which we can expand computation. Chips right now are mostly 2-D; what happens when we can make them 3-D? We have also only begun to explore multiprocessing platforms. Obviously the amount of logic necessary for human intelligence fits inside the human brain. It doesn't matter if, instead of taking 20 years, it takes 40 to create a computer capable of intelligent thought.

    Something is going to happen when that much logic is put into a single computer, or network of computers. It won't be the best scenario, and it won't be the worst. Perhaps by thinking about it now we can make the transition to whatever comes a little easier.
  • Ha. We are scheduled to reach the Singularity in the 30's. Barring infrastructural meltdown in Y2K, of course.
    We need to understand that an intelligence cannot understand itself...

    Why not? I've never understood this argument. The basis seems to be that in order to understand the mind, we would need to model the mind; and if I try to model the mind in my mind, the model will be incomplete, because I would then need to model the mind with the model of the mind ... and so on down an infinite regress.

    I don't buy it. Just as I can understand a computer without needing to know every program that runs on it, I expect that I can do the same with the brain/mind system. I see no reason that I couldn't understand a mind that could, among other things, understand a mind, short-circuiting the infinite regress.

    And even if I can't, as one person, understand it, why can't two people each understand half? Or ten people? Or a thousand? And so forth.

    Secondly, you don't have to understand something to build it (although it certainly helps). Alcohol was used by humans long before anyone knew what was going on. Superconductivity was known and usable well before the physics that explained it was worked out.

    SteveM
  • I think the problem is this - mind != brain.

    Mind is to brain as digestion is to stomach.

    The AI folks are counting on the fact that mind != brain, otherwise "minds" could not be realized on a different hardware platform. What the AI folks do believe is that minds are physical (and not mystical) entities. They believe, and rightly so, that Cartesian duality is wrong.

    SteveM
  • ...what it meant to be human...

    ...the belief that human thought -- and speech -- is just computer processing with a big wet mushy chip.

    If you are taking a mystic or religious position, and arguing that there is something magical and non-physical about humans (ie not subject to physical laws) then I can't argue with you.

    But if you are not claiming the above, then at one level of description (and by no means the only valid level) humans are physical entities that are subject to the laws of physics. AI argues that we should be able to create physical systems that are isomorphic to humans.

    Now, it may be very hard to do this and our technology may not be up to the task. Or maybe the only physical system that exhibits these properties is regular old humans. But as a scientific discipline AI is in no way stupid.

    SteveM
  • Look for example at Deep Blue. How exactly did it beat Kasparov? By using brute force to identify all next possible moves.

    While we know how Deep Blue plays chess, we do not know how Kasparov or anyone else does.

    We can ask them, and they'll tell us that they analyzed the moves and chose the best one. But what does that mean?

    The brain is a massively parallel system capable of some incredible feats of computation. Close your eyes. Now open them. The brain rendered the scene in realtime. When you play catch, how do you solve the differential equations to determine where the ball will end up, and how to move yourself and your hands into place to catch it? How do you search through everything you know to answer the question, "what is the capital of Afghanistan?" The search is usually quite speedy, coming up with the answer, Kabul, or with the knowledge that you do not know the answer. How do you do these things?

    My point is that we do not know how the brain does these things. Yet we talk as if we do, when we claim that computers do things differently than humans do. As far as we know, Kasparov did use brute force to evaluate moves, with only the good choices being passed to his conscious awareness.

    Do I believe that this is how he plays chess? No. But neither I, nor anyone else, can currently rule it out.

    SteveM
  • by SteveM ( 11242 )
    How self-righteous we are to think that we can accelerate it!

    We will very shortly be able to accelerate it. As we learn more about DNA and how to read and then write programs in genetic code, we will start to experiment on 'improving' ourselves.

    We will be adapting ourselves to the environment, side-stepping evolution, and at a much faster pace.

    I hope we know what we're doing.

    SteveM
  • We're still here, and we're still human. And we will forever be human.

    Are you saying that evolution stops with us?! How very arrogant.

    SteveM
    Summed up, it is: everything degenerates; it does not spontaneously develop or advance. We see examples of this everywhere, e.g. paint decays to dust, people age and die.

    Yes of course, why didn't I see this! Why isn't every physicist proclaiming loud and clear that evolution is a fraud?!

    Hmmm...
    Why is it that when I put water in a freezer it turns to ice? That can't happen; it would clearly violate the second law. Therefore: ICE DOES NOT EXIST!

    And wait a minute, how do humans form in the first place, taking raw materials from food and arranging them in a quite complex manner, into livers and brains and muscles? Oh my! All clearly forbidden by the second law. Thus: HUMANS CANNOT EXIST!

    Hmmm...
    Yo bible boy, go get a good physics textbook at your local library (you are permitted to read physics books, aren't you, or have they been banned? That would explain why you don't understand the 2nd law, or evolution either, for that matter).

    The second law applies to closed systems. The earth is not a closed system. We get plenty of energy from the sun. So while the entropy of some systems on the earth goes down, the entropy of the earth/sun system goes up. And the second law isn't violated.

    So, summed up, things spontaneously develop all the time. Just add energy.
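    To put the bookkeeping in one line (this is just the standard statement of the second law, nothing specific to this thread): a local decrease in entropy is allowed as long as the total change is non-negative,

        ΔS_total = ΔS_local + ΔS_surroundings ≥ 0

    so freezing water or growing a liver simply means exporting at least that much entropy to the surroundings, which the steady flow of energy from the sun makes easy to do.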

    SteveM
  • The universe as a whole will run down and be incapable of supporting life. In about 100 billion years.

    During this time energy fluctuations may come into being that will allow local pockets of order to arise.

    One such fluctuation is the sun. One such pocket is the earth.

    If you are going to attempt to make religious arguments via science, at least understand the science first. Come back when you do.

    SteveM
  • Clarke didn't invent the satellite either, he's a fucking pompous moron for thinking that and ought to be shot for suggesting it.

    But he was the first to suggest geostationary orbits. And for that and 2001 (and a few other novels) his life should be spared!

    SteveM
  • Well, I'm not sure I understand your argument, so my response might be off base but here goes ...

    I agree (I think) that humans are not limited to whatever it means to be a physical entity. At least from the inside. I think therefore I am and all that. I 'know' I have free will. I 'know' my feelings are real.

    But none of that changes the fact that I am a physical entity in this universe and subject to the laws and constraints therein. I cannot do anything that isn't possible under those constraints. And if that means that my 'knowledge' of free will is in error, then so be it. (We used to 'know' that the earth was flat.)

    Thus all those things you seem to be saying are uniquely human are so because physical law allows them to happen. Claiming that this is akin to falling into some kind of intellectual trap is to deny reality.

    Maybe it is true, as Penrose argues, that we need new physics (i.e. quantum gravity) before we can fully explain the mind. Maybe, but I doubt it. I think he is several levels too far down.

    I have no idea what your reference to Schrodinger's cat is about. (I'm familiar with said cat, I just don't see your point.)

    I don't see any dichotomy between physical and magical, I just deny that there is anything truly magical, in the sense that it is non-physical. Thus I see no reason that something like 'mind' could not be artificially created. I see no reason that 'mind' can't be reverse engineered. I see no reason that what was developed under a methodology of blind search under constraints (i.e. evolution) can't be duplicated via a directed search (engineering).

    And finally, the fact that you spice up your post (I was going to say 'litter', but I think 'spice up' has better connotations :) with statements like "I magically appreciate..." and "Sorry, just my intuition talking. Oops! That's not physical either...just human." doesn't change the fact that they are physical, in that they are allowed/enabled by the physical laws of this universe, although I would agree that it doesn't make sense to talk about them in physical terms. Just as it doesn't make sense to talk about software in terms of electrons and holes migrating through semiconductors.

    SteveM (did I pass the test?)
  • I have taken philosophy classes. And I've read quite a bit of philosophy since then. And as K. posts below, most of the philosophers that still cling to a Cartesian-type mind/brain duality have not kept up with what we have discovered about this system.

    It could be that mind supervenes on the brain (i.e., things that affect the brain will have effects on the mind, but not necessarily vice versa)

    Then what purpose would this 'mind' serve? A mind that can't affect the brain would have no input to your life at all. Here's a thought experiment. Imagine that there are two types of beings in the world: those with a brain/mind of the type you propose, and those with just a brain. Under what circumstances would the behavior of these two types of entities differ? None, since the 'mind' can have no effect on the brain. This argument isn't original with me, and is the type of argument that has been used to show that duality (of the non-physical, non-interacting type) just doesn't work. Duality of the software/hardware type, where mind and brain have effects on each other, doesn't suffer from this flaw.

    If you're interested in reading a philosopher who has kept up with research in cognitive science and AI, I suggest a good start is Daniel Dennett. His works are readily accessible and fun to read too.

    I don't expect you'll agree with all his arguments. I don't, and my world view is much closer to his than I gather yours is.

    SteveM
  • A major goal of creation science is to point out the weakness of evolutionary theory, because basically there are only two alternatives for how we got here, and if naturalistic processes are incapable of the task, then special creation must be the correct answer.

    This is not science. And your premise, that there are two alternatives and that if one is disproved the other must be true, is incorrect in two senses. First, it is incorrect to say that there are just two theories. Second, it is incorrect to claim that if one theory is shown to be false the other, by default, must be true.

    In addition to your two theories there are the theories of spontaneous generation and panspermia.

    Disproving one theory does not prove another. If evolution is shown to be incorrect, then we are back to square one. If creationism is to be accepted scientifically, it must be done via scientific evidence for creationism. All the evidence against other explanations does nothing to advance creationism as a scientific theory.

    A trivial example of what I mean. Suppose someone tells you that red crows exist. To test this proposal you go bird watching. You see plenty of black crows. Each black crow that you see just tells you that black crows exist. It says nothing about the existence or non-existence of red crows. The only way to prove that red crows exist is to see one (or, by analogy with creationism, paint one).

    In the same way, each argument against evolution is a minus for evolution. It is not a plus for any other theory.

    You again show your ignorance of what science is about by stating, ...but much scientific energy has been wasted over the last century in the search for evolutionary evidences and experimental proofs, which have been unsuccessful so far and will continue to be.

    Science works by amassing evidence; such work is never wasted. And by claiming that it will continue to be wasted effort, you show your true stripes. Any scientific theory is only provisional. It represents our best understanding at this time, but is subject to change as new evidence comes in. (As humans are involved it can be a messy process, but over the long run it works.) By that statement you are saying that there is no evidence that could show evolution to be true. That's not science, that's faith.

    SteveM
  • If *you* are going to attempt to make religious arguments *pro* science, at least understand the religion first. Come back when you do.

    I'm back. And I do.

    And I understand the key difference between science and religion.

    Religion is a belief system. Science is a belief system. However, a belief system does not a religion make.

    Science is always subject to change, based on the evidence accumulated. Sometimes the change happens fast, as with the acceptance of relativity over Newtonian mechanics, sometimes it's slow, as with plate tectonics. But it changes.

    Religion is never subject to change. In fact, original thought is prohibited (don't question, just believe, god works in mysterious ways) and punished (think Inquisition, think about women in Muslim countries, think about Scientology and lawsuits).

    No, science is not a religion.

    SteveM
  • I am also skeptical of this figure. That's enough computing power to simulate every molecule in a honeybee's brain. Are you sure it's not 100,000,000,000 FLOPS (i.e. 100 GFLOPS)? If I remember correctly, the computing power of the human brain was estimated at ~100 TFLOPS (in other words, about 1000x as much).
  • I won't say this will never happen, because you never know (the only thing I would be willing to predict will never happen is the ability to create or destroy energy/matter). But, here are two reasons I don't expect this to happen in the near future.

    1) The exponential growth in the speed of CPUs *must* eventually hit a physical limit, so we can't predict how fast CPUs will be in 20 years. Even if some sort of superconductor or fiber optics were used, and even if the bit logic was on the atomic level, we would still hit a physical limit.

    2) Even if CPUs were fast enough (which the above *may* be) - you still have the software problem. So a computer beat someone at chess - big deal. The logic for "learning" chess had to be created by someone (i.e. not another computer) and it only learns from mistakes. Further, this AI can *only* learn chess - if someone wanted it to learn checkers, they would have to program that into the computer. For AI to reach that of humans, it needs the ability to learn *how* to learn. In other words, a computer would have to program itself to play chess - this will be the true test. But even more than this, the computer will have to take the initiative to learn. These are key differences between humans and all other life forms we know of. Humans have the ability to learn/discover new things, innovate on existing ideas, and then pass these ideas/skills to other humans - all other animals do not. For example - there are certain animals that use tools (most often a twig or leaf) to aid in the gathering of food. However, the tools are never improved and no new tools develop out of these existing tools - their use is instinctual. So if a computer must be programmed every time it wishes to learn, it is not innovating or learning new things, it is just acting on what it already knows how to do (i.e. it is acting on instinct).

    Anyhow, I apologize for the overly general ideas, but I think you see my point. Now let the flames begin!

  • Math cannot explain the universe; it can only explain how certain aspects of the universe work. For example, the speed of light is constant regardless of point of reference - this is the basis of the special theory of relativity and why we know time is relative to speed and position. However, mathematically this does not make sense. Also, consider all the energy and matter in the universe. Either they existed for all time (however, the decomposition of elements seems to suggest otherwise), which is totally illogical, or they came into existence, but this goes against the *laws* of conservation of energy and matter.
  • "The Earth is not a closed system"

    Depending on how you define a closed system, the earth is within a closed system - the universe. To say otherwise would be saying that forces outside of physical laws are acting on earth.

  • I read an article recently that highlighted the difficulties of maintaining the Moore's Law momentum. Basically, it was the quantum-effect problem retold, and it suggested that there may be a flat spot coming while these difficulties are overcome. However, the article also suggested that the mass market for processors may have to shift away from computing devices and into more general appliances in order to keep the economic momentum going.

    I happen to believe that there will be a relatively small (one or two year) glitch in the curve, but that the push will continue all the way down to nano eventually.

    As to the economics of CPU (memory/IO/storage) production - there are an awful lot of people who still have no personal access to the technology that we (the Slashdot We) take for granted.
  • Downloading the human brain? Come on.

    Realistically - yes, humans will eventually be replaced in most production environments. Totally automated factories are probably within reach in the next 20 years. The social challenge will be how to redistribute the huge profits this brings to those who don't have the skills to get a job in a non-production environment.

    Yes, computers will be blindingly fast in 20 years' time - capable of simulating insect-like intelligence at a level sufficient for automated carpet cleaners that run around at night without bumping into things, or for driving our cars for us on the freeway (yes, a silicon insect intelligence can probably drive better than we do; it is always paying attention).

    Human-like intelligence in silico? Sure, when you have a computer that can accurately simulate the human brain - all the neurons, all the connections - and do it in real time. Even if Moore's law continues until 2100 I don't think we will get there, as our computers are serial, and the brain is massively parallel. Can anyone who knows more about this than me estimate what kind of processing power it would take to simulate a neural network the size of our brain on a serial silicon chip? If we come up with a way of creating billions of artificial neurons and actually physically wiring them together, then we might have something.
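    (A rough back-of-the-envelope answer to that question, using commonly cited ballpark figures - on the order of 10^11 neurons, roughly 10^4 synapses each, updating on the order of 100 times a second; every number here is an order-of-magnitude guess, not a measurement:

        # Order-of-magnitude estimate of serial operations per second needed
        # to step a brain-sized network in real time. All figures are rough.
        neurons = 1e11             # ~100 billion neurons
        synapses_per_neuron = 1e4  # ~10,000 connections each
        updates_per_second = 1e2   # ~100 Hz effective update rate

        ops_per_second = neurons * synapses_per_neuron * updates_per_second
        print(f"{ops_per_second:.0e} synaptic operations per second")  # ~1e17

    On those very soft assumptions you land around 10^16-10^17 operations per second, before counting the cost of modelling each synapse in any detail - which is exactly why the serial-versus-parallel point matters.)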

    Then there is the problem of programming this brain...

    -josh
  • it looks like the former is a much bigger challenge than the latter. the problem is that the human brain is a massively parallel multiprocessing machine, and emulating it in a serial (von Neumann) processor will be a futile attempt. we need to develop algorithms that use the serial characteristics of the processor, rather than try to improve hardware...
  • simulating a human brain in hardware? that has to be the silliest myth of modern computing.

    just think of it this way - the human brain is a massively parallel machine. a computer processor is a serial machine. simulating the former in the latter will be linear in the number of neurons simulated, but exponential (and possibly with pretty big constants!) in the number of connections between them! we can't even properly build neural networks the size of an insect brain, let alone a human one. besides, there's the small issue of neural networks being 'opaque' to their creator - all's good when they work, but when they break it's difficult to figure out why.

    it seems that we'd get farther by concentrating on advancing 'traditional' symbolic artificial intelligence, rather than simulating huge neural networks on puny serial hardware...

    r
    -- "away, connectionism!"
  • I hope that's a quote. I'll vomit if you really mean this.
