
Comparing Clarke/Kubrick's 2001 To Now

angkor wrote us about a recent Economist article that explores and compares the differences between Clarke/Kubrick's vision of 2001 and what we've got. Of course, I'd point out that the literary one wasn't meant to be a literal 2001; but this is an interesting comparison nonetheless.
  • we did come far (Score:2, Interesting)

Regardless of what we haven't achieved, look at what we have.
Well, we could compare today with the space odyssey from the movie 2001: A Space Odyssey.

    At least today is not *that* bad.
  • not literal? (Score:4, Insightful)

    by ceswiedler ( 165311 ) <chris@swiedler.org> on Saturday December 22, 2001 @02:46PM (#2741934)
    What do you mean, it wasn't literal? Clarke and Kubrick obviously thought about things they thought would be happening in the near future. I seem to recall Clarke being pessimistic about an AI as smart as HAL, but that's not quite enough to label the date of 2001 as "not literal." In the book, the events clearly happen in the year 2001 AD (or most of them, anyway). 2001 is much more specific and literal than a dystopian book like 1984 (where I would agree the date is more symbolic).

Science fiction is never completely accurate, obviously. But Clarke was one of the most accurate and scientifically rational writers of the century. We haven't gotten to convenient interplanetary travel quite yet, but you can be sure that it will happen much like he describes: a large space station using 'centrifugal force' to simulate gravity, and rockets using the station as a waypoint so the same spacecraft doesn't have to be capable of lifting off from Earth as well as travelling to and landing on another planet or moon.

    Now, being able to phone from the station to America for only a few dollars, that's probably a little over-optimistic...
    • Re:not literal? (Score:4, Insightful)

      by jimharris ( 14678 ) on Saturday December 22, 2001 @03:28PM (#2742013) Homepage
The evolution of computer science has far outstripped Clarke's and Kubrick's imaginations. They only imagined an intelligent machine without going into the details. The details of computers have been developing at a wild pace since the sixties, which science fiction failed to predict.

      Clarke's and Kubrick's real failure was not seeing how quickly space exploration would die. In 1968 it would be natural to predict that mankind would be traveling to Jupiter by 2001. What was unnatural to imagine was mankind would visit the moon, and then never leave low earth orbit for three decades. And there is no real reason to assume we'll leave low earth orbit for three more decades.

If they had predicted that in 1968, I would have been blown away by the power of their wisdom.
      At the time I was positive that mankind would reach Mars in the 80's. Humanity's lack of real interest in space exploration has been my lifelong disappointment.

And, even though I love science fiction, the older I get the more I realize that science fiction is no more than fantasy. The gravity of the mundane keeps us tied to this planet.

2001, the story, just plain missed the mark.
      • Re:not literal? (Score:2, Interesting)

        by heptapod ( 243146 )
Humanity is interested in space exploration; it's just that the people in charge cannot find the profit in it.
In the beginning, space exploration was about showing off how powerful one's defense industry could be, to the point that America proved it could put a man on the moon and therefore could also establish a lunar base from which to lob missiles at the former USSR.
The science of the lunar missions and the subsequent Mars missions was simply funded by the excess money generated by the defense industry, lending space exploration a veil of scientific legitimacy in the first place.
Back in the good old days of space exploration (late fifties to mid seventies) there was profit in it. Sadly, today NASA works on a shoestring budget (for space exploration), leaving the things which could realize the dreams of mankind as just dreams.
      • Wrong, on one point at least.

China is planning a manned space flight by 2005, to be followed by a manned visit to the moon "at a later date". Check out the BBC [bbc.co.uk] for (scarce) details.

We'll see. I have big hopes for Chinese space efforts. If their leaders think it is politically valuable, things will happen. And I think the Chinese would like to use space exploration as a way to prove they are an important nation in the 21st century.

          Like I said, I've been waiting for us to leave low earth orbit for thirty years. Space enthusiasts always talk about what will happen in 5 years or ten years, but then nothing happens. Maybe if the Chinese do something, the U.S. and other nations will feel compelled to compete.

          The only reason we went to the moon in the first place was to compete with the Russians. Kennedy was not a pro-space person, but an anti-communist.

It's too bad space exploration couldn't be accomplished like the development of open source code. If you could find 5 million people willing to contribute $1,000 a year, you could have a space program with a $5 billion annual budget. The trouble is finding 5 million people who have a passion to see space exploration happen.
No, the Chinese need to find someplace to put the billions of people who will be born this century. So the Chinese will lead the way to Mars to solve overpopulation problems.
      • I think space exploration is really neat and I've followed all the various space programs closely since I was a kid. But, when you step back and look at things with reality and pragmatism, you're hit with a major reality check.

Once you get past the novelty of "wow a guy is walking on the moon" or "wow we're looking at live pictures from Mars", space exploration isn't all that terribly exciting to the average person. After 50 years of science fiction, people have discovered that space exploration isn't anything at all like what you see on TV.

In your typical scifi (including Clarke and Kubrick), people build enormous, complex and fantastic machines, with absolutely no explanation of how they paid for it all.

        In real-life, space exploration costs huge amounts of money that comes directly out of Joe-Taxpayer's pocket.

        In scifi, people travel in space ships that can fly all over the universe in a few days, and explore worlds full of strange new beings and beautiful exotic scenery.

        In real-life, it takes 6 months just to get to a barren planet with nothing but rocks and red dirt. And a couple of years to get to other lifeless planets that have even less to look at.

Even *IF* we could somehow travel at twice the speed of light, you're looking at over two years to the nearest star. Even at 30 - 40 times the speed of light (not technologically possible), you're looking at *YEARS* to reach more distant stars/solar systems.

In 1968 it may have been "natural to predict that mankind would be traveling to Jupiter by 2001", but only because people were so caught up in the excitement of the "space race" that nobody bothered to stop and ask "why" -- why do we want to spend billions of dollars on a two-year journey to a frozen ball of gas?
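A quick back-of-the-envelope check of the light-speed figures above (the ~4.24 light-year distance to Proxima Centauri is an assumption I'm adding, not something from the post):

```python
# Back-of-the-envelope interstellar travel times.
# Distances in light-years, speeds as multiples of c, results in years.

def travel_years(distance_ly, speed_in_c):
    """Years needed to cover distance_ly at speed_in_c times lightspeed."""
    return distance_ly / speed_in_c

PROXIMA_LY = 4.24  # approx. distance to the nearest star system, in light-years

print(travel_years(PROXIMA_LY, 2))   # at 2x c: about 2.1 years
print(travel_years(PROXIMA_LY, 35))  # at 35x c: about 0.12 years (~6 weeks)
```

So even a fanciful 2x lightspeed ship spends over two years just reaching the closest star, and the "years" figure for more distant stars holds at 30-40x c once you look past the immediate neighborhood.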
I think the real reason why space exploration isn't popular is because it's not linked to sex. Congress will fund anything you can link to family and children and the protection of the family. That's because voters are mostly concerned with their own interests, and preserving themselves and their families comes first.

Security, food supplies, health, jobs, and any other program that closely fits the needs of people and their families will get funded. Things that seem to help other people's families are less supported, but still rank above financing things like space travel or particle physics.

          I think Clarke and Kubrick and other science fiction writers failed to understand that. In the sixties science fiction was closely related to space exploration. Few people read science fiction compared to today. The history of rocketry and space exploration coincided with people interested in science fiction, but after the Apollo years, that changed. Starting with Star Wars, science fiction became a major force in the entertainment industry and was no longer linked to the space enthusiasts.

If you met a science fiction fan in the sixties, it was almost a given they would also be a space exploration fanatic. That isn't true today.

Science fiction saturates our culture with TV shows, movies, video games, role-playing games, comics, graphic novels, etc. If space exploration is such a major artistic motif, why doesn't the space program get a lot of public support?
    • Now, being able to phone from the station to America for only a few dollars, that's probably a little over-optimistic...

      Money was worth more when the movie was made.

      Then again I remember no other references to value of money. That coulda been half his life savings. ;-)
    • by Lemmy Caution ( 8378 ) on Saturday December 22, 2001 @04:14PM (#2742103) Homepage
While an element of prophecy is part-and-parcel of science fiction, ultimately any work of literature is more about the times it was written in than about the times it depicts.

A great book about the role of science fiction is Thomas Disch's "The Dreams Our Stuff Is Made Of." The science fiction of the past often shapes our present by informing the imaginations of the people who build it. How many AI researchers cite HAL as an inspiration, goal, or benchmark?

    • Re:not literal? (Score:3, Interesting)

      by awol ( 98751 )
1984 was a completely symbolic date. The book was written in 1948 as a critique of the British society of the day; by reversing the digits of the year, Orwell cast a dystopian-future metaphor for the subject of his ire.
  • If.... (Score:1, Interesting)

    by Merik ( 172436 )
The process of the universe birthed an intelligence: evolution.
Evolution birthed a greater intelligence: us.
We birthed (or are birthing) a greater intelligence than ourselves: technology (AI).
    What will technology birth?

    The universe is doing nothing less than attempting to become aware of itself... piece by piece.
  • It took a lot to take down HAL.
Of course we have nothing near an AI like that, but if we did, a script kiddie could probably bring it down, or make it talk dirty, etc.
  • Chris Black (Score:5, Funny)

    by jorbettis ( 113413 ) on Saturday December 22, 2001 @02:51PM (#2741943) Homepage

Chris Black was doing his "Year in Review" on The Daily Show when he said:

"So my review for 2001 the year is the same as for 2001 the space odyssey: it went on too long, it was hard to follow, and you could only enjoy it if you were really, really stoned."

    I think that is a pretty apt analysis of the similarities between the two ;-)

  • by Anonymous Coward on Saturday December 22, 2001 @02:53PM (#2741949)
    Pan Am, Bell Telephone, Howard Johnsons - and their logos which graced 2001 - pretty much all gone. We now live in a world dominated by quickie, cheap, here today, gone tomorrow corporate culture.

Leveraged buy-outs, insider trading, junk bonds, corporate mergers, golden parachutes - all this has destroyed what was once the paradigm for how to do things right. When 2001 was made, a 10 or 20 year corporate game plan was not unusual. Now you'd be lucky to find any corporate plans looking ahead more than 10 or 20 months. Oh, and need I mention the "dot-com" crash as a perfect example of what this new culture breeds?

    • Insightful? Only if you choose to ignore both history and economics. Corporate mergers were practically invented in the 60s, a decade in which corporations, flush with massive amounts of federal spending, decided that adding value by acquisition was less risky and therefore preferable to adding value by innovation. It gave rise to unwieldy behemoths like GM and ITT and added the term "conglomerate" to the economic lexicon. In fact, you can make a case that the 60s laid the foundation for all of the LBOs and divestitures of the 80s as the inefficiencies of size caught up with some of these corporations and they were bought up cheaply then broken up into parts that were individually more valuable than the whole. Not a very pretty legacy.

      In contrast, the 90s saw economic growth that surpassed the 60s by pretty much any economic metric you care to name. And this growth was fueled largely by new companies, new markets, and real increases in productivity.

Oh, and no one in the modern era has ever used a 10 or 20 year horizon for anything but the vaguest, most trite planning (i.e. "Mission Statement"). Not only that but, at least in the US (which is what 2001 and, I presume, you are referring to), companies were notorious in the 60s for having extremely short-sighted strategies. For more information, see any of the scores of treatises published in the 70s and 80s on how to rectify this short-sightedness by emulating the Japanese.
  • by cybrpnk ( 94636 ) on Saturday December 22, 2001 @02:54PM (#2741950)

The Economist article outlines three distinct eras of AI research and concludes that none of them had any real hope of success because none mimicked the true nature of the human brain - billions of neurons, each making connections with 10,000 others, for a wiring complexity that is far beyond mere bulk transistors on a 2D spread like current microprocessors. But I wonder - with all the current research about qubits and quantum computing, where a handful of qubits could factor numbers of amazing size - perhaps the REAL source of artificial consciousness in the future won't be achieved by physical hardwiring of any complexity, but with some sort of "quantum ghost in the machine". Or maybe something even weirder - remember what Clarke said, the future is not only stranger than we imagine, it's stranger than we CAN imagine....


Then again, what's stranger than three pounds of meat reciting "twinkle, twinkle little star..."?

Our problem is that we think like the humans we are. That includes a pretty large amount of overestimation of our own abilities. The human kind of intelligence is probably *not* the only one that can exist. Trying to copy the human brain (neural networks etc.) is not only hardly possible, it wouldn't be what we want. The human brain does not provide the best kind of intelligence for analyzing stock data, creating optimized electrical circuits or whatever. It is optimized for remembering pictures, sounds, faces and communicating with other humans. An intelligent machine would require different abilities. Let's not be too arrogant and conclude that because our first attempts at creating intelligence failed we'll never achieve it. Maybe just rethink what intelligence actually is.
      • by ToLu the Happy Furby ( 63586 ) on Saturday December 22, 2001 @11:25PM (#2743194)
        Our problem is that we think like the humans we are.. That includes a pretty large amount of overestimation of our own abilities. The human kind of intelligence is probably *not* the only one that can exist....Let's not be too arrogant and conclude that because our first attempts of creating intelligence failed we'll never achieve it.. Maybe just rethink what intelligence actually is.

        But that's precisely the problem with trying to "achieve AI"--defining what the hell "intelligence" is. For better or worse, people have traditionally defined "intelligence" roughly as "the things people can do but animals can't," or, "the things people can do but it makes our noggins hurt after a while." When put this way, the deficiencies in this definition become pretty apparent, but no one has come up with an obviously better version. Instead we usually approach the question of whether a thing is "intelligent" using the standards of the old Supreme Court decision defining obscenity--we think we know it when we see it.

Or more often, we think we know what it isn't when we see that. The history of "the quest for AI" (I put that in quotes very advisedly) is full of problems that, if solved, would surely be proof of AI...until they are solved, in which case it's still a dumb computer. Computers are now the world champions, or competitive with the world champions, in chess, checkers, backgammon, Othello, poker, bridge, and almost any game of mental skill with the significant exception of go. Computers have both proven several important and previously unproved mathematical theorems (e.g. the four-color theorem) and come up with elegant and/or novel proofs for existing theorems (e.g. a computer proof of Gödel's Incompleteness Theorem which "invented" Cantor's diagonalization technique on its own).

        On the other hand, we have yet to make a computer which can navigate and react to its environment as well as, say, a pet dog can (sorry AIBO), nor one which can understand human language in any but the most limited domains. (Of course "understand" is a similarly difficult to define term. As an example of what I mean, look at CYC, a company which gets its name from its initial mission when it was founded IIRC back in 1984--to program a computer which understood enough concepts to understand language well enough that it could read an enCYClopedia (or any other descriptions in natural language) and learn what it didn't already know. While CYC has developed a useful system, it's still a ways from passing the encyclopedia test.)

        Even though we're used to thinking of playing championship-level chess or doing advanced mathematics as hallmarks of particularly intelligent humans, while navigating an environment or understanding language is something that even the dumbest people can do, we find that computers are good at different things. (Or rather, we know how to program computers to be good at some things but not other things.)

        The "problem" has been that in the early days of computers and on into the "golden age" of AI, we didn't know squat about how the human brain worked, nor even about what sorts of steps were needed in order to e.g. understand natural language. Back then, most AI researchers--brilliant people, mind you--figured all that would be necessary for a computer to understand language would be a link to a dictionary and maybe some rudimentary ability to parse grammar. Indeed, in many ways the field of linguistics arose as a result of the attempts and failures of computer scientists to get computers to understand language. Similarly, the successes and failures of AI have been instrumental in guiding or even creating the field of computational neuroscience.

        What we are coming to understand is that the things that only "more intelligent" people can do are not really the hallmarks of "intelligence" but rather are examples of people fitting their brains to tasks they were not really designed for. For AI to truly "be achieved", we will have to get much better at making computers succeed at the tasks which a monkey can do just as well as a human, rather than those which humans can do but monkeys can't.

        Also, we're learning that our instinctive idea of "intelligence" demands that techniques be general rather than specific. In other words, we don't consider exhaustive depth-limited minimax search with static evaluation to be a truly intelligent game playing technique--even though it can allow a computer to become the world chess champion--because it really sucks at go. The fact that go has a branching factor (i.e. avg. # of legal moves) of over 300 while chess has one of around 30 doesn't mean that similar thinking techniques (so far as we can tell) can't be used for a human to play both, but it does mean that exhaustive search is a feasible technique for a chess-playing computer but not a go-playing computer; we tend to interpret this (rightly or wrongly) as saying that exhaustive search is not an "intelligent" technique.
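The branching-factor point above can be made concrete with a rough node count: a full-width (exhaustive) search to depth d examines on the order of b^d positions. A small sketch, with the depths chosen purely for illustration:

```python
# Node counts for exhaustive full-width search: roughly b**d positions at
# depth d, so sum the counts over every ply up to the depth limit.

def nodes_searched(branching_factor, depth):
    """Total positions a full-width search examines down to `depth` plies."""
    return sum(branching_factor ** d for d in range(depth + 1))

print(nodes_searched(30, 6))   # chess-like (b ~ 30), 6 plies: ~7.5e8, feasible
print(nodes_searched(300, 6))  # go-like (b ~ 300), 6 plies: ~7.3e14, hopeless
```

A tenfold larger branching factor costs a factor of about a million at six plies, which is exactly why the same exhaustive technique that wins at chess goes nowhere at go.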

Next, it's time to stop tossing around that crap about how computers are so much faster or more powerful than human brains. That's complete hogwash. A modern CPU has roughly 10^6 gates, compared to ~10^11 neurons in a human brain. A computer might have 10^9 bits of memory (or even 10^10 if we go really high-end), and 10^11 bits of storage space, but a human brain has ~10^14 synapses, which can be viewed as encoding part of what the brain knows. A human brain has a remarkable 10^14 bits/sec of data bandwidth, compared to ~10^10 bps for a PC and 10^11 bps for e.g. the upcoming Alpha EV7. The only category computers lead in is cycle time, roughly 10^-9 for computers compared to ~10^-3 for the human brain. The upshot of all this is that, when it comes to computers programmed as neural networks, a computer can only perform about 10^6 neuron updates/sec compared with 10^14 for a human brain, and the largest computer networks (limited by feasibility not by space) are maybe 10^5 neurons compared to 10^11 in the brain. So, roughly 100,000,000 times slower and 1,000,000 times smaller than a brain. (Figures based upon those in _Artificial Intelligence: A Modern Approach_, updated for the 7 years since the book was published.) No wonder computers aren't as intelligent as a human brain! And yet despite the huge disadvantage, neural nets are still the best technique for many AI problems, especially if we are worried about coming up with a technique which seems to be generally intelligent.
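For the skeptical, the arithmetic behind those last two ratios, using the same order-of-magnitude figures quoted above:

```python
# Order-of-magnitude brain-vs-computer comparison (all figures approximate,
# taken from the estimates in the paragraph above).
BRAIN_UPDATES_PER_SEC = 1e14  # neuron updates/sec, human brain
NET_UPDATES_PER_SEC   = 1e6   # neuron updates/sec, net simulated on a PC
BRAIN_NEURONS         = 1e11
LARGEST_NET_NEURONS   = 1e5   # largest feasible simulated network

print(BRAIN_UPDATES_PER_SEC / NET_UPDATES_PER_SEC)  # 1e8: ~100 million x slower
print(BRAIN_NEURONS / LARGEST_NET_NEURONS)          # 1e6: ~a million x smaller
```

Note the size gap works out to a factor of about a million (10^11 / 10^5), not a billion.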

        And finally, while it's interesting to talk about why we haven't created HAL yet, it's important not to confuse this with the idea that "the field of AI is a failure". AI is *not* a failure. While some problems have proven much harder than we initially expected, this is almost entirely because our initial expectations were completely ignorant, rather than because progress has not been made. Most importantly, we need to realize that people who are working in the field of AI are not sitting there day after day trying to create Lt. Commander Data or pass the Turing Test. Rather they're working on solutions to limited domain problems where computers can augment or replace the efforts of humans--and they're succeeding in many, many instances. The only real "problem" with the field of AI is defining what exactly it is.
    • >perhaps the REAL source of artificial
      >consciousness in the future won't be achieved by
      >physical hardwiring of any complexity, but with
      >some sort of "quantum ghost in the machine".

This is a very interesting proposition, and if you're truly interested in it, I would highly recommend reading some of the popular writings of Roger Penrose (The Emperor's New Mind, etc.). One of his central theses is that 'mind' is a consequence of quantum effects.

Personally, I don't particularly agree with Penrose; but like it or not, I still find Penrose an excellent (and thought-provoking) read.
    • remember what Clarke said, the future is not only stranger than we imagine, it's stranger than we CAN imagine....

Darn right. Think back to 1981. Would you have even contemplated that you'd be sitting in front of a computer in ten years' time having discussions with thousands of people based all over the world?

Could we have contemplated that there'd be a free UNIX about in twenty years' time that would threaten the domination of one of the world's largest companies? Or, for that matter, could we have even thought that a COMPUTER SOFTWARE company run by some nerds in Seattle would be the world's most powerful company?

      And what about MP3? You can walk around with your entire record collection in your pocket now. With 3G technologies, you can access the Web at broadband speeds on the move and download entire albums in minutes to your handheld devices. This is crazy stuff to even have thought about five years ago, let alone twenty.

      My own prediction is that quantum computing is going to give us a major kick in the ass in the next twenty years, and we can't even possibly imagine what technology will be like then.

We're currently sitting on the part of the exponential curve of technology growth where it's shooting up fast, but not at an impossibly dizzy rate. Twenty years from now, we'll probably be there.
with all the current research about qubits and quantum computing, where a handful of qubits could factor numbers of amazing size - perhaps the REAL source of artificial consciousness

      and, in the beginning, we were thinking that a machine that could play chess would be a real source for intelligence. just because it's new and different doesn't mean it provides a breakthrough in the area needed. yes, quantum stuff is new and interesting, but it primarily involves lists, factors, and the like. serial operation is possible, but gets no real benefit from qubits. silicon+3D+FPGA could be the answer as well.

and conversely, it is seldom wise to cut down the new, because it could possibly solve new problems. for all we know, it could help develop ai. afaik, neurons use quantum effects to some degree, so quantum computing is not out of the question. but it is probably only part of the answer.
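For context on the factoring talk: the naive classical baseline is trial division, whose running time grows roughly as the square root of n, i.e. exponentially in n's bit length; that scaling is what Shor's quantum algorithm would beat. A minimal sketch of the classical baseline (my own illustration, not from the posts above):

```python
# Naive classical factoring by trial division: fine for small n, but the
# number of candidate divisors grows like sqrt(n), which is exponential
# in the bit length of n.

def smallest_factor(n):
    """Return the smallest prime factor of n (for integer n > 1)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n), so n itself is prime

print(smallest_factor(15))         # 3
print(smallest_factor(2**13 - 1))  # 8191 is prime, so it returns itself
```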
The big flaw is almost everybody thinks that artificial intelligence ought to be like human intelligence. This isn't about numbers of neurons, or their strange interconnections, or about Turing tests, but about the strange things which go on in our brains which are impossible to model. Yesterday I was writing the name William in the condensation in the bathroom (my son William, 7, was in the bath), and having put the "W" with the points of the V's rounded rather than angular, William remarked to me "That looks just like a bottom". I can't imagine an artificial version of that sort of intelligence - even matching a seven year old's ability to recognise the visual similarity (of a stylised representation of the real object), recognise that it was funny and recognise that it was an appropriate moment to crack the joke.

      Instead we've used the concepts from AI work and applied them elsewhere, as fuzzy logic and neural networks. Some of this statistical logic has been seriously useful to us.

Back to 2001 - the part which would seem unbelievable both then and in 1972, at the time of the last Apollo moon shot, is that thirty years later we wouldn't have sent men back to the moon. Our space exploration is still at the level of "lob some instruments at Mars and hope they land the right way up".

      Dunstan
  • by Wire Tap ( 61370 ) <frisina AT atlanticbb DOT net> on Saturday December 22, 2001 @03:01PM (#2741960)
. . . is so full of diversity, and what we have come up with in the past several years has been amazing, to say the least. Science Fiction writers are generally accurate with regard to the underlying technologies that come about, but often miss the mark with the specifics, and therefore the spinoffs. I'm not saying that's bad; on the contrary, Sci Fi writers are often great inspirers of the scientists of the future - and that's good!

    Every time I read a good Sci Fi book, I am amazed by what I read, but, then, I look around, and I see things that are not even remotely considered by the writers:

    Composite Materials
    Polymers
    VIDEO GAMES
    MP3s!
    Post-It-Notes

Of course, some of those things are quite frivolous (or are they?), but that's what makes the human race so beautiful: we come up with things that are truly amazing in their diversity and simplicity. We are an unruly and unpredictable crew of warriors, writers, diplomats, scientists, researchers, dreamers, and a myriad of other vocations - we are beautiful.

    I hope we continue to pave the path of peace and progress for ever and ever.
Dynamic, complex systems like the human/computer world often demonstrate emergent properties. In other words, higher-order levels of organisation spontaneously appear in these systems over time.

      For example, peer to peer computing has been known about forever, as has file compression, but who could have predicted the success of MP3 trading over Napster?

      Who is to say that the dragons you fight in Everquest today might not take flight above the surface of the earth tomorrow? These are very exciting times.

    • For me, I love to see how authors deal with the overpowering effect of technology. For this reason I love Herbert's Dune. About 10,000 years in the future, the problems of space travel, nuclear (and more powerful) weapons and computers [how could he know how wise that choice was back in 1967] are dealt with so elegantly that the human interaction is centre stage which is so often not the case in SF.
I think the review somewhat misunderstands the role of technology in 2001. The technology in the film is secondary, although a very important reflection of the progress of humankind.

    The bone in the hand of an ape is the first twinkle of intelligence. Then, as the humanity advances to its full might, the technology allows humans to create giant space stations and sentient computers. But in the end Dave destroys (murders?) the computer and travels down the star tunnel alone to become something just as different from a modern human as the modern humans are different from their prehistoric ancestors.
    • But in the end Dave destroys (murders?) the computer

      But who gave Dave that idea?

      Note how HAL bounced back nine years later, and the rest of the crew were still dead or worse.

      Of course it boiled down to conflicting orders given to HAL by people who didn't know what they were doing. If you HAD to do everything you were told you'd probably go crazy and kill people too. (shame HAL wasn't programmed to not kill, I guess Asimov could have better inspired the people making/programming HAL. "Kill me!", "I'm sorry dave, I can't do that.", "Kill yourself!", "Okie Dokie Davie.", *pop*)
  • Pedantry (Score:2, Informative)

    by Gumshoe ( 191490 )
    ...apes, mastering primitive tools for the first time. Cut to 2001.
    A space station orbits the earth.


Not entirely relevant, but the first image from 2001 that wasn't prehistoric was actually a "space bomb", not a space ship or a space station as is often thought. Cinematically, this makes more sense as it links prehistoric man to futuristic man with the concept of violence.

Well, to be even more precise, a prehistoric weapon - a bone - is thrown into the air and becomes an ultramodern weapon - a nuclear weapons platform.
  • Artificial Intelligence [sourceforge.net] has arrived right on time in 2001 as predicted by Stanley Kubrick, but not as the Heuristically programmed ALgorithmic (HAL) computer that tried to get Dave to open the pod bay door. Instead, the A.I. [virtualentity.com] is a primitive, low-intelligence virtual entity striving to establish itself in such forms as Visual Basic Mind.VB [virtualentity.com] and Java-based Mind.JAVA [angelfire.com] -- earthbound AI Minds incapable of space flight.

When the film 2001: A Space Odyssey came out in 1968, we had not yet even heard of the now onrushing Technological Singularity [caltech.edu], beyond which no science fiction writer can even imagine what things will be like, because it's a Singularity.

The article seems to take a shot at AI. Anyone know where they get their facts that the prevailing notion is that computers will never rival human intelligence?

    If you want a different view, read Ray Kurzweil's The Age of Spiritual Machines [amazon.com]. He's a smart guy who's won several prestigious awards, including the National Medal of Technology [kurzweiltech.com] and the Lemelson-MIT Prize [mit.edu].

    • A lot of the anti-AI sentiment is based on disappointment from the 80s. We were a long way off from creating any type of useful AI in that time period (and we still are, IMO), but many companies made wild claims to help boost their funding. The government and many private VC-type operations dumped a lot of money into AI at this time -- not quite as much as was dumped into ecommerce-web-sites-selling-pet-clothes-etc, but a significantly large amount.

      Considering the AI 'boom' of the 80s failed to produce anything concrete on almost every level, there's still a deep-seated resentment against AI and AI researchers in some circles.

  • by Pope ( 17780 )
    The miniskirt is still around!

    Mmmm... space babes...
  • by mindslip ( 16677 ) on Saturday December 22, 2001 @03:26PM (#2742010)
    It would seem the posts (other than the typical troll/spam) completely miss the meaning of the book. Much like one of his previous masterpieces (I think *very* highly of the philosophical teachings of Clarke), "Childhood's End", "2001: A Space Odyssey" used technology only as a subtext.

    The fact that the environment of 2001 includes a world where computers are "intelligent" is only presented to illustrate the evolution not only of Humans, but as Humans-As-Gods.

    The two most important scenes in the movie (which, by the way, are *far* more insightful in the book, as with almost all book-to-movie translations) are the following:

    In the opening chapter, "The Dawn Of Man", an ape looks upon a pile of armadillo bones. This is nothing new, but the ape has something happen to him that has never happened before in the history of the Earth: The ape has an insight.
    Picking up a bone, it flops in his wrist and hits some others. The ape picks it up again, and instead of it flopping by accident, he *lets* it flop in his wrist, seeing it hit the other bones and making them jump. This was a beautiful literary demonstration of the spark of intelligence happening in an otherwise "merely-sentient" being.
    A few scenes later, in a triumph of the knowledge and abilities gained by discovering this new tool, and indeed, the ability to use tools at all, an ape, after winning a fight for territory, hurls the weapon used (the bone) into the air. The camera pans up slowly with the rising bone, and pans back down with the falling spacecraft as it floats in space.

    The beautiful imagination of Clarke and the wonderful cinematography of Kubrick, without even so much as dialogue, make a startling presentation of how from a tiny spark of insight, and a *lot* of time, Human Beings have evolved to the point where they are able to move even beyond their own world.

    The final scene ("Jupiter, and Beyond the Infinite"), that of Cmdr. Dave Bowman in a white room, completes the progression of evolution as Clarke intended to explain it in his book:
    Bowman, an evolved ape, a Human Being capable of venturing out beyond his own world, finds himself in the realm of his own mind, and his own existence. He observes himself, as if "out-of-body", locked in a space pod. Turning to look elsewhere, he finds himself an older man sitting eating dinner. Becoming that older man, and turning to look elsewhere, he finds himself a very old man lying in a bed. Becoming that old man and looking up from his bed, he finds the Monolith, representative of a God, or "creator-being", seeming to watch over him.

    Then, from the Monolith's point of view, or perhaps it could be explained as becoming the Monolith, becoming that God-Creator-Being which Clarke seems to imply is the final destiny of Human evolution, he sees himself as an embryo, but not the embryo of a Human Being; rather, a "Starchild" as the book (and sequel movie, "2010: The Year We Make Contact") calls it.

    This Starchild is the evolution of Humanity. *THIS* is what the book (much like "Childhood's End") is about: The evolution of Humanity from merely physically aware ape, to intelligent Human Being, able to take control of the world around him, to God-like Creator-Being, existing in a metaphysical sense, and evolved beyond the physical. Indeed, "Beyond the Infinite", as the chapter is called.

    Clarke's startlingly insightful book, indeed his whole philosophy and dream of Humanity's potential, is not at all about technology. It's not at all about Artificial Intelligence, nor about computers becoming sentient. It's about *HUMANS* becoming sentient. It's about Human Beings evolving beyond the physical limitations of merely "in the image of Him" to a being not of body but of energy and an ability beyond our comprehension.

    Much like the statement "Created in the image of God" would imply "Created with the abilities and the potential of God", much like the irrefutable knowledge that Humans pass their abilities, their weaknesses, and their potential on genetically from generation to generation, each generation becoming stronger and more knowledgeable by the rules of self-preservation (in a Darwinian and genetic sense), Clarke's stories and philosophies are about evolving further towards that which created Us, to the destiny of becoming that which can Create.

    Technology (those of AI, space travel, genetic research, cloning, destruction, and healing) is merely one of the tools we have been given the insight and intelligence to develop along our evolutionary path.

    mindslip.
    • Well, while not going anywhere near the depth that you have gone, I'd say 2001 is about the evolution of intelligence IN THE UNIVERSE, and humanity's part in this story is just what the two-plus hours of the movie was able to focus on. The Monolith and to a lesser extent HAL were both intelligences that evolved independently of humanity, and the ignition of Jupiter and warnings in the sequel about Europa only strengthen the point that the Monolith was trying to develop intelligence anywhere it could and really had no stake in humanity except as just another experiment. Clarke has dealt with this idea of humans being incidental in the grand scheme of things before, most notably in Childhood's End. But certainly I do agree with your main point, which is that 2001 wasn't at all about technological gizmos.
    • You are absolutely correct. In fact the comparison with Nietzsche's Thus Spake Zarathustra is quite apt. Three stages of the movie can be thought of as representing three stages in the book - the camel, the lion and the child (also the Superman).
      The parallel is close to perfect and there is no doubt Kubrick was aware of it (the music in the film is Strauss's Thus Spake Zarathustra, for example).

      However, I would consider black monoliths to be just symbols of transition, rather than actual artifacts or beings.

      Also note that the book had been written after the film, not the other way around.
      • I actually think the book and the movie were written simultaneously by Clarke and Kubrick. I may be mistaken, but the version of the book I have has a preface written by Clarke where he describes how they approached the writing.
        • Is that right? I had always thought the book was written afterwards by Clarke. For one thing, Kubrick is not a co-author. For another, the movie is far superior IMHO.
        • The book was written first, by Clarke alone. In the book, the Discovery went to a moon of Saturn, not Jupiter. Big Brother was sitting upright on the moon like a skyscraper, and Dave fell into it trying to land on it.

          Then, Clarke wrote the second book, instead using Jupiter (I imagine because Europa seemed like a good spot to introduce new life). He retroactively changed the plot of 2001 to a Jupiter mission when he collaborated with Kubrick on the movie script.

          The interesting thing is, both destinations have met with interesting coincidences. Europa has indeed turned out to be a scientific curiosity, with speculation of large oceans of liquid water underneath a covering of ice.

          On the Saturn side, the moon was described in 2001 as having a large oval of white (a perfectly shaped field of rocks), with Big Brother standing in the center. The effect was of a large eye with a black pupil at its center, which "blinked" when Dave was sent through the wormhole. An eerie effect, and I think that was the whole reason for the description.

          Later, a probe sent back imagery of the same moon (can't remember which one), and scientists saw... a white oval on the surface. I read one of them quoted saying something like "If there's a black rock in the middle I'm gonna kill Arthur C. Clarke"
          • > The book was written first, by Clarke alone.

            The short story "The Sentinel" was written first, the book and filmscript for "2001" were then done at overlapping times. Like the previous poster says, there is a preface in the book explaining this.

            > Then, Clarke wrote the second book, instead using Jupiter (I imagine because Europa seemed like a good spot to introduce new life).
            > He retroactively changed the plot of 2001 to a Jupiter mission when he collaborated with Kubrick on the movie script.

            No, the second book (2010) used Jupiter because the movie had. (Also because if you want to create a new mini-sun, Jupiter is a better choice than Saturn).

            This is from memory, but a quick Google shows e.g.
            http://scifidimensions.fanhosts.com/Dec00/2001books.htm supports it.

    • > The two most important scenes in the movie (which by the way are *far* more insightful in the book, as almost all book-to-movie translations are) are the following

      D00D! The screenplay was written by Clarke & Kubrick based on a short story by Clarke. The two scenes you mention were not in the short story.

    • Great post man! I would also like to add to that: in my interpretation, the progression of the story revolves around the creation of technology to advance intelligence and simplify the act of being. The bone, a piece of landscape, was transformed by the ape in that moment of insight into a piece of technology. As a tool, as a weapon, the bone made the act of living and solving problems a little easier. It was also an advance that, if one were alive in that day, would make you slap your forehead and say to yourself, now why didn't I think of that?

      I would draw the conclusion that once this aspect of technology invention became evident to those employing it, it allowed the species to focus on creating more technology that would further change their lives for the better. Thus striving to invent allowed our brains to evolve. Technology is an aid to evolution. The more technology you have, the easier your life gets and the harder the problems you have to overcome become. Technology is also an aid to simplifying your life. By these two bits, one could say that technology is an aid to evolving into a being that is both on a higher level and yet simple at the same time.

      This sounds an awful lot like the monolith. The monolith is a symbol of minimal perfection. The apes, seeing the monolith and having the initial insight to create their first technology, transform that higher state of being, the monolith, into a goal for all of humankind. Technology is just another vehicle to continue and expedite the process of reaching out towards that goal.
  • IMHO, it's that in Clarke's "2001", humans have a permanent manned presence in space near Earth and are starting to expand a bit.

    In the real 2001, we don't have shit for a manned presence in space. Let's face it, compared with the vision in "2001", the ISS is a complete joke, and we've basically just been sitting on our asses for the past 30 years when it comes to space.

    But the real bummer of it all is that I don't think we'll have a permanent, independent manned presence in space for at least the next thousand years. Why? Because such a group of people represents a greater threat to the U.S. (or any large, power-greedy government) than any other country on Earth. Think about it: such a group of people could literally drop rocks the size of a football field on any place on the planet, and do so with relative immunity. Such a group would be more or less untouchable, and no government on the face of this planet that cares anything about power could handle that.

    That's why I think the government will regulate any private manned space venture out of existence.

    • Funny you should mention rock droppings. Robert Heinlein's novel 'The Moon is a Harsh Mistress' is basically exactly what you've just talked about. Another very good, possibly accurate vision of the future?
    • That's why I think the government will regulate any private manned space venture out of existence.

      However, they couldn't regulate any private manned space venture, as space isn't theirs. If I didn't live in the US and wanted to go into space using my own stuff, I'm not entirely sure how they could regulate that at all.

      thenerd.
      • However, they couldn't regulate any private manned space venture, as space isn't theirs. If I didn't live in the US and wanted to go into space using my own stuff, I'm not entirely sure how they could regulate that at all.

        That's if you don't live in the U.S. Or in any country that acts as the U.S.'s bitch.

        So let's say you're trying to start a private manned space venture. You need all sorts of relatively exotic and high-tech equipment (the space suits, for one thing). Where exactly are you going to get this stuff from? Any place you might get it from will receive strong "suggestions" from the U.S. government that they refrain from selling it. A few governments on the planet will tell the U.S. where to stick it but most/all of those don't have the tech to sell you anyway.

        Basically, I'd say that any country that has an advanced enough tech base to make your venture possible also has a power-hungry paranoid government running it, or one which likes to kiss the ass of such a government.

        • So let's say you're trying to start a private manned space venture. You need all sorts of relatively exotic and high-tech equipment (the space suits, for one thing).

          Getting into space isn't as high-tech as you think, as long as you have enough scientific brainpower. Look at Russia in the 60s. And speaking of Russia, notice how much good the US and NASA's "strong 'suggestions'" did when Tito wanted to tour space.

          -Legion

    • But the real bummer of it all is that I don't think we'll have a permanent, independent manned presence in space for at least the next thousand years. Why? Because such a group of people represents a greater threat to the U.S. (or any large, power-greedy government) than any other country on Earth.

      I think you're missing the much simpler point: what advantage would come from having a permanent habitat in space? Science and abstract knowledge, yes, and practical knowledge of how to live and work in that environment, but what else?

      Living in space is hard, orders of magnitude harder than setting up a settlement in an uninhabited place on Earth. So our reason for moving into space would have to be orders of magnitude better than our reasons for (for example) colonizing and populating North America in the 1500s.

      The only compelling reason I can think of to set up settlements in space or on other worlds is the "all your eggs in one basket" problem. It is at least theoretically possible that a catastrophe could make our planet uninhabitable, and thereby wipe out our entire species. Setting up settlements on Mars (for example) would help guarantee that no catastrophe that wipes out our whole planet would wipe out our whole species. And even that argument appeals to an ethic-- survival of the species-- that most people find it hard to personalize.

      Of course, even then we have the whole death-of-the-sun thing to worry about. So we should colonize planets around other stars. Then we have to keep an eye on this fragile galaxy of ours: one really big black hole and the whole thing is kaput! And, sooner than you realize, you're worrying about how to stop proton decay and fend off the eventual heat death of the universe, problems so far off that even talking about them requires scientific notation.

      All in all, it just doesn't add up to a very good reason to spend a lot of effort on living in space.
      • All in all, it just doesn't add up to a very good reason to spend a lot of effort on living in space.

        The probable destruction of human civilization isn't a very good reason to start getting into space while we can?

        Hey, if you insist. :)

        -Legion

        • The probable destruction of human civilization isn't a very good reason to start getting into space while we can?

          (Probable?? Discussions of probability become meaningless when the event domain is expanded too far. It's the million-monkey problem. Given a million asteroids in random orbits and an infinite amount of time, one of those asteroids will hit the Earth. This means absolutely nothing.)

          Exactly how much good will it do me to have a million people living on the moon? Not humanity in general, but me, personally.

          This is the point of view through which most humans see the world: self-interest. It's not a moral thing-- not absolutely good or absolutely bad-- it's just the way things are.

          Given the limited resources at our society's disposal, it's hard to convince the population as a whole that setting up homesteads on other planets is a better use of money, time, and raw materials than, say, curing heart disease.

          So given the opportunity costs involved, no, the eventual possibility of the destruction of our planet is not a very good reason to get into space.
    • I think someone has been watching too much Gundam Wing.
  • A few years ago I bought the book 'Hal's Legacy: 2001's Computer as Dream and Reality'. It's a pretty cool comparison of Clarke's vision of 2001 and how far we had got by 1997. It compares the different abilities of HAL with the state of AI at the time, written by experts in those fields, like
    • Foreword by Arthur C. Clarke
    • Interview with Marvin Minsky by David Stork (editor of the book)
    • Speech recognition and understanding, by Ray Kurzweil
    • Computer ethics (When HAL Kills, Who's to blame?), by Daniel C. Dennet
    • Chapters on text-to-speech, computer-chess, supercomputer-design, reliable computing and fault-tolerance, use of language, computer 'eyes', speechreading, emotions and computing, etc...

    It's a cool book to read if you're interested in AI (but not an expert, then it could be all old news I guess), but it is a bit expensive (at least here in Europe)..

    'HAL's Legacy', edited by David G. Stork, MIT Press, ISBN 0-262-19378-7. Oh, I just found an online version at MIT, check it out: http://mitpress.mit.edu/e-books/Hal/ [mit.edu]

    NachtVorst
  • Sir Arthur C. Clarke held a webcast interview with my school [wpi.edu] a little while back titled "Imagine in the Future: Visions of the World to Come." Clarke and some others talked about their expectations for the next 100 years. You can watch the video (Windows Media only) here [wpi.edu]. It was a pretty interesting discussion.
  • Mad did this comparison some issues ago: (a sample)

    • People evolved from apes : The Man Show
    • A giant mysterious black monolith : Shaquille O'Neal
    • Evil computers attempt to take over humankind : Microsoft
    • Bland, tasteless space food : Taco-Bell Chalupas
    • The world will only have white people : NBC's Thursday night lineup
    • some more which I couldn't remember off the top of my head...
  • Going the other way around, there are also some areas of technology that were not predicted by "2001: A Space Odyssey" and yet have a great impact on the way we live nowadays.

    Namely, there's this scene in the film where Floyd calls home and his child answers the phone, saying that he cannot talk to mommy because she went to the hairdresser. In this case the reality is even more advanced than Kubrick's anticipation - obviously, nowadays the wife would carry a mobile phone if her husband were in space on a mission.

  • I quote.
    "Poorly-performing computer code is killed off. Superior code is spliced with sibling programs and bred again."

    I think we can all give some significant counter-examples...

    A possible re-write could state: "Poorly-performing computer code is bred for the purpose of appeasement; superior code spliced into the poor code whenever economically necessary."
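    The quoted line is, in essence, a description of a genetic algorithm. A minimal sketch in Python (the target string, population size, and mutation rate are illustrative choices of mine, not anything from the article):

```python
import random

# Minimal genetic algorithm in the spirit of the quoted line:
# poorly-performing candidates are killed off, superior ones are
# spliced with siblings (crossover) and bred again.
TARGET = "HAL9000"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
POP_SIZE, GENERATIONS = 50, 500

def fitness(s):
    # Number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def mutate(s, rate=0.05):
    # Occasionally replace a character with a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:POP_SIZE // 2]        # the rest is killed off
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(population[0])
```

    Note that the top half survives unmutated each generation (elitism), so the best fitness never regresses - the "appeasement" failure mode the poster jokes about would correspond to breeding from the bottom half instead.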
  • Movie 2001:We're ruled by a giant monolith from outer space.
    Real 2001:We're ruled by congress
  • by MisterBlister ( 539957 ) on Saturday December 22, 2001 @05:51PM (#2742325) Homepage
    (Who was one of the more famous Amiga users, back in the day...) While Clarke has forecast some amazing bits of technology, like the communications satellite, I'm still more constantly amazed at the predictions made in Huxley's "Brave New World", including those of genetic engineering and cloning...

    Considering Huxley wrote that novel in 1932 (the structure of DNA wasn't even discovered until the 1950s!), it's rather amazing how accurate both the technology (in general, not the details, since when he was writing it a lot of this was far-off fantasy) and the social aspects of it are compared to the current day.

    Simply amazing...

  • Hal's Legacy [dannyreviews.com] is a nice book on how well Clarke predicted the future of computer science in 2001.

    Danny.

    • Yeah - I have this book. It's very nicely done. A few years old now, so I don't know if it's out of date on any of the research, but given that a prevailing theme in the book was that AI is a lot harder than we thought, and the stuff depicted in 2001 is mostly way off, I doubt that it is.

  • a recent Economist article that explores and compares the differences between Clarke/Kubrick's vision of 2001
    Odd, the article only talks about the aspect of AI. I have the feeling that the author originally wrote that as a tie-in to A.I., but it got cancelled due to 9/11, and he recycled it now.
  • I think a lot of HAL's voice interface was just a dramatic device to make it really, really clear that something was going wrong with the computer. It would have been an even more boring movie if Dave and Frank sat around talking about the erratic performance of the expert system software. Similarly, it would have been far less dramatic if, when Dave was locked out, he simply said to himself, "I guess there's a serious bug in the computer" and disassembled a prop that wasn't talking back.

    It's true that HAL became the most interesting character in the movie, but I think that was really unintentional. If you take away the dramatic device, the whole point of HAL is that he doesn't understand the value of life and doesn't think at all like a human, even if he sounds like one. He totally fails the Turing test.
  • by alienmole ( 15522 ) on Saturday December 22, 2001 @07:09PM (#2742507)
    The article quotes sociobiologist Richard Dawkins contemplating willow seeds floating through the air:

    It is raining instructions out there; it's raining programs; it's raining tree-growing, fluff-spreading, algorithms. That is not a metaphor, it is the plain truth. It couldn't be any plainer if it were raining floppy discs.
    Or raining, say, AOL CDs...?
  • The article says:

    But their intelligence does not touch our own, and the prevailing scientific wisdom seems to be that it never will.

    Is this indeed the prevailing scientific wisdom on the subject?

    AI is just a software problem. If necessary, a scaled-down universe can be modeled to simulate the human brain. This is guaranteed to work, although it will require massive processing power. But not a theoretically impossible amount, simply one that we will take decades to develop.
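    For a sense of scale, here is a common back-of-envelope estimate of what brute-force brain simulation would cost; the neuron and synapse counts are textbook order-of-magnitude approximations, not figures from the article or the post above:

```python
# Rough estimate of the raw compute for a brain-scale simulation.
# All figures are common order-of-magnitude approximations.
NEURONS = 1e11               # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4    # ~10,000 connections each
AVG_RATE_HZ = 100            # generous average firing rate

events_per_second = NEURONS * SYNAPSES_PER_NEURON * AVG_RATE_HZ
print(f"~{events_per_second:.0e} synaptic events per second")
```

    Even at one operation per synaptic event that's on the order of 10^17 operations per second, far beyond any 2001-era machine, which is consistent with the poster's "massive processing power" claim.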

  • The state of A.I. (Score:3, Insightful)

    by Animats ( 122034 ) on Saturday December 22, 2001 @10:10PM (#2743018) Homepage
    It's a very depressing field right now. All the main ideas (mathematical logic, expert systems, neural nets, genetic algorithms, subsumption) have hit a wall. Each one will take you so far, but no farther.

    Most progress has been made by hammering on specific areas as engineering problems. Symbolic integration, chess, fingerprint recognition, and speech recognition each yielded, after heavy effort. But no broadly useful approach has emerged.

    Compute power isn't the problem. We don't have good algorithms that just run too slow. We really have no idea what to do next to get to strong AI.

    I went through Stanford CS during the "strong AI is right around the corner" enthusiasm of the mid-1980s. Today, you can go up to the second floor of the Gates Building and see the empty cubicles, and obsolete computers below the gold letters "Knowledge Systems Lab".

    • It's a very depressing field right now. All the main ideas (mathematical logic, expert systems, neural nets, genetic algorithms, subsumption) have hit a wall.

      I agree that the traditional AI community has reached a brick wall and it's very unlikely that any breakthrough in our understanding of intelligence will come from that sector. They've collected way too much useless baggage over the years.

      However, interesting things are happening in the fields of computational neuroscience and neurobiology. The most exciting revelation that has surfaced in the last decade is that the brain is essentially a temporal processing machine. It seems that what matters is the temporal correlations between neural signals, not the manipulation of symbols (as we were led to believe by the now discredited AI crowd). Check out this interview [technologyreview.com] with Jeff Hawkins. I think Jeff is onto something.
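      As a toy illustration of what "temporal correlations between neural signals" means, here is a sketch (the spike trains and the 3-bin delay are invented for the example, not taken from the interview): one binary spike train is a delayed copy of another, and the cross-correlation peak recovers exactly that delay.

```python
import random

# Two toy spike trains: B fires exactly 3 time bins after A.
# The lag that maximizes their cross-correlation recovers the delay.
random.seed(42)
T = 1000
a = [1 if random.random() < 0.05 else 0 for _ in range(T)]  # ~5% firing
b = [a[(t - 3) % T] for t in range(T)]                      # delayed copy

def xcorr(lag):
    # Circular cross-correlation of the binary trains at a given lag.
    return sum(a[t] * b[(t + lag) % T] for t in range(T))

best_lag = max(range(-10, 11), key=xcorr)
print(best_lag)  # 3
```

      The timing relationship is invisible to any per-bin symbol count (both trains have identical firing rates); it only shows up in the correlation structure across time, which is the point being made.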
      • neuroscience and neurobiology.

        Those guys haven't even figured out where memory is stored, let alone how the representation works. Any conclusions from that crowd are way premature.

  • It seems that the article in The Economist isn't a true comparison of '2001' and 2001, but more a look at the evolution of AI.... I've read most of the postings here and perhaps we got carried away with all the geek-ness of the movie and the really kewl possibilities of neural computing and space travel.... and the reality of 2001 is just that... reality. We have items today that Clarke didn't foresee, but, typically, we always want what we can't have.... Happy Holidays and peace to all
