Comparing Clarke/Kubrick's 2001 To Now 177
angkor wrote us about a recent Economist article
that explores the differences between Clarke/Kubrick's vision of 2001 and what we've got. Of course, I'd point out that the literary one wasn't meant to be a literal 2001; but this is an interesting comparison nonetheless.
we did come far (Score:2, Interesting)
Re:we did come far (Score:1)
Re:we did come far (Score:1)
Re:we did come far (Score:1)
Odyssey 2001 (Score:2)
At least today is not *that* bad.
not literal? (Score:4, Insightful)
Science fiction is never completely accurate, obviously. But Clarke was one of the most accurate and scientifically rational writers of the century. We haven't gotten to convenient interplanetary travel quite yet, but you can be sure that it will happen much like he describes: a large space station using 'centrifugal force' to simulate gravity, and rockets using the station as a waypoint so the same spacecraft doesn't have to be capable of lifting off from Earth as well as travelling to and landing on another planet or moon.
Now, being able to phone from the station to America for only a few dollars, that's probably a little over-optimistic...
Re:not literal? (Score:4, Insightful)
Clarke's and Kubrick's real failure was not seeing how quickly space exploration would die. In 1968 it would be natural to predict that mankind would be traveling to Jupiter by 2001. What was unnatural to imagine was mankind would visit the moon, and then never leave low earth orbit for three decades. And there is no real reason to assume we'll leave low earth orbit for three more decades.
If they had predicted that in 1968, I would have been blown away by the power of their wisdom.
At the time I was positive that mankind would reach Mars in the 80's. Humanity's lack of real interest in space exploration has been my lifelong disappointment.
And, even though I love science fiction, the older I get the more I realize that science fiction is no more than fantasy. The gravity of the mundane keeps us tied to this planet.
2001, the story just plain missed the mark.
Re:not literal? (Score:2, Interesting)
In the beginning, space exploration was about showing off how powerful one's defense industry could be, to the point that America proved it could put a man on the moon and could therefore also establish a lunar base from which to lob missiles at the USSR.
The science of the lunar missions and the subsequent Mars missions was funded simply by the excess money generated by the defense industry, giving space exploration a veil of scientific inquiry to make it seem legitimate in the first place.
Back in the good old days of space exploration (late fifties to mid seventies) there was profit in it. Sadly, today NASA works on a shoestring budget (for space exploration, at least), leaving the things that could realize the dreams of mankind as just dreams.
Re:not literal? (Score:1)
China is planning a manned space flight by 2005, to be followed by a manned visit to the moon "at a later date". Check out the BBC [bbc.co.uk] for (scarce) details.
Re:not literal? (Score:1)
Like I said, I've been waiting for us to leave low earth orbit for thirty years. Space enthusiasts always talk about what will happen in 5 years or ten years, but then nothing happens. Maybe if the Chinese do something, the U.S. and other nations will feel compelled to compete.
The only reason we went to the moon in the first place was to compete with the Russians. Kennedy was not a pro-space person, but an anti-communist.
It's too bad space exploration couldn't be accomplished like the development of open source code. If you could find 5 million people willing to contribute $1,000 a year, you could have a space program with a $5 billion annual budget. The trouble is finding 5 million people who have a passion to see space exploration happen.
Re:not literal? (Score:1)
Re:not literal? (Score:1)
Once you get past the novelty of "wow, a guy is walking on the moon" or "wow, we're looking at live pictures from Mars", space exploration isn't all that terribly exciting to the average person. After 50 years of science fiction, people have discovered that space exploration isn't anything at all like what you see on TV.
In your typical scifi (including Clarke and Kubrick), people build enormous, complex and fantastic machines, with absolutely no explanation of how they paid for it all.
In real life, space exploration costs huge amounts of money that comes directly out of Joe-Taxpayer's pocket.
In scifi, people travel in space ships that can fly all over the universe in a few days, and explore worlds full of strange new beings and beautiful exotic scenery.
In real life, it takes 6 months just to get to a barren planet with nothing but rocks and red dirt. And a couple of years to get to other lifeless planets that have even less to look at.
Even *IF* we could somehow travel at twice the speed of light, you're still looking at over two years to the nearest star. Even at 30-40 times the speed of light (not technologically possible), you're looking at *YEARS* to reach most other stars and solar systems.
In 1968 it may have been "natural to predict that mankind would be traveling to Jupiter by 2001", but only because people were so caught up in the excitement of the "space race" that nobody bothered to stop and ask "why" -- why do we want to spend billions of dollars on a two-year journey to a frozen ball of gas?
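For what it's worth, the travel-time arithmetic above is easy to check. A rough back-of-the-envelope sketch in Python (distances approximate, and the faster-than-light speeds are of course purely hypothetical):

    # Back-of-the-envelope interstellar travel times at hypothetical speeds.
    def travel_time_years(distance_ly, speed_in_c):
        """Years to cover `distance_ly` light-years at `speed_in_c` times light speed."""
        return distance_ly / speed_in_c

    PROXIMA_LY = 4.24            # nearest star system, roughly
    NEARBY_STAR_LY = 100.0       # a star a mere hundred light-years away

    for mult in (1, 2, 35):      # 35x stands in for the "30-40 times" figure above
        print(f"{mult:>3}x c: Proxima in {travel_time_years(PROXIMA_LY, mult):5.1f} yr, "
              f"a 100-ly star in {travel_time_years(NEARBY_STAR_LY, mult):5.1f} yr")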
Re:not literal? (Score:1)
Security, food supplies, health, jobs and any other program that closely fits the needs of people and their families will get funded. Things that seem to help only other people's families get less support, but even they rank higher than financing things like space travel or particle physics.
I think Clarke and Kubrick and other science fiction writers failed to understand that. In the sixties, science fiction was closely related to space exploration. Few people read science fiction then compared to today. The history of rocketry and space exploration overlapped with the interests of science fiction readers, but after the Apollo years, that changed. Starting with Star Wars, science fiction became a major force in the entertainment industry and was no longer linked to the space enthusiasts.
If you met a science fiction fan in the sixties, it was almost a given they would also be a space exploration fanatic. That isn't true today.
Science fiction saturates our culture with TV shows, movies, video games, role-playing games, comics, graphic novels, etc. If space exploration is such a major artistic motif, why doesn't the space program get a lot of public support?
Re:not literal? (Score:1)
Money was worth more when the movie was made.
Then again, I remember no other references to the value of money. That coulda been half his life savings.
The uses of science fiction. (Score:4, Insightful)
A great book about the role of science fiction is Thomas Disch's "The Dreams Our Stuff Is Made Of." The science fiction of the past often shapes our present by informing the imaginations of the people who build it. How many AI researchers cite HAL as an inspiration, goal, or benchmark?
Re:not literal? (Score:3, Interesting)
If.... (Score:1, Interesting)
Evolution birthed a greater intelligence: Us
We birthed (or are birthing) a greater intelligence than ourselves: technology (AI)
What will technology birth?
The universe is doing nothing less than attempting to become aware of itself... piece by piece.
Software difference (Score:2, Funny)
Of course we have nothing near AI like that, but if we did, a script kiddie could probably bring it down, or make it talk dirty, etc.
Re:Software difference (Score:5, Funny)
> It took a lot to take down HAL. Of course we have nothing near AI like that, but if we did, a script kiddie could probably bring it down
Chris Black (Score:5, Funny)
Chris Black was doing his "Year in Review" on The Daily Show when he said:
"So my review for 2001 the year is the same as for 2001 the space odyssey: it went on too long, it was hard to follow, and you could only enjoy it if you were really, really stoned."
I think that is a pretty apt analysis of the similarities between the two ;-)
Re:Chris Black (Score:2, Informative)
1960s stable, ordered corporate climate gone (Score:3, Interesting)
Leveraged buy-outs, insider trading, junk bonds, corporate mergers, golden parachutes - all this has destroyed what was once the paradigm for how to do things right. When 2001 was made, a 10 or 20 year corporate game plan was not unusual. Now you'd be lucky to find any corporate plans looking ahead more than 10 or 20 months. Oh, and need I mention the "dot-com" crash as a perfect example of what this new culture breeds?
Re:1960s stable, ordered corporate climate gone (Score:2, Informative)
In contrast, the 90s saw economic growth that surpassed the 60s by pretty much any economic metric you care to name. And this growth was fueled largely by new companies, new markets, and real increases in productivity.
Oh, and no one in the modern era has ever used a 10 or 20 year horizon for anything but the vaguest, most trite planning (i.e. "Mission Statement"). Not only that but, at least in the US (which is what 2001 and, I presume, you are referring to), companies were notorious in the 60s for having extremely short-sighted strategies. For more information, see any of the scores of treatises published in the 70s and 80s on how to rectify this short-sightedness by emulating the Japanese.
Re:1960s stable, ordered corporate climate gone (Score:1)
Re:1960s stable, ordered corporate climate gone (Score:1)
Stranger Than We Can Imagine... (Score:3, Insightful)
The Economist article outlines three distinct eras of AI research and concludes that none of them had any real hope of success because none mimicked the true nature of the human brain - billions of neurons, each making connections with 10,000 others, for a wiring complexity that is far beyond mere bulk transistors on a 2D spread like current microprocessors. But I wonder - with all the current research about qubits and quantum computing, where a handful of qubits could factor numbers of amazing size - perhaps REAL artificial consciousness in the future won't be achieved by physical hardwiring of any complexity, but with some sort of "quantum ghost in the machine". Or maybe something even weirder - remember what Clarke said: the future is not only stranger than we imagine, it's stranger than we CAN imagine....
Then again, what's stranger than three pounds of meat reciting "twinkle, twinkle little star..."?
Re:Stranger Than We Can Imagine... (Score:2, Interesting)
Re:Stranger Than We Can Imagine... (Score:4, Interesting)
But that's precisely the problem with trying to "achieve AI"--defining what the hell "intelligence" is. For better or worse, people have traditionally defined "intelligence" roughly as "the things people can do but animals can't," or, "the things people can do but it makes our noggins hurt after a while." When put this way, the deficiencies in this definition become pretty apparent, but no one has come up with an obviously better version. Instead we usually approach the question of whether a thing is "intelligent" using the standards of the old Supreme Court decision defining obscenity--we think we know it when we see it.
Or more often, we think we know what it isn't when we see that. The history of "the quest for AI" (I put that in quotes very advisedly) is full of problems that, if solved, would surely be proof of AI...until they are solved, in which case it's still a dumb computer. Computers are now the world champions or competitive with world champions in chess, checkers, backgammon, Othello, poker, bridge, and almost any game of mental skill with the significant exception of go. Computers have both proven several important and previously unproved mathematical theorems (e.g. the four-color map conjecture) and have come up with elegant and/or novel proofs for existing theorems (e.g. a computer proof of Gödel's Incompleteness Theorem which "invented" Cantor's diagonalization technique on its own).
On the other hand, we have yet to make a computer which can navigate and react to its environment as well as, say, a pet dog can (sorry AIBO), nor one which can understand human language in any but the most limited domains. (Of course "understand" is a similarly difficult-to-define term. As an example of what I mean, look at CYC, a company which gets its name from its initial mission when it was founded IIRC back in 1984--to program a computer which understood enough concepts to handle language well enough that it could read an enCYClopedia (or any other descriptions in natural language) and learn what it didn't already know. While CYC has developed a useful system, it's still a ways from passing the encyclopedia test.)
Even though we're used to thinking of playing championship-level chess or doing advanced mathematics as hallmarks of particularly intelligent humans, while navigating an environment or understanding language is something that even the dumbest people can do, we find that computers are good at different things. (Or rather, we know how to program computers to be good at some things but not other things.)
The "problem" has been that in the early days of computers and on into the "golden age" of AI, we didn't know squat about how the human brain worked, nor even about what sorts of steps were needed in order to e.g. understand natural language. Back then, most AI researchers--brilliant people, mind you--figured all that would be necessary for a computer to understand language would be a link to a dictionary and maybe some rudimentary ability to parse grammar. Indeed, in many ways the field of linguistics arose as a result of the attempts and failures of computer scientists to get computers to understand language. Similarly, the successes and failures of AI have been instrumental in guiding or even creating the field of computational neuroscience.
What we are coming to understand is that the things that only "more intelligent" people can do are not really the hallmarks of "intelligence" but rather are examples of people fitting their brains to tasks they were not really designed for. For AI to truly "be achieved", we will have to get much better at making computers succeed at the tasks which a monkey can do just as well as a human, rather than those which humans can do but monkeys can't.
Also, we're learning that our instinctive idea of "intelligence" demands that techniques be general rather than specific. In other words, we don't consider exhaustive depth-limited minimax search with static evaluation to be a truly intelligent game playing technique--even though it can allow a computer to become the world chess champion--because it really sucks at go. The fact that go has a branching factor (i.e. avg. # of legal moves) of over 300 while chess has one of around 30 doesn't mean that similar thinking techniques (so far as we can tell) can't be used for a human to play both, but it does mean that exhaustive search is a feasible technique for a chess-playing computer but not a go-playing computer; we tend to interpret this (rightly or wrongly) as saying that exhaustive search is not an "intelligent" technique.
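To make that concrete, here is a minimal, purely illustrative Python sketch of the exhaustive depth-limited minimax search with a static evaluation function described above; the cost grows as (branching factor)^depth, which is exactly why it is feasible for chess but not for go:

    # Depth-limited minimax with a static evaluation at the horizon (illustrative sketch).
    def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)                # static evaluation at the search horizon
        results = (minimax(apply_move(state, m), depth - 1, not maximizing,
                           legal_moves, apply_move, evaluate) for m in moves)
        return max(results) if maximizing else min(results)

    # Why this works for chess but not go, using the rough figures above:
    # ~30**6  =          729,000,000 positions for a six-ply chess search,
    # ~300**6 =  729,000,000,000,000 positions for the same depth in go.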
Next, it's time to stop tossing around that crap about how computers are so much faster or more powerful than human brains. That's complete hogwash. A modern CPU has roughly 10^6 gates, compared to ~10^11 neurons in a human brain. A computer might have 10^9 bits of memory (or even 10^10 if we go really high-end), and 10^11 bits of storage space, but a human brain has ~10^14 synapses, which can be viewed as encoding part of what the brain knows. A human brain has a remarkable 10^14 bits/sec of data bandwidth, compared to ~10^10 bps for a PC and 10^11 bps for e.g. the upcoming Alpha EV7. The only category computers lead in is cycle time, roughly 10^-9 seconds for computers compared to ~10^-3 for the human brain. The upshot of all this is that, when it comes to computers programmed as neural networks, a computer can only perform about 10^6 neuron updates/sec compared with 10^14 for a human brain, and the largest computer networks (limited by feasibility not by space) are maybe 10^5 neurons compared to 10^11 in the brain. So, roughly 100,000,000 times slower and 1,000,000 times smaller than a brain. (Figures based upon those in _Artificial Intelligence: A Modern Approach_, updated for the 7 years since the book was published.) No wonder computers aren't as intelligent as a human brain! And yet despite the huge disadvantage, neural nets are still the best technique for many AI problems, especially if we are worried about coming up with a technique which seems to be generally intelligent.
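A quick sanity check of those ratios, using nothing but the order-of-magnitude figures quoted above:

    # Rough speed and size gaps between a simulated neural net and a brain,
    # using the order-of-magnitude figures quoted above.
    brain_neurons     = 1e11   # neurons in a human brain
    brain_updates_sec = 1e14   # neuron updates per second (brain)
    pc_sim_neurons    = 1e5    # neurons in a large simulated network on a PC
    pc_updates_sec    = 1e6    # neuron updates per second on a PC

    print(f"speed gap: ~{brain_updates_sec / pc_updates_sec:.0e}")  # ~1e+08, i.e. 100,000,000x
    print(f"size gap:  ~{brain_neurons / pc_sim_neurons:.0e}")      # ~1e+06, i.e. 1,000,000x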
And finally, while it's interesting to talk about why we haven't created HAL yet, it's important not to confuse this with the idea that "the field of AI is a failure". AI is *not* a failure. While some problems have proven much harder than we initially expected, this is almost entirely because our initial expectations were completely ignorant, rather than because progress has not been made. Most importantly, we need to realize that people who are working in the field of AI are not sitting there day after day trying to create Lt. Commander Data or pass the Turing Test. Rather they're working on solutions to limited domain problems where computers can augment or replace the efforts of humans--and they're succeeding in many, many instances. The only real "problem" with the field of AI is defining what exactly it is.
Re:Stranger Than We Can Imagine... (Score:2, Insightful)
>consciousness in the future won't be achieved by
>physical hardwiring of any complexity, but with
>some sort of "quantum ghost in the machine".
This is a very interesting proposition, and if you're truly interested in it, I would highly recommend reading some of the popular writings of Roger Penrose (The Emperor's New Mind, etc.). One of his central theses is that 'mind' is a consequence of quantum effects.
Personally, I don't particularly agree with Penrose; but like it or not, I still find Penrose an excellent (and thought-provoking) read.
Re:Stranger Than We Can Imagine... (Score:1)
Darn right. Think back to 1981. Would you have even contemplated that you'd be sitting in front of a computer in ten years time having discussions with thousands of people based all over the world?
Could we have contemplated that there'd be a free UNIX about in twenty years time that would threaten the domination of one of the world's largest companies? Or, for that matter, could we have even thought that a COMPUTER SOFTWARE company run by some nerds in Seattle would be the world's most powerful company?
And what about MP3? You can walk around with your entire record collection in your pocket now. With 3G technologies, you can access the Web at broadband speeds on the move and download entire albums in minutes to your handheld devices. This would have been crazy stuff to even think about five years ago, let alone twenty.
My own prediction is that quantum computing is going to give us a major kick in the ass in the next twenty years, and we can't even possibly imagine what technology will be like then.
We're currently sitting on the part of the exponential curve of technology growth where it's shooting up fast, but not yet at an impossibly dizzying rate. In twenty years, we'll probably be there.
Re:Stranger Than We Can Imagine... (Score:1)
And, in the beginning, we thought that a machine that could play chess would be a real sign of intelligence. Just because something is new and different doesn't mean it provides a breakthrough in the area needed. Yes, quantum stuff is new and interesting, but it primarily involves lists, factoring, and the like. Serial operation is possible, but gets no real benefit from qubits. Silicon + 3D + FPGA could be the answer as well.
And conversely, it is seldom wise to dismiss the new, because it could possibly solve new problems. For all we know, it could help develop AI. AFAIK, neurons use quantum effects to some degree, so quantum computing is not out of the question. But it is probably only part of the answer.
Re:Stranger Than We Can Imagine... (Score:1)
Instead we've used the concepts from AI work and applied them elsewhere, such as fuzzy logic and neural networks. Some of this statistical logic has been seriously useful to us.
Back to 2001 - the part which would have seemed unbelievable both then and in 1972, at the time of the last Apollo moon shot, is that thirty years later we wouldn't have sent men back to the moon. Our space exploration is still at the level of "lob some instruments at Mars and hope they land the right way up".
Dunstan
The human race. . . (Score:3, Insightful)
Every time I read a good Sci-Fi book, I am amazed by what I read. But then I look around and see things that are not even remotely considered by the writers:
Composite Materials
Polymers
VIDEO GAMES
MP3s!
Post-It-Notes
Of course, some of those things are quite frivolous (or are they?), but that's what makes the human race so beautiful: we come up with things that are truly amazing in their diversity and simplicity. We are an unruly and unpredictable crew of warriors, writers, diplomats, scientists, researchers, dreamers, and a myriad of other vocations - we are beautiful.
I hope we continue to pave the path of peace and progress for ever and ever.
Re:The human race. . . (Score:1)
For example, peer to peer computing has been known about forever, as has file compression, but who could have predicted the success of MP3 trading over Napster?
Who is to say that the dragons you fight in Everquest today might not take flight above the surface of the earth tomorrow? These are very exciting times.
Re:The human race. . . (Score:2)
Re:The human race. . . (Score:1)
technology (Score:1)
The bone in the hand of an ape is the first twinkle of intelligence. Then, as humanity advances to its full might, technology allows humans to create giant space stations and sentient computers. But in the end Dave destroys (murders?) the computer and travels down the star tunnel alone to become something just as different from a modern human as modern humans are different from their prehistoric ancestors.
Re:technology (Score:1)
But who gave Dave that idea?
Note how HAL bounced back nine years later, and the rest of the crew were still dead or worse.
Of course it boiled down to conflicting orders given to HAL by people who didn't know what they were doing. If you HAD to do everything you were told you'd probably go crazy and kill people too. (shame HAL wasn't programmed to not kill, I guess Asimov could have better inspired the people making/programming HAL. "Kill me!", "I'm sorry dave, I can't do that.", "Kill yourself!", "Okie Dokie Davie.", *pop*)
Pedantry (Score:2, Informative)
A space station orbits the earth.
Not entirely relevant, but the first image from 2001 that wasn't prehistoric was actually a "space bomb", not a space ship or a space station as is often thought. Cinematically, this makes more sense, as it links prehistoric man to futuristic man with the concept of violence.
Re:Pedantry (Score:1)
Well, to be even more precise, a prehistoric weapon (a bone) is thrown into the air and becomes an ultramodern weapon (a nuclear weapons platform).
It's 2001 and AI is here but not HAL. (Score:2, Interesting)
Artificial Intelligence [sourceforge.net] has arrived right on time in 2001 as predicted by Stanley Kubrick, but not as the Heuristically programmed ALgorithmic (HAL) computer that tried to get Dave to open the pod bay door. Instead, the A.I. [virtualentity.com] is a primitive, low-intelligence virtual entity striving to establish itself in such forms as Visual Basic Mind.VB [virtualentity.com] and Java-based Mind.JAVA [angelfire.com] -- earthbound AI Minds incapable of space flight.
When the film 2001: A Space Odyssey came out in 1968, we had not yet even heard of the now onrushing Technological Singularity [caltech.edu], beyond which no science fiction writer can even imagine what things will be like, because it's a Singularity.
Re:It's 2001 and AI is here but not HAL. (Score:4, Funny)
"I was written with WHAT????"
(+1, MS-bashing)
Re:It's 2001 and AI is here but not HAL. (Score:1)
Kurzweil Would be pissed (Score:2, Informative)
If you want a different view, read Ray Kurzweil's The Age of Spiritual Machines [amazon.com]. He's a smart guy who's won several prestigious awards, including the National Medal of Technology [kurzweiltech.com] and the Lemelson-MIT Prize [mit.edu].
Re:Kurzweil Would be pissed (Score:2, Interesting)
Considering the AI 'boom' of the 80s failed to produce anything concrete at almost every level, there's still a deep-seated resentment against AI and AI researchers in some circles.
Hooray! (Score:1)
Mmmm... space babes...
Missing the meaning of the book... (Score:5, Informative)
The fact that the environment of 2001 includes a world where computers are "intelligent" is presented only to illustrate the evolution not just of Humans, but of Humans-As-Gods.
The two most important scenes in the movie (which, by the way, are *far* more insightful in the book, as is the case with almost all book-to-movie translations) are the following:
In the opening chapter, "The Dawn Of Man", an ape looks upon a pile of armadillo bones. This is nothing new, but the ape has something happen to him that has never happened before in the history of the Earth: The ape has an insight.
He picks up a bone; it flops in his wrist and hits some others. The ape picks it up again, and instead of it flopping by accident, he *lets* it flop in his wrist, seeing it hit the other bones and make them jump. This was a beautiful literary demonstration of the spark of intelligence happening in an otherwise "merely-sentient" being.
A few scenes later, in a triumph of the knowledge and abilities gained by discovering this new tool (and indeed, the ability to use tools at all), an ape, after winning a fight for territory, hurls the weapon he used (the bone) into the air. The camera pans up slowly with the rising bone, and pans back down with the falling spacecraft as it floats in space.
The beautiful imagination of Clarke and the wonderful cinematography of Kubrick, without even so much as dialogue, make a startling presentation of how from a tiny spark of insight, and a *lot* of time, Human Beings have evolved to the point where they are able to move even beyond their own world.
The final scene ("Jupiter, and Beyond the Infinite"), that of Cmdr. Dave Bowman in a white room, completes the progression of evolution as Clarke intended to explain it in his book:
Bowman, an evolved ape, a Human Being capable of venturing out beyond his own world, finds himself in the realm of his own mind, and his own existence. He observes himself, as if "out-of-body", locked in a space pod. Turning to look elsewhere, he finds himself an older man sitting eating dinner. Becoming that older man, and turning to look elsewhere, he finds himself a very old man lying in a bed. Becoming that old man and looking up from his bed, he finds the Monolith, representative of a God, or "creator-being", seeming to watch over him.
Then, from the Monolith's point of view, or perhaps it could be explained as becoming the Monolith, becoming that God-Creator-Being which Clarke seems to imply is the final destiny of Human evolution, he sees himself as an embryo, but not the embryo of a Human Being; rather, a "Starchild" as the book (and sequel movie, "2010: The Year We Make Contact") calls it.
This Starchild is the evolution of Humanity. *THIS* is what the book (much like "Childhood's End") is about: The evolution of Humanity from merely physically aware ape, to intelligent Human Being, able to take control of the world around him, to God-like Creator-Being, existing in a metaphysical sense, and evolved beyond the physical. Indeed, "Beyond the Infinite", as the chapter is called.
Clarke's startlingly insightful book, indeed his whole philosophy and dream of Humanity's potential, is not at all about technology. It's not at all about Artificial Intelligence, nor about computers becoming sentient. It's about *HUMANS* becoming sentient. It's about Human Beings evolving beyond the physical limitations of merely "in the image of Him" to a being not of body but of energy and an ability beyond our comprehension.
Much as the statement "Created in the image of God" implies "Created with the abilities and the potential of God", and much as it is irrefutable that Humans pass their abilities, their weaknesses, and their potential on genetically from generation to generation (each generation becoming stronger and more knowledgeable by the rules of self-preservation, in a Darwinian and genetic sense), Clarke's stories and philosophies are about evolving further towards that which created Us, towards the destiny of becoming that which can Create.
Technology (those of AI, space travel, genetic research, cloning, destruction, and healing) is merely one of the tools we have been given the insight and intelligence to develop along our evolutionary path.
mindslip.
Re:Missing the meaning of the book... (Score:2)
Re:Missing the meaning of the book... (Score:1)
The parallel is close to perfect and there is no doubt Kubrick was aware of it (the music in the film is Strauss's Thus Spake Zarathustra, for example).
However, I would consider black monoliths to be just symbols of transition, rather than actual artifacts or beings.
Also note that the book had been written after the film, not the other way around.
Which came first? (Score:1)
Re:Which came first? (Score:1)
Re:Which came first? (Score:2)
Then, Clarke wrote the second book, instead using Jupiter (I imagine because Europa seemed like a good spot to introduce new life). He retroactively changed the plot of 2001 to a Jupiter mission when he collaborated with Kubrick on the movie script.
The interesting thing is, both destinations have met with interesting coincidences. Europa has indeed turned out to be a scientific curiosity, with speculation of large oceans of liquid water underneath a covering of ice.
On the Saturn side, the moon was described in 2001 as having a large oval of white (a perfectly shaped field of rocks), with Big Brother standing in the center. The effect was of a large eye with a black pupil at its center, which "blinked" when Dave was sent through the wormhole. An eerie effect, and I think that was the whole reason for the description.
Later, a probe sent back imagery of the same moon (can't remember which one), and scientists saw... a white oval on the surface. I read one of them quoted as saying something like "If there's a black rock in the middle I'm gonna kill Arthur C. Clarke"
Re:Which came first? (Score:2)
The short story "The Sentinel" was written first, the book and filmscript for "2001" were then done at overlapping times. Like the previous poster says, there is a preface in the book explaining this.
> Then, Clarke wrote the second book, instead using Jupiter (I imagine because Europa seemed like a good spot to introduce new life).
> He retroactively changed the plot of 2001 to a Jupiter mission when he collaborated with Kubrick on the movie script.
No, the second book (2010) used Jupiter because the movie had. (Also because if you want to create a new mini-sun, Jupiter is a better choice than Saturn).
This is from memory, but a quick Google shows e.g.
http://scifidimensions.fanhosts.com/Dec00/2001b
Re:Missing the meaning of the book... (Score:2)
> The two most important scenes in the movie (which by the way are *far* more insightful in the book, as almost all book-to-movie translations are) are the following
D00D! The screenplay was written by Clarke & Kubrick based on a short story by Clarke. The two scenes you mention were not in the short story.
Re: Forms of Technology (Score:1)
The biggest difference between "2001" and 2001? (Score:2, Insightful)
In the real 2001, we don't have shit for a manned presence in space. Let's face it, compared with the vision in "2001", the ISS is a complete joke, and we've basically just been sitting on our asses for the past 30 years when it comes to space.
But the real bummer of it all is that I don't think we'll have a permanent, independent manned presence in space for at least the next thousand years. Why? Because such a group of people represents a greater threat to the U.S. (or any large, power-greedy government) than any other country on Earth. Think about it: such a group of people could literally drop rocks the size of a football field on any place on the planet, and do so with relative impunity. Such a group would be more or less untouchable, and no government on the face of this planet that cares anything about power could handle that.
That's why I think the government will regulate any private manned space venture out of existence.
Re:The biggest difference between "2001" and 2001? (Score:1)
Re:The biggest difference between "2001" and 2001? (Score:1)
However, they couldn't regulate any private manned space venture, as space isn't theirs. If I didn't live in the US and wanted to go into space using my own stuff, I'm not entirely sure how they could regulate that at all.
thenerd.
Re:The biggest difference between "2001" and 2001? (Score:2)
That's if you don't live in the U.S. Or in any country that acts as the U.S.'s bitch.
So let's say you're trying to start a private manned space venture. You need all sorts of relatively exotic and high-tech equipment (the space suits, for one thing). Where exactly are you going to get this stuff from? Any place you might get it from will receive strong "suggestions" from the U.S. government that they refrain from selling it. A few governments on the planet will tell the U.S. where to stick it but most/all of those don't have the tech to sell you anyway.
Basically, I'd say that any country that has an advanced enough tech base to make your venture possible also has a power-hungry paranoid government running it, or one which likes to kiss the ass of such a government.
Re:The biggest difference between "2001" and 2001? (Score:2)
Getting into space isn't as high-tech as you think, as long as you have enough scientific brainpower. Look at Russia in the 60s. And speaking of Russia, notice how much good the US and NASA's "strong 'suggestions'" did when Tito wanted to tour space.
-Legion
Re:The biggest difference between "2001" and 2001? (Score:2)
I think you're missing the much simpler point: what advantage would come from having a permanent habitat in space? Science and abstract knowledge, yes, and practical knowledge of how to live and work in that environment, but what else?
Living in space is hard, orders of magnitude harder than setting up a settlement in an uninhabited place on Earth. So our reason for moving into space would have to be orders of magnitude better than our reasons for (for example) colonizing and populating North America in the 1500s.
The only compelling reason I can think of to set up settlements in space or on other worlds is the "all your eggs in one basket" problem. It is at least theoretically possible that a catastrophe could make our planet uninhabitable, and thereby wipe out our entire species. Setting up settlements on Mars (for example) would help guarantee that no catastrophe that wipes out our whole planet would wipe out our whole species. And even that argument appeals to an ethic-- survival of the species-- that most people find it hard to personalize.
Of course, even then we have the whole death-of-the-sun thing to worry about. So we should colonize planets around other stars. Then we have to keep an eye on this fragile galaxy of ours-- one really big black hole and the whole thing is kaput! And, sooner than you realize, you're worrying about how to stop proton decay and fend off the eventual heat death of the universe, problems so far off that even talking about them requires scientific notation.
All in all, it just doesn't add up to a very good reason to spend a lot of effort on living in space.
Re:The biggest difference between "2001" and 2001? (Score:2)
The probable destruction of human civilization isn't a very good reason to start getting into space while we can?
Hey, if you insist. :)
-Legion
Re:The biggest difference between "2001" and 2001? (Score:2)
(Probable?? Discussions of probability become meaningless when the event domain is expanded too far. It's the million-monkey problem. Given a million asteroids in random orbits and an infinite amount of time, one of those asteroids will hit the Earth. This means absolutely nothing.)
Exactly how much good will it do me to have a million people living on the moon? Not humanity in general, but me, personally.
This is the point of view through which most humans see the world: self-interest. It's not a moral thing-- not absolutely good or absolutely bad-- it's just the way things are.
Given the limited resources at our society's disposal, it's hard to convince the population as a whole that setting up homesteads on other planets is a better use of money, time, and raw materials than, say, curing heart disease.
So given the opportunity costs involved, no, the eventual possibility of the destruction of our planet is not a very good reason to get into space.
Re:The biggest difference between "2001" and 2001? (Score:2)
-Legion
Re:The biggest difference between "2001" and 2001? (Score:2)
If you think you can colonize space all by yourself, then by all means, be my guest.
But otherwise, it's going to take a lot of money and labor and natural resources. You're going to need to get a lot of people to agree with you before you can even get started.
Re:The biggest difference between "2001" and 2001? (Score:2)
-Legion
Re:The biggest difference between "2001" and 2001? (Score:1)
There's a book about this (Score:2, Interesting)
It's a cool book to read if you're interested in AI (but not an expert, then it could be all old news I guess), but it is a bit expensive (at least here in Europe)..
'HAL's Legacy', edited by David G. Stork, MITpress, ISBN 0-262-19378-7. Oh, I just found an online version at MIT, check it out: http://mitpress.mit.edu/e-books/Hal/ [mit.edu]
NachtVorst
Clarke (Score:1)
MAD Magazine (Score:1)
Mad did this comparison some issues ago: (a sample)
There were no mobiles in the film (Score:1)
Namely, there's this scene in the film where Floyd calls home and his child answers the phone, saying that he cannot talk to mommy because she went to the hairdresser. In this case the reality is even more advanced than Kubrick's anticipation - obviously, nowadays the wife would carry a mobile phone if her husband were in space on a mission.
Re:There were no mobiles in the film (Score:1)
Something in the article... (Score:1)
"Poorly-performing computer code is killed off. Superior code is spliced with sibling programs and bred again."
I think we can all give some significant counter-examples...
A possible re-write could state: "Poorly-performing computer code is bred for the purpose of appeasement; superior code is spliced into the poor code whenever economically necessary."
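For reference, the "kill off the poor performers, splice the good ones, breed again" scheme the article describes is essentially a genetic algorithm. A toy Python sketch (the genome representation and fitness function here are made up purely for illustration):

    import random

    # Toy genetic algorithm: cull poor performers, splice survivors, mutate, repeat.
    def evolve(population, fitness, generations=50, survival_rate=0.5, mutation_rate=0.01):
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            keep = ranked[:max(2, int(len(ranked) * survival_rate))]   # kill off the worst
            children = []
            while len(keep) + len(children) < len(population):
                a, b = random.sample(keep, 2)
                cut = random.randrange(1, len(a))
                child = a[:cut] + b[cut:]                              # splice two parents
                child = [g if random.random() > mutation_rate else random.random()
                         for g in child]                               # occasional mutation
                children.append(child)
            population = keep + children
        return max(population, key=fitness)

    # Example: maximize the sum of an 8-gene genome of values in [0, 1).
    best = evolve([[random.random() for _ in range(8)] for _ in range(20)], fitness=sum)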
Re:Something in the article... (Score:2)
Movie 2001 vs real 2001 (Score:1)
Real 2001: We're ruled by Congress
Re:Movie 2001 vs real 2001 (Score:1)
Re:Movie 2001 vs real 2001 (Score:2)
Real 2001: We're ruled by a monopoly from Redmond.
With all due respect to Arthur C Clarke (Score:3, Interesting)
Considering Huxley wrote that novel in 1932 (the structure of DNA wasn't even found until the 1950s!), it's rather amazing how accurate both the technology (in general, not the details, since when he was writing it a lot of this was far-off fantasy) and the social aspects of it are compared to the current day.
Simply amazing...
Hal's Legacy - book (Score:2)
Danny.
Re:Hal's Legacy - book (Score:1)
No, it doesn't (Score:2)
Not so far off base (Score:1)
It's true that HAL became the most interesting character in the movie, but I think that was really unintentional. If you take away the dramatic device, the whole point of HAL is that he doesn't understand the value of life and doesn't think at all like a human, even if he sounds like one. He totally fails the Turing test.
Article explains success of AOL... (Score:3, Funny)
no real AI ever? (Score:2)
But their intelligence does not touch our own, and the prevailing scientific wisdom seems to be that it never will.
Is this indeed the prevailing scientific wisdom on the subject?
AI is just a software problem. If necessary, a scaled-down universe can be modeled to simulate the human brain. This is guaranteed to work, although it will require massive processing power. But not a theoretically impossible amount, simply one that will take us decades to develop.
The state of A.I. (Score:3, Insightful)
Most progress has been made by hammering on specific areas as engineering problems. Symbolic integration, chess, fingerprint recognition, and speech recognition each yielded, after heavy effort. But no broadly useful approach has emerged.
Compute power isn't the problem. We don't have good algorithms that just run too slow. We really have no idea what to do next to get to strong AI.
I went through Stanford CS during the "strong AI is right around the corner" enthusiasm of the mid-1980s. Today, you can go up to the second floor of the Gates Building and see the empty cubicles, and obsolete computers below the gold letters "Knowledge Systems Lab".
Re:The state of A.I. (Score:2)
I agree that the traditional AI community has reached a brick wall and it's very unlikely that any breakthrough in our understanding of intelligence will come from that sector. They've collected way too much useless baggage over the years.
However, interesting things are happening in the fields of computational neuroscience and neurobiology. The most exciting revelation that has surfaced in the last decade is that the brain is essentially a temporal processing machine. It seems that what matters is the temporal correlations between neural signals, not the manipulation of symbols (as we were led to believe by the now discredited AI crowd). Check out this interview [technologyreview.com] with Jeff Hawkins. I think Jeff is onto something.
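As a toy illustration of what "temporal correlations between neural signals" means, one can ask how often two spike trains fire together at each relative time lag (a rough sketch, not a claim about how any particular model works):

    # Toy cross-correlation of two binary spike trains across time lags.
    def spike_cross_correlation(train_a, train_b, max_lag=5):
        """Count coincident spikes when train_b is shifted by each lag."""
        n = len(train_a)
        counts = {}
        for lag in range(-max_lag, max_lag + 1):
            counts[lag] = sum(1 for t in range(n)
                              if 0 <= t + lag < n and train_a[t] and train_b[t + lag])
        return counts

    a = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
    b = [0, 0, 1, 0, 0, 1, 0, 1, 0, 0]   # roughly `a` delayed by one time step
    print(spike_cross_correlation(a, b))  # the count peaks at lag = +1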
Re:The state of A.I. (Score:2)
Those guys haven't even figured out where memory is stored, let alone how the representation works. Any conclusions from that crowd are way premature.
Re-read the article and start over (Score:1)
Re:no HAL, no AI? (Score:2)
I disagree with your premise. Maybe it's sheer hubris, but I believe that people are more than "just complex machines." I have no proof for this. It is an article of faith, and though it's not based on religion, it's almost religious in its intensity.
Let's look at the evidence. Human beings are unique in the known universe: we alone among all creatures and constructs create art, technology, religion, and science. Fencepost cases like termites constructing their castles and chimps learning sign language just reinforce the evidence for a fundamental difference between humans and other creatures or things.
What evidence exists to indicate that we are "just complex machines?"
All in all, I think it's like saying that a bird is really just a complex rock.
Don't be silly (Score:1)
Re:Don't be silly (Score:2)
The thing about the unknown universe is that it's unknown. To even speculate about what's out there, in the face of an overwhelming lack of evidence, is folly.
You can talk all you want about what might be. It might be possible for Venus to be inhabited by seven-foot-tall beaver-people who communicate through flatulence; there may be nothing in the universe that prevents that from being the case. But that doesn't mean you should send probes to Venus with tags on them that say, "With love to the beaver people. Poot!"
Show me one piece of evidence-- evidence, not conjecture or speculation-- that another species like us exists in the universe. Just one.
I read as much science fiction as anybody I know. I love to think about the larger universe, and life on distant worlds, and all of that. But wishing doesn't make it so.
Re:Don't be silly (Score:1)
We speculate and can make predictions quite rationally about all sorts of events in the universe that we may never see--that a star ten times as massive as our sun will exert ten times the gravitational pull, for instance, or that its heat and light are given off as a result of hydrogen fusion reactions. Even though the overwhelming majority of stars will never be catalogued, we can say that, as a category, they follow the same laws of physics as ours does.
But you seem to be saying that nothing about the known laws of biology or physics prevents seven-foot-tall Beaver people from living on Venus, which is simply not the case. It isn't just a matter of "we haven't seen them." It's that their existence would contradict everything we know about biology, chemistry, and evolution.
Speaking of evolution, I believe that's what the original poster is saying: that life began as the result of knowable chemical mechanisms; that, as time wore on, complexity, added through successive mutations, and pruned through natural selection, eventually created us (and every other living thing we see); and that, since no special agents were required, it is not in any way inappropriate to call living organisms complex machines. Very complex machines, no argument, but machines nevertheless.
If that is not how you believe humanity arose, then I strongly doubt your claim that you aren't arguing on religious/superstitious grounds.
Re:Don't be silly (Score:2)
What you describe is nothing more than a model: a mental model of the universe that people have devised over the past 150 years or so. Remember that in recorded history, many models have been believed for a while and then discarded when they were proved wrong. In fact, if you draw it up numerically, you'll see that human beings are much more likely, statistically, to be totally wrong about nature and the universe than we are to be right.
It's very important, as we try to sort out how the world works, that we remember that we don't understand anything. All we have is conjecture that is more likely to be wrong than right. Remembering this keeps us humble.
What do my eyes tell me? That human beings are amazingly complex things. My girlfriend recently got her PhD in molecular genetics. She spent years studying the behavior of one specific set of bases in one specific chromosome. (It had to do with acetyl CoA synthetase, but that's all I know; everything else she talks about is beyond me.) If she chooses, she could make a lifetime's work out of studying that one invisible part of us.
But the same can be said for elm trees, or spider webs. Everything around us is beautiful and terrifying in its complexity.
And yet... through it all, humans are different. Humans argue about the nature of humanity, and as far as we know, that makes us unique in all the world. Why are we unique? Why was I born a person and not a goldfish? Am I a Chinese philosopher who dreamt he was a butterfly, or a butterfly who dreams he is a Chinese philosopher?
I challenge anyone to behold the uniqueness of humanity and come out the other side saying that we're "just complex machines." To reduce us to those terms is to call a bird a complex rock; it denies everything that defines us, and it's foolish.
Re:Don't be silly (Score:1)
You first decide that anything that contradicts you has to be backed up with hard evidence, and not, as you say, speculation. Then you decide to dismiss evolution, which in fact is based on hard evidence, with the hand-waving that it might be overturned on some speculative, as-yet unseen evidence.
Why do you think that being machines is any less wonderful than being...whatever it is you think we are? Why is the wonder and diversity of nature somehow less fascinating because it is orderly, and not the result of arbitrary processes? If anything, the fact that incredible sophistication can arise from organic, physical processes is even more awe-inspiring than resorting to easy cop-outs like special creation.
(And one more thing...lay off the sophomoric imagery, please. "Beautiful and terrifying in its complexity" proves nothing.)
Re:Don't be silly (Score:2)
If I'm a troll, I hope I'm the good kind. The kind that starts conversations. A hell of a lot better than that bozo who just posts long lists of numbers.
Now, as to your points. First of all, I'm not dismissing evolution at all. The mechanism by which successful organisms reproduce and pass their genes on to future generations is well documented, and makes perfect sense. Humanity as we know it today may very well have evolved from more primitive organisms.
But you should remember that evolution takes place over uncountable lengths of time. No human can truly grasp the span of a hundred thousand years, and yet in that time (according to the fossil record) our species has changed very little, in the gross biological sense. In order to see real differences in our ancestors, you have to go back thirty times that far.
These spans of time are utterly beyond comprehension. We can talk about them, and we can understand them in the literal sense, but we can't truly grasp them. Who knows what events took place during that time? Where were you when the foundations of the earth were laid?
The facts that we do have are these: according to the fossil record, humanity has existed in its present physical form for three million years, more or less. But sometime around 8,000-10,000 years ago, people started practicing agriculture. With that came settlements, which eventually grew into cities. Then it was like a big game of Civilization II for several thousand years, and then BOOM! Slashdot.
Why? Trying to answer that question puts you pretty firmly in Von Daniken territory.
Given this circumstance, why is it so hard to believe that there is something fundamentally different about humanity, something that we do not understand?
Once upon a time, diseases in the body were believed to be caused by devils. At another time, physical sickness was thought to be the result of one's state of mind-- melancholia, for example. Then came the germ theory, and a new idea of disease and sickness.
So now we contemplate our uniqueness. All around the world, in every culture, there exists the idea that humanity is divine, created somehow by a god or gods, some kind of primal motive force. The idea of the soul, of the divine spark, is common to all peoples in one form or another.
Personally, I don't believe in the soul. Personally, I don't believe in spiritual things or unseen deities. But I am willing to consider the possibility that the universal belief in the soul-- for every culture has such a belief, even if individuals may not share it personally-- might attempt to explain a real phenomenon.
How's that for trolling?
(And one more thing...lay off the sophomoric imagery, please. "Beautiful and terrifying in its complexity" proves nothing.)
Oh, you're just jealous.
Re:Don't be silly (Score:2)
Yes, humanity appears to have evolved from other, less complex, life forms, and yes, it's reasonable to guess that that process of change-over-time might continue. But are you completely, totally, 100% certain that that's the whole story?
Human beings, as I've tried to say before, are distinctly different from any other species that we've found so far. You seem to disagree with me on this fundamental point. That's fine, but I must say that I can't understand how you can see the evidence of our distinctiveness with your own eyes and still deny it.
You haven't stated anything yet that changes my mind that we will create what will amount to artificial life in the form of machine intelligence (at least) at some future point.
How about this: hypothesize that there is some necessary ingredient for intelligence (whatever that really means) that we have, but that all other life forms on this planet lack. I won't speculate about what that requirement might be, but just imagine that it's there. Maybe it's paprika; it doesn't matter.
It would explain a lot. It would explain why, in all the world, there is no other species like ours. We live only on land, and yet there is no species comparable to ours in the vast ocean. There's room enough in the sea for just about anything, and yet still we are unique. Why? According to our hypothesis, it's because only we humans have the necessary ingredient.
A natural consequence of this hypothesis is the idea that intelligence doesn't just spontaneously appear out of nowhere. If that's true (just bear with me) then making computers that are bigger and faster and more complex (and only the five richest kings of Europe...) will result in bigger, faster, more complex computers, but not intelligent ones. Because, going along that path, we will not have built a computer that includes... paprika. The ingredient. Whatever it is.
Now that our little thought-experiment is over, ask yourself whether any evidence to the contrary exists. We've come up with a hypothesis that would explain some things, so now we have to either prove it or disprove it with real evidence.
Is there any evidence to support either point of view? No, there isn't. Then why jump to the conclusion that one point of view must be the correct one?
I'll acknowledge that it's possible that you may be right. But it seems to me that there are some unexplained facts about the world, and there's an awful lot of room in your world-view for some factor, some ingredient, about which you know nothing. That's all I want: just admit the possibility that I may be right.
Re:Don't be silly (Score:2)
If there were some way to quantify the differences between things, some sort of absolute vector between two items that could be established and measured, then we would see something like this:
The difference between a raven and a writing desk: huge. Birds are animate organisms that consume and excrete and reproduce. Furniture is a made thing, constructed out of other objects by a third party; it cannot reproduce.
Write all the differences down and add them up. Fair to say that, despite the fact that both are made from the same basic elements, birds and furniture are really, really different in very significant ways, no?
Likewise, people and elephants are really, really different. People play football. People commit murder. People enjoy books and songs and pornography. People argue about whether they are unique in the world. Elephants, apes, dolphins, mice, australopithecines, bacteria, furniture, mayonnaise, steam engines, candles, computers, shoes, ships, and sealing wax do none of these things.
The difference between human beings and everything else is not small. It's incomprehensibly enormous.
Re:no HAL, no AI? (Score:4, Insightful)
Are people just complex machines? Well, we know that no matter what else we are, we are also complex machines in some sense. We also benefit from symbiosis with other creatures (microorganisms that live inside our bodies), and we consume products that came from other organisms of this planet (I am a vegetarian; to me, a tomato is one such product).
Now, let us assume that we do not know whether we are just complex machines or some special creatures bred by a super-powerful God (or Gods, depending on your religion). So we have two cases to look at. First: we are very complex machines. If this is assumed, then it is not inconceivable that at some point in time we should be able to produce non-organic organisms that somehow imitate our own behaviour and even our train of thought. To duplicate our thought patterns, the creature will have to possess qualities that are shared by all living organisms on this planet (the ability to see, hear, and feel touch, and the need for food or fuel) and qualities specific to the human race (a sex drive, the need to socialize, and some others). If we are just very complex machines, duplicating the environment for robots capable of all of the above will probably drive these robots to become more like humans, teach them to think in abstract ways, and force them to evolve (the merits of this evolution are questionable).
Now let's assume we are not simply complex machines, and that in order for us to think in an abstract manner we need some divine intervention. In this case we still should be able to produce robots with the above-mentioned traits, but these robots will not amount to anything beyond the social structures found in bee or ant colonies. At best in this case we could hope to produce intelligence comparable to that of a primate such as a gorilla, but even that would be a major breakthrough. However, even if it is completely and totally impossible to create intelligence comparable to a human's in a manner that humans can comprehend, we can still simulate it. You see, Alan Turing left specifications that allowed many to devise tests that can be used to find out whether you are communicating with a real human or with a machine. In fact, there are already some AI programs today that are capable of fooling some people into thinking that they are talking to a human rather than a machine. But the catch is that it does not really matter what or who you are talking to if you cannot tell the difference between it and an identifiable human. So we could, in principle, have machines that would run simulated versions of ourselves convincingly.
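Incidentally, the programs that fool some people in casual conversation are mostly simple pattern-matchers in the ELIZA tradition; a minimal, purely illustrative sketch of the trick:

    import random, re

    # Minimal ELIZA-style responder: canned reflections, no understanding at all.
    RULES = [
        (r"\bi feel (.+)",  ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bi am (.+)",    ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
        (r"\bbecause (.+)", ["Is that the real reason?", "Does that explanation satisfy you?"]),
    ]

    def respond(text):
        for pattern, replies in RULES:
            match = re.search(pattern, text.lower())
            if match:
                return random.choice(replies).format(match.group(1))
        return "Tell me more."

    print(respond("I feel that HAL understood me"))
    # e.g. "Why do you feel that hal understood me?"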
About us being unique: we are unique on this planet; we are the only creatures capable of handling tools and, more importantly, of producing a large number of different sounds that can be combined into complex speech. This is our main advantage, and not something unidentifiable (if it were identifiable, we would have identified it already; otherwise it does not make any difference whether it is there or not).
Re:no HAL, no AI? (Score:2)
Of course you're correct. Technically. Literally. Deconstruction can be applied to anything, rendering it empty and meaningless.
At what point does "sound" become "music"? Bach's Air on the G String is just a sequence of sounds, right?
Re:What is *BSD? (Score:1)