Posted by michael
from the dreaming-the-future dept.
Anonymous Coward writes: "Ray Kurzweil and other digerati discuss when popular sci-fi concepts will manifest in the real world. See part I or part II."
This discussion has been archived.
No new comments can be posted.
I can't believe they used that scene from Independence Day as an example. It's the worst, most banal attempt at science fiction that Hollywood has ever made. How much did Apple pay to have their laptop in it? The idea of Jeff Goldblum as a '133+ h4x0r with a magic PowerBook is worse than "This is a Unix system, I know this!" from Jurassic Park.
Right after this movie came out there was an awesome security alert e-mail going around. (There's a bug in the BGSs [Big Green Shields] that allows even primitive lifeforms...)
> I can't believe they used that scene from Independence Day as an example. It's the worst, most banal attempt at science fiction that Hollywood has ever made.
Actually, with the way IT has been heading, I thought that scene was quite realistic; it might well happen in some distant future - with us humans as the invaders, of course.
How long til our targets 'sploit our mighty WindowsCE3000 mothership?
Hasn't anyone learned from the mistakes of A.C. Clarke and his predictions? I'm quite sick of it. I don't need Ray Kurzweil to tell me to hold my horses until some arbitrarily drawn date - I'm patient enough to wait for it. Worse, the promises of "hard" A.I. are scientifically unsound to begin with.
Also, why can't modern-day prophets realize that the next big thing probably hasn't even been guessed at yet? The vacuum tube, computers, transistors, etc. Ray wasn't reading old sci-fi pulp mags about Moog-like synthesizers; they more or less appeared on the scene. Now Ray sells digital synths. Real visionary.
> Hasn't anyone learned from the mistakes of A.C. Clarke and his predictions? I'm quite sick of it.
I'm still waiting for that technology that's indistinguishable from magic. When it hits Radio Shack I'm gonna be the first kid on my block to get it, and then I can fit a brim onto my dunce cap and pass myself off as a wizard.
I'd argue that futurists envision, inventors read the work of the futurists and are inspired to create something similar, and then politics and money spoil the wonderful symmetry of it all...
Machine translation in 0-30 years?! As a person involved in these topics, I can say that 30 years ago people thought this could be solved in 30 years. We are today almost as far away as we were 30 years ago, and I think there's no way this will be a reality in less than 100 years.
To do correct machine translation you have to fully model the world and human knowledge. Translation (for humans) is a tedious job, requiring a lot of research and an artist's choice of words.
I think that we will sooner have machines writing their own novels than full machine translation. The problem is just too hard.
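To make that concrete, here's a minimal sketch in C of what word-for-word substitution (roughly what the cheapest systems amount to) actually buys you; the tiny dictionary and the sample sentence are made up for illustration. Even on four words the output misses French elision and agreement, never mind idioms:

#include <stdio.h>
#include <string.h>

/* A toy bilingual dictionary: pure word-for-word lookup.            */
/* (Made-up glossary for illustration; real translation needs        */
/* context, syntax and world knowledge, which is the whole point.)   */
struct entry { const char *src; const char *dst; };

static const struct entry dict[] = {
    { "the",     "le"      },
    { "spirit",  "esprit"  },
    { "is",      "est"     },
    { "willing", "dispose" },
};

static const char *lookup(const char *word)
{
    for (size_t i = 0; i < sizeof dict / sizeof dict[0]; i++)
        if (strcmp(dict[i].src, word) == 0)
            return dict[i].dst;
    return word;                 /* unknown words pass through untouched */
}

int main(void)
{
    /* Word-for-word gives "le esprit est dispose": no elision         */
    /* (l'esprit), no idiom handling, no clue what the sentence means. */
    char sentence[] = "the spirit is willing";
    for (char *w = strtok(sentence, " "); w != NULL; w = strtok(NULL, " "))
        printf("%s ", lookup(w));
    printf("\n");
    return 0;
}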
What do you mean? We have excellent salad ion machinemachine salad ion machines already. They are not 30 years out. They need only a little of tweaking and them are perfect.
Thank you, you took the words from my fingers. 30 years... yeah, right. People have consistently underestimated how difficult a lot of AI-related tasks are. I refer you to the old chestnut about Minsky assigning "machine vision" to a graduate student as a summer project (an urban legend, I think, but not so far from the truth).
The biggest problem with the Turing test is that it is completely subjective. The smarter of a person you are, the smarter the computer will have to be to give an accurate response. Obviously that trait is not one that reflects intelligence.
Get someone dumb enough and they'll chat with ELIZA for hours at a time.
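That's barely an exaggeration, because keyword spotting plus canned replies is essentially all ELIZA does. A minimal sketch of the trick in C (the rules below are invented for illustration, not Weizenbaum's actual script):

#include <stdio.h>
#include <string.h>

/* ELIZA-style "conversation": scan the input for a keyword and emit    */
/* a canned response. No understanding is involved, which is the point. */
struct rule { const char *keyword; const char *reply; };

static const struct rule rules[] = {
    { "mother",   "Tell me more about your family." },
    { "computer", "Do computers worry you?" },
    { "always",   "Can you think of a specific example?" },
};

static const char *respond(const char *input)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strstr(input, rules[i].keyword) != NULL)
            return rules[i].reply;
    return "Please go on.";      /* default, suitably non-committal */
}

int main(void)
{
    char line[256];
    printf("> ");
    while (fgets(line, sizeof line, stdin) != NULL)
        printf("%s\n> ", respond(line));
    return 0;
}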
Ah, but as some (Daniel Dennett comes to mind) would argue, "real" intelligence is itself a matter of subjectivity (and I dare you to argue otherwise). If this is the case, why should we hold AI to a higher standard than we hold human intelligence?
Real Intelligence is not a matter of subjectivity, except in some fringe cases. Even the most idiotic human can be distinguished from an intelligent gorilla. That is precisely why a less subjective test is needed.
The ability to solve problems, to draw conclusions, to have faith: all are hallmarks of intelligence. There is no doubt that a machine can be designed to warehouse conversations it can recall when needed, and to learn new word definitions and such when needed. We have the technology to do that now, and it certainly wouldn't be a sentient being. That is the problem with the Turing test.
I wouldn't be so sure of your "definition" - many would argue that there is no such thing as intelligence, only perceived intelligence. Examples:
Intelligence can depend on the environment: Is a spider intelligent? Spinning a web is amazing; stick the thing in a bathtub and it doesn't look so smart.
Intelligence can be social: Is an ant intelligent? Not by itself, but ant colonies perform some pretty amazing feats.
Intelligence may depend on other knowledge: A chess grandmaster may play a very strange move near the beginning of the game which loses him the game. Why? He took a calculated risk and it didn't pay off. Was he dumb? No, you say. What if it wasn't a chess grandmaster but Joe Bloggs from down the street - yeah, THAT was a dumb move...
Perception of intelligence is about being seen to do the right thing at the right time.
Regarding the second point - this gets to the heart of the Chinese Room Argument [utm.edu]: can intelligence (I would distinguish "sentience") be "built", or must there be something more? Was Deep Blue [ibm.com] intelligent? Searle would argue "no". Some would argue "yes, in the chess domain". There was nobody on the planet it couldn't teach something about chess and (to an extent) explain those choices. Many AI researchers weren't happy about Deep Blue because it basically used very fast search and no fancy reasoning. But hey - that just shows that there's more than one way to solve a problem, IMHO...
My whole point (stupid human, smart gorilla) is that there is a huge difference between something that will be perceived as intelligent and something that is sentient.
When do you think computers are going to get to the point that they question their own existence? Obviously something like that is not required in the Turing test. Being self-aware, or having the urge to explore and learn, are traits of intelligence, but are not taken into account in the Turing test.
> Real Intelligence is not a matter of subjectivity, except in some fringe cases. Even the most idiotic human can be distinguished from an intelligent gorilla. That is precisely why a less subjective test is needed.
From whose point of view? An idiotic human is still intelligent (very much so, just not to your standard), but look at it from a similar individual's point of view. Intelligence is very much in the eye of the beholder: it is subjective. Everyone has a different standard of what intelligence is; it's something we all intuitively understand, but it is very hard to pin down. Dennett's way of pinning it down was simply to posit that if you would attribute some aspect of intelligence to it, it must be intelligent. How can we know other humans are intelligent? We know we are, looking through our own eyes at the world, but there's no way we can just crack open someone's skull over lunch and find the intelligence organ. No, we must infer from their actions and reactions whether or not they are intelligent.
The Turing Test is simply a formalised version of the task we apply every freakin' day: determining whether we are dealing with something intelligent or not based on inference. The trick is, some of our inferences are based on appearance, which has nothing to do with intelligence; this must, then, be factored out, and so on... It's really quite slick when you think about it some.
Think about what you've said for a minute. I'll assume by the syntax of your sentence that you're young, and so I'll give you the benefit of the doubt. Your argument against one, of many, of the seminal ideas of someone with the intellectual prowess of Alan Turing will not cut the mustard in the world of AI research, I'm afraid. Obviously? What is obvious about your hypothesis (that the Turing Test is completely subjective)? And how do you move from your hypothesis to your conclusion, i.e., that "the smarter of (sic) a person you are...", without any observation or analysis of results?
The biggest problem with your hypothesis, after reading your conclusion, is your lack of observation and analysis.
The scientific method does work.
everyone will still get 99% of their predictions wrong...
... but they will only mention the 1% that they got right to the complete astonishment of their audience.
> Concept: Using the brain for information originally stored elsewhere, possibly encrypted, or indeed upgrading human memory using plug-in chips, PC-style.
"Encrypted"? Suddenly the DMCA brings a whole new meaning to the term "thought crime":).
> Concept: The ability for artificially intelligent devices to feel emotions.
It's not at all obvious -- to me at least -- that we should want AIs to feel emotions. Who wants a warehouse full of smart bombs with hurt feelings?
Emotions can very clearly lead to inappropriate behavior. Granted, there may be times when emotions lead to positive behavior. But do they ever lead to positive behavior that couldn't be programmed into an AI without emotions? Unless that's the case, emotions are something known to be dangerous and not known to be useful, and therefore should be avoided like a life-threatening bug.
Granted, it may be a fact of nature that "intelligence" (whatever that is) is impossible without emotions. But unless/until that has been demonstrated, let's keep emotions off our wish list.
Now back to Part I:
> Concept: The idea of a computer becoming so complex it can understand, reason, listen, speak and interact in the same way as a human, including using deception and self-deception.
> Now we have: Machines that learn, software that breeds/replicates. 'Narrow AI,' i.e. computers that can perform 'narrow' tasks that previously could only be accomplished by human intelligence, such as playing games (e.g. chess) at master levels, diagnosing electrocardiograms and blood cell images, making financial investment decisions, landing jet planes, guiding cruise missiles, solving mathematical problems and so on. Currently exponential progress curve showing no sign of slowing down.
First, as with emotions, I dispute the desirability of AI agents that can knowingly deceive themselves and others.
Second, I'm not convinced that much of the laundry list in the second paragraph qualifies as "intelligence" instead of merely "appropriate algorithms". (Are we going to have to call MATLAB an intelligent agent because it's good at certain kinds of math problems?)
Third, I am amazed that they would say that we're making "exponential progress" in anything that might reasonably be called "AI". My games don't seem to ship with AIs that are "exponentially" smarter than the ones that shipped five years ago. Dish up some facts, please!
That said, here's a link to a paper [160K PDF] [umich.edu] that someone turned me on to recently. It's from a talk some AI researchers gave at a conference last year. They start by asking where is all the cool movie-style AI, and answer with the observation that no one is working on it. Their proposal to remedy that situation is that AI researchers should get involved in game AI, because many modern games require agents that are more "intelligent" than the common solve-one-problem stuff that has been coming out of the AI community for the last few... decades.
I think the authors of that paper overstate their case by calling game AI agents "human level" AI, but at least it's a step in the right direction. It's a bit of a light-weight article, but it's an easy read. And it would be way nice if 2/3 of the world's academic AI researchers started working on gaming applications!
> Second, I'm not convinced that much of the laundry list in the second paragraph qualifies as "intelligence" instead of merely "appropriate algorithms". (Are we going to have to call MATLAB an intelligent agent because it's good at certain kinds of math problems?)
This reminds me of something I read about AI: we (humans) constantly move the threshold of intelligence according to how far we've gone.
i.e., first we had chess-playing AI. When that was done, it was said "this is not AI, but if it can beat a master-level player, then...". It beat the master-level player. "Still not AI, but if it beats the world champion...". It beat the world champion.
As AI nears 'wet' intelligence, the definition of intelligence drifts farther away. I'm wondering if it's too late to realize true intelligence is here when that happens...
I think the nature of the problem with chess-playing programs not being AI is the way in which most of them work. Chess has a set number of possible combinations of moves; as hardware gets bigger/faster, it is possible to sort through more and more data in the same period of time, meaning one just needs a large enough library of moves to beat any human player.
Of course, many human players use this same strategy (memorize positions/openings), but cannot come close to the machine's ability to memorize. The human player cannot play mind games with the machine, putting them at a disadvantage (though note that the machine may seem to psych out human players!).
All this really says is that the ability to play chess well is not a RELIABLE measure of intelligence.
The reason why that's happening is that we have yet to have a program that can accurately simulate understanding the input it's being fed. It's been some 35 years since ELIZA was written, and still any AI program can be fooled by the most obvious tricks.
The reason why the threshold of intelligence keeps changing is that all we're learning is which problems can be solved by brute force. If I recall my half-forgotten game theory correctly, any finite game of perfect information has an optimal strategy. With enough brute force, chess can be beaten.
Why is chess not AI? Think of this. Imagine a chess tournament where, at the very beginning of each game, the rules are randomly changed. Pawns can move one square diagonally. The board wraps around like the tunnels in Pac-Man. The goal of the game is to capture both rooks. The only programming change allowed to the AI is inputting the new rules. Who wins the game?
That's the difference between understanding and memorizing.
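For what it's worth, the brute-force idea the last couple of posts describe fits on one screen for a game small enough to search completely. Here's a sketch that exhaustively solves tic-tac-toe with a plain negamax search; chess programs are the same idea in spirit, just with pruning, evaluation heuristics and opening/endgame libraries bolted on because the full tree is out of reach:

#include <stdio.h>

/* Brute force on a tiny finite game: exhaustive negamax over        */
/* tic-tac-toe. No "understanding", just search over every line of   */
/* play, which is also, scaled up, how the chess programs win.       */

static char board[9];            /* each cell is ' ', 'X' or 'O' */

static char winner(void)
{
    static const int lines[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
    };
    for (int i = 0; i < 8; i++) {
        char a = board[lines[i][0]];
        if (a != ' ' && a == board[lines[i][1]] && a == board[lines[i][2]])
            return a;
    }
    return ' ';
}

/* Value of the position for the side to move:                        */
/* +1 = forced win, 0 = draw, -1 = forced loss.                       */
static int negamax(char me, char opp)
{
    if (winner() == opp)
        return -1;               /* opponent's last move already won */
    int best = -2, moves = 0;
    for (int i = 0; i < 9; i++) {
        if (board[i] != ' ')
            continue;
        moves++;
        board[i] = me;
        int v = -negamax(opp, me);
        board[i] = ' ';
        if (v > best)
            best = v;
    }
    return moves ? best : 0;     /* board full, nobody won: a draw */
}

int main(void)
{
    for (int i = 0; i < 9; i++)
        board[i] = ' ';
    /* The search proves perfect play from the empty board is a draw, */
    /* without ever being told anything resembling "strategy".        */
    printf("value of the starting position: %d\n", negamax('X', 'O'));
    return 0;
}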
> It's not at all obvious -- to me at least -- that we should want AIs to feel emotions. Who wants a warehouse full of smart bombs with hurt feelings?
I do not believe that we would want a warehouse full of smart bombs with hurt feelings any more than we would want people with hurt feelings being responsible for the deployment of such bombs. The military screens for such things through various personality and performance based tests.
However, I do believe that emotions are important to AI for one simple reason. For true AI to work, the computer must want to do something, not just react as programmed. I came upon this when I first played with an ELIZA program. I mean, it could "learn" an "appropriate" response by asking what it should say if it had no prior knowledge of a topic, but the program never wanted to learn; it had no motivation. In fact, if it asked for a response, it would simply sit there waiting indefinitely, whereas any living thing above a plant would go about doing something else.
Now, putting human emotions into a computer might not be the best of things, but what definitely needs to happen is some kind of feedback loop to positively and negatively reinforce the machine so that it has some kind of "desire" to change its behavior. Then we will have true AI, and not before.
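A toy version of that feedback loop, just to show its shape: two possible actions, a reward signal, and a running estimate that gets nudged up or down. The "environment" and its payoff percentages are invented; this is the bare mechanism, not a claim about any real learning system:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* A toy reinforcement loop: two actions, each with a running value  */
/* estimate. The made-up "environment" rewards action 1 more often,  */
/* so the agent's preference (its crude "desire") drifts toward it.  */

int main(void)
{
    double value[2] = { 0.0, 0.0 };   /* learned preference per action */
    const double alpha = 0.1;         /* learning rate */
    srand((unsigned)time(NULL));

    for (int step = 0; step < 1000; step++) {
        /* Mostly pick the currently preferred action, sometimes explore. */
        int a = (rand() % 10 == 0) ? rand() % 2 : (value[1] > value[0]);
        /* Hypothetical payoffs: action 1 rewards 80% of the time, action 0 only 20%. */
        double reward = (rand() % 100) < (a ? 80 : 20) ? 1.0 : 0.0;
        /* Positive/negative reinforcement nudges the estimate. */
        value[a] += alpha * (reward - value[a]);
    }
    printf("learned values: action 0 = %.2f, action 1 = %.2f\n", value[0], value[1]);
    return 0;
}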
This is an intriguing article because not only are they rating the credibility of recent SF ideas, but they're trying to attach a timeframe to the ideas based on what we have today.
What's even more interesting from my point of view is to ask the question: if you consider the actual applications, perhaps even getting very specific, do the ratings and timeframes still match up? Obviously rating credibility is somewhat subjective anyway. And similarly, trying to attach a time frame to technology is still at best an educated guess.
But if looked at from the point of view that a specific application of one of the SF technologies would have a significant beneficial impact on quality of life for a lot of people, then perhaps the time frames change out of necessity. What if we figure out that by uploading a short program into the brain we could signal synapses, neurons and such to keep serotonin levels at a therapeutic level for people suffering from depression and give them a much better quality of life? That's just a rough example I throw out, but I bet there are some serious applications seen in technologies of the future that will actually boost the timeframe through basic need.
Anyone else think so? Comments?
Just my $.02
WBGG
I admire a man who makes obscure Infocom subreferences on a board full of 31337 C0d3r5 all born after 1987. I salute you. I remember the endless paragraphs of alien gibberish you had to wade through until you worked that one out...
Let's not have age wars goin' down on every story, shall we? This thread didn't get much life.
But as the discussion winds down, I'd like to pitch in on the notion that these predictions were short-sighted. Language translation is impossible to stay ahead of. Machines can only play catch-up, because languages are human-made; we've known that all along throughout the history of technology. It's no mystery at all. To think that a machine can handle translation is to think that machines can come very, very close to mimicking humans. We're very close to that, but that's totally different from machines being able to "comprehend" human language, that is, to have an understanding of human languages that encompasses the existing human knowledge bank. That will not happen in our lifetimes, and if we live for centuries, it will not happen for centuries.
Neuromancer was the uncanniest thing I've read. He coined 'cyberspace' in 1984 (or was it '86?), he invented (or at least popularized in sci-fi) the "matrix" WAY before Keanu Cheese starred in that overrated film, and his characters were ultra-cool examples of the "wired" human with organic-machine interfaces (that razor-girl was cool). Even the narrative style, with the dense but ambiguous portrayals of the gritty subcultures of vast metropolises, seems futuristic.
The guy was a prophet. Who knows what strange visions from his novels have yet to materialize?
> The guy was a prophet. Who knows what strange visions from his novels have yet to materialize?
Big, evil corporations trying to conquer the world and maximize profits no matter what the human cost? Already got 'em!
As for the rest of his stuff it's rather naïve and dubious if fascinatingly surreal. Makes for great material on a long flight or on the john but I'd hardly call him a visionary prophet of TEH FUCHUR. Maybe when we really do have a matrix...
This is offtopic, but what the hey. I just recently read Neuromancer on the recommendation of almost every geeky friend that I have, and I was stunned. I was stunned that a book that won so many awards and is beloved by so many people turned out to be one of the worst sci-fi books that I've ever read. For a point of reference, I've probably read ~100-150 sci-fi books, lifetime, and my faves are pretty standard (but not recent): the Dune series, various Heinlein, Clarke, Bradbury, and Asimov. In Neuromancer, I found the characterizations, character development, plot, pacing, voice, and dialog to be very poor. The narrative was acceptable more often than not, but I don't think that's really a compliment. I did actually finish the book, as I assumed that something interesting *had* to happen eventually. I can accept that when it was published, just the idea of a noir near-future was interesting, but to me as a modern reader it just comes off like an admirable first attempt by a capable high school student.
Now, I'm guessing that a fair number of /. readers liked the book and may try and defend it, so before you do, keep a couple things in mind. 1) I'm attacking the book's literary merit (or lack thereof). 2) I'm stating that a book lacking in literary merit and lacking ideas that are new to me (*regardless of whether or not they were new to somebody else at some other time!*) ranks very low on my "quality metric for sci-fi books."
Now, if you feel compelled to argue that Neuromancer does, in fact, have literary merit, then please be prepared to answer a few things: 1) Describe the character backgrounds (i.e., information about the characters that occurred prior to the events of the work) for Case, Molly, Armitage, and Riviera, in detail. 2) Explain how this correlates to each character's motives for furthering the plot. 3) Explain how the protagonist has grown over the course of the book. 4) Quote us one section of dialog that you found to be particularly well done. I assert that the answer to #1 will comprise about a paragraph, which for four major characters is ridiculous. This, in turn, relates to why #2 is easy to answer, and very, very shallow. I think the answer to #3 is to mumble, "Well, there must be *something*," while flipping through the book. For #4, you may find something. I'm curious as to what it is. I'll probably disagree with you, but then we can agree to disagree, I hope! I also hope that I've managed to substantiate and clarify my position sufficiently to avoid being modded a troll, as this isn't intended as such. If you like the book despite these shortcomings, well, to each his or her own :)
I'm not going to defend it; rather, I'll say what I like about Neuromancer and the later books in the series. (If you want to find out #2 you'll just have to read the other 5 books.)
First, I found the style of writing fascinating. You spend the first part of the book confused by different plots that don't seem to be related. (Which most likely is a way of describing the chaotic life of the Sprawl.) As the story progresses, more order can be found amid the chaos. That is what I like about the series.
And perhaps the ideas are not that new any longer; they have been copied for almost 2 decades by now, so why should they be?
Secondly, a lot of really good sci-fi literature is rather poor in a strictly literary sense. For instance, "Brave New World" is IMHO a very poor book, but the ideas and setting are the important part. The plot is extremely silly and unimaginative, but that doesn't matter. We are basically given a tour of Mr Huxley's vision, and at least I find it a lot more credible than 1984. (Although 1984 is a much better book, from a literary point of view.)
Hey now, let's not forget that Keanu Reeves did one of Gibson's movies. Admittedly "Johnny Mnemonic" wasn't all that good, but that's not Keanu's fault. I mean, it was a dumb movie.
I think he gives all scenarios a credibility rating of 10/10, even the Independence Day scenario. This guy must live in a different world. However, I've bought his predictions book, and I plan to read it in twenty years' time or so. It will certainly be funny.
By 2010, AC posting outlawed on the grounds that it does not pass the Turing test, therefore all random drivel and unenlightened flaming to be done by intelligent computers (to perfect "Hard AI") and the previous AC-kiddies turned into soylent green.
Did it piss anyone else off to read how Johnny was apparently uploading data "into his brain"? No. Read Gibson's short story. Listen to what is actually said in the movie. Johnny was uploading data into a chip in his head that was intended to treat his autism. By misusing his chip he could transport data that would otherwise be detected by law enforcement or pirates. What happened in the story was that he misused the chip for too long and his autism wasn't being treated, so he was experiencing symptoms (he was losing childhood memories at an alarming rate).
why is the future so hard to see when:
a: moore's law,
- which btw, does not simply reflect the speed of integrated circuts, but physics, biology and nanotech in general.
b: exponential growth of internet population and traffic
2002:
- 4gz processors
- 1-2 gigs of ram
- wireless networking explosions, bandwidth jumps to 10 mb/s
- p2p software explosion
- massivly multi-player rpgs gain huge grounds
- physisists and biologists play with.1 micron sized objects
- genomics will be twice as big as it was in 1999
etc etc
- population of the internet will exceed 1 billion
- internet traffic will continue to tripple every 6 months . . .
2003/2004:
- 8 - 10 gz processors,
- multiple processors become standard in PC's
- 1/2 the population of the world will be online
- Open Source will have overtaken the development of comercial software . .
2005:
optic cpu, mother, and internet backbone fuse, creating an "inflexion point" in which millions of computers around the world become the worlds fastest super computer.
2020-
artificial brain implants finaly teach me to spell!
I have serious problems with your reasoning. My biggest beef is that Moore's observation has lazily and incorrectly been labeled a law, when it obviously is nothing of the sort. Nature doesn't give a rat's arse about the speed of computers. The continued doubling of processor speeds is dependent on human engineering, and there are no guarantees that new advances will arrive on schedule.
> moore's law, - which btw, does not simply reflect the speed of integrated circuts, but physics, biology and nanotech in general. b: exponential growth of internet population and traffic
See? This is just what I'm talking about. Moore came up with his observation years before any talk of nanotech, biochips or other buzzwords du jour. There are many promising technologies that may take us beyond current lithographic techniques, but there is no reason to believe that growth will conform to any 'law'. The pace may accelerate, or it may stall and plateau for a number of years until the next breakthrough. Advances will come about because of human ingenuity, not some over-hyped statement of current general trends.
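To underline the point, the "law" is nothing more than an extrapolation you can write in a few lines, and the answer depends entirely on the starting point and doubling period you plug in. A quick sketch in C (the 2,300 transistors in 1971 is the Intel 4004; the two-year doubling period is the usual rule of thumb, not a law of nature):

#include <stdio.h>
#include <math.h>
/* compile with: cc moore.c -lm */

int main(void)
{
    /* Assumed, not measured: a starting count and a doubling period. */
    const double start_count    = 2300.0;  /* Intel 4004, 1971 */
    const double doubling_years = 2.0;     /* pick 1.5 instead and you predict a different future */

    for (int year = 1971; year <= 2011; year += 10) {
        double n = start_count * pow(2.0, (year - 1971) / doubling_years);
        printf("%d: ~%.0f transistors, if the trend happens to hold\n", year, n);
    }
    return 0;
}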
I was just thinking about the storage market. Somebody on ZDNet was going on about FDS or whatever that new plastic shit is.
Anyway, I let him have it. I was like, fuck that mechanical/optical stuff. Hard drives are already clearly in the path of the speed bullet. Talking about new optical whatever is cute, but RAM is where it's at. That's the whole problem with the RAM market: they can't stabilize because they're feeding off the broader tech trends in semi. It was never supposed to be the way it is already, and it could get a lot steeper. When RAM keeps stepping up with exponential growth as circuits shrink, it's hard to avoid. Especially when it's the only thing besides CPUs that can really take advantage of fiber and fast switches. You know, even fast Ethernet can tax hard drives, and the 10GbE standard is being finalized this year. D-Link 1G downstream switches are only three hundred bucks.
How about some futurists being interviewed by a journalist who can think of a halfway reasonable near-term forecast.
Perhaps then it would be a political interview, though, more suited to people looking at the real politics of the near-term techno future. Hmm, how about that Russian hacker dude: what does he think about a pack of 20x100 Gig RAM sticks with a fuel cell, drinking vodka? How about some crazy Russian BBS hacker guys? Let's have an interview with someone from the Top50 or Astalavista about what it would mean to have fiber, laser, or higher-speed wireless direct to your CPU/RAM banks.
We have to get past that point to get to the even better stuff and it's out there.
Fer instance.
If we could really jam on such terrific amounts of bandwidth for insignificant tasks, it may enable new hardware.
I don't know if we have any rapid prototype developers in the house this evening, but having a printer for objects is definitely real, and it must be an application where exponential data speed increases could lead to dramatic increases in productivity and perhaps the use -- or in this case, use and production -- of materials that are prohibitive with current technology.
Let's hear about nano printers!
The catch is that the notion of intellectual property will have to change a lot before the kind of bandwidth that might make really out-there science fantasy come true will be available. The hardware is forcing the issue on software, and it's been going that way for a long time. It's getting faster and faster, but it's not like it has to be all bad. Perhaps after intellectual property, real property will become ubiquitous.
Who would complain then? You could still complain all you wanted and probably live out some wild fantasies about taking out your frustrations too, but there wouldn't really be that much to complain about outside your bad dreams and personally imposed limitations. There's certainly plenty to complain about right there for most people, though, so we don't need to be scared of a future where nobody complains. Nonetheless, it's not hard to imagine a race of satisfied humanoids from the strictly technological perspective. It's gettin' there that's gonna be a big fight, apparently, if these RIAA and MPAA clowns are signs of things to come. We'll see what happens. One thing about ubiquitous high-speed networks is they sure facilitate communication. It's all negotiable once we can see the advantages of working together for a brighter future.
Using the brain to store digital information:
The problem is less one of interface than it is one of reprogramming neurons. While this might technically be possible, is there going to be any sort of information density advantage? Human memory has some really nice lossy compression, but that would make it a bad way to store digital data.
Computers "understanding" and "speaking" human language:
I think the only thing we've really learned in the last 30 years is that the problem is a lot harder than we thought it was 30 years ago. There are a multitude of problems, from simple parsing to having a large enough database to understand context. That, and we really don't know what problem we are solving. A speech interface to a database would seem to be a useful tool - "what is the weather going to be like today?" opens up the appropriate web page. "Find me a good price on a 1997 Honda Accord" hits the search engines, finds a few dealers in my area, and gets me some pages to view. We don't even have anything this sophisticated without the voice interface. (Speech-to-text + text-to-speech + Google) is not tons better than Google. Yet we expect a program with the depth of knowledge and subtlety of reasoning that a human possesses. My own version of the Turing Test, "I'll believe it when I see it," suggests to me that the system that can pass the Turing Test is a LONG way off.
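Even the non-voice half of that is still mostly pattern matching today. Here's a minimal sketch in C of the phrase-to-action lookup such an interface boils down to (the phrases, actions, and example.com URL are all made up); anything off-script falls straight through to the "didn't understand" branch, which is exactly the gap between this and real understanding:

#include <stdio.h>
#include <string.h>

/* Map a few canned phrases onto actions: the "smart assistant"      */
/* reduced to a lookup table. Hypothetical phrases and actions only. */
struct intent { const char *phrase; const char *action; };

static const struct intent intents[] = {
    { "weather",      "open http://weather.example.com/today" },
    { "honda accord", "search the used-car listings near me" },
    { "news",         "open the front page of a news site" },
};

int main(void)
{
    const char *query = "what is the weather going to be like today?";
    for (size_t i = 0; i < sizeof intents / sizeof intents[0]; i++) {
        if (strstr(query, intents[i].phrase) != NULL) {
            printf("action: %s\n", intents[i].action);
            return 0;
        }
    }
    printf("Sorry, I didn't understand that.\n");   /* everything else ends up here */
    return 0;
}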
Software as a weapon:
OK, ID was a poor example - I know I'm 1337 enough to reverse engineer alien technology in a matter of minutes and write a virus using a Mac, but that guy? But really, software as a weapon is only useful against those who use software, and only when that software is of critical importance. Even North Americans aren't THAT reliant on the 'net, although it might be wise to take precautions before we wire all of our brains together...
How could Jeff Goldblum's character in Independence Day possibly be familiar with the operating system the aliens were using? There is no such thing as a virus that affects all platforms.
That little fact has protected Macintosh users from most of the malicious code out there.
Jeff Goldblum's "virus" (Score:4, Funny)
Re:Jeff Goldblum's "virus" (Score:2)
if (shields == 1)
{
    shields = 0;
}
virus writing 101.
Re:Jeff Goldblum's "virus" (Score:2, Funny)
if (shields == 1)
{
    shields = 0;
}
This is the easy part. Writing a gcc backend to output alien assembly is the (very very) hard part.
Re:Jeff Goldblum's "virus" (Score:2)
if (shields == 1)
{
    shields = 0;
    ShowBitmap(JOLLY_ROGER);
}
Re:Jeff Goldblum's "virus" (Score:1)
You missed one thing. They didn't show this part so they could maintain the PG-13 rating, but you know they would have appended this to the code on behalf of all our human abductees.
if (shields == 1)
{
    shields = 0;
    ShowBitmap(JOLLY_ROGER);
    probe(alien, anus);
}
Re:Jeff Goldblum's "virus" (Score:4, Funny)
Alien Commander: Are you sure this thing is secure?
Alien MCSE Tech: Trust us, it's unhackable. We built it with our reliable DRM 2 encryption code, and we've told the puny Earthlings not to publish exploits...
:)
Re:Jeff Goldblum's "virus" (Score:2, Funny)
Oh, I see... So the aliens were just enforcing the DMCA. Now the whole movie makes sense! Thanks!
Re:Jeff Goldblum's "virus" (Score:1)
Anyone have a copy?
Re:Jeff Goldblum's "virus" (Score:1)
Actually, with the way IT has been heading, I thought that scene was quite realistic; it might well happen in some distant future - with us humans as the invaders, of course.
How long til our targets 'sploit our mighty WindowsCE3000 mothership?
Re:Jeff Goldblum's "virus" (Score:1)
Guessing prophets. (Score:2)
Also, why can't modern day prophets realize that the next big thing probably hasn't even been guessed at yet. The vacuum tube, computers, transistors, etc. Ray wasn't reading old sci-fi pulp mags about Moog-like synthesizers, they more or less appeared on the scene. Now Ray sells digital synths. Real visionary.
Re:Guessing prophets. (Score:3, Funny)
> Hasn't anyone learned from the mistakes of A.C. Clarke and his predictions? I'm quite sick of it.
I'm still waiting for that technology that's indistinguishable from magic. When it hits Radio Shack I'm gonna be the first kid on my block to get it, and then I can fit a brim onto my dunce cap and pass myself off as a wizard.
"Futurists" (Score:3, Interesting)
Re:"Futurists" (Score:2, Insightful)
The future... (Score:3, Funny)
(I'll go read the article now, with low expectations.)
Re:Machine translation? You gotta be kidding! (Score:1)
As for machines writing their own novels: how many years' time? 0. [evolutionzone.com]
catch the difference? (Score:4, Interesting)
The geek says it will all happen, it's just a matter of time.
Re:AI (Score:1)
Remember Bomb in Dark Star?
Re:AI (Score:2)
Of course you wouldn't - you'd want your bombs to be all gung-ho, eager, and aggressive.
Babel Fish (Score:3, Funny)
I eventually got mine, but I hope nobody asks me how I did it. I don't remember and I'm not about to figure it out again!
If one really cared, they could just do a web-search for a walk-through. I'm sure one is out there.
30 years for a Babel Fish. Sheesh.
1984 (Score:1)
1984 seems to be drawing ever closer... especially since September of this year.
If you have no idea what I am talking about, start here [ou.edu], or just jump straight to this summary [k-1.com].
Soylent green! Now with more stupid people!
Moore's "law" (Score:1)
I have serious problems with your reasoning. My biggest beef is that Moore's observation has lazily and incorrectly been labeled a law, when it obviously is nothing of the sort. Nature doesn't give a rat's arse about the speed of computers. The continued doubling of processor speeds is dependent on human engineering, and there are no guarantees that new advances will arrive on schedule.
See? This is just what I'm talking about. Moore came up with his observation years before any talk of nanotech, biochips or other buzzwords du jour. There are many promising technologies that may take us beyond current lithographic techniques, but there is no reason to believe that growth will conform to any 'law'. The pace may accelerate, or it may stall and plateau for a number of years until the next breakthough. Advances will come about because of human ingeniuty, not some over-hyped statement of current general trends.
Whoa, let's slow down the time line. (Score:1)
Anyway, I let him have it. I was like, fuck that mechanical/optical stuff. Hard drives are already clearly in the path of the speed bullet. Talking about new optical whatever is cute and whatever, but RAM is where it's at. That's the whole problem with the RAM market, they can't stabilize because they're feeding off the broader tech trends in semi. It was never supposed to be the way it is already and it could get a lot steeper. When RAM keeps stepping up with expontential growth as circuits shrink, it's hard to avoid. Especially when it's the only thing besides CPUs that can really take advantage of fiber and fast switches. You know, even fast ethernet can tax hard drives and the 10GhE standard is being finalized this year. D-Link 1G downstream switches are only three hundred bucks.
How about some futurists being interviewed by a journalist who can think of a halfway reasonable near-term forecast.
Perhaps then it would be a political interview though more suited to people looking at the real politics of the near term techno future. Hmm, how about that russian hacker dude what does he think about a pack of 20X100Gig Ram sticks with a fuel cell drinking vodka? How about some russian BBS hacker crazy guys. Let's have an interview with someone from the Top50 or Astalavista about what it would mean to have fiber, laser or a higher speed wireless direct to your CPU/RAM banks.
We have to get past that point to get to the even better stuff and it's out there.
Re:Whoa, let's slow down the time line (Score:1)
If we could really jam on such terriffic amounts of bandwidth for insignificant tasks, it may enable new hardware.
I don't know if we have any rapid prototype deverlopers in the house this evening, but having a printer for objects is definitely real and must be an application where exponential data speed increases could lead to dramatic increases in productivity and perhaps the use --or in this case, use and production-- of materials that are prohibitive with current technology.
Let's hear about nano printers!
The catch is that the notion of intellectual property will have a to change a lot before the kind of bandwidth that might make really out there science fantasy come true will be available. The hardware is forcing the issue on software and it's been going that way for a long time. It's getting faster and faster, but it's not like it has to be all bad. Perhaps after intellectual property, real property will become ubiquitous.
Who would complain then? You could still complain all you wanted and probably live out some wild fantasys about taking out your frustrations too, but there wouldn't really be that much to complain about outside your bad dreams and personally imposed limitations. There's certainly plenty to complain about right there for most people though so we don't need to be scared of a future where nobody complains. Nonthless, it's not hard to imagine a race of satisfied humanoids from the strictly technological perspective. It's gettin' there that's gonna be a big fight apparently if these RIAA, MPAA clowns are signs of things to come. We'll see what happens. One thing about ubiquitous high speed networks is they sure facilitate communication. It's all negotiable once we can see the advantages of working together for a brighter future.
Some thoughts (Score:3, Interesting)
The problem is less one of interface than it is one of reprogramming neurons. While this might technically be possible, is there going to be any sort of information density advantage? Human memory has some really nice lossy compression, but that would make it a bad way to store digital data.
Computers "understanding" and "speaking" human language:
I think the only thing we've really learned in the last 30 years is that the problem is a lot harder than we thought it was 30 years ago. There are a multitude of problems, from simple parsing to having a large enough database to understand context. That, and we really don't know what problem we are solving. A speech interface to a database would seem to be to be a useful tool - "what is the weather going to be like today?" opens up the appropriate web page. "Find me a good price on a 1997 Honda Accord" hits the search engines, finds a few dealers in my area, and gets me some pages to view. We don't even have anything this sophisticated without the voice interface. (Speech-to-text + text-to-speech + Google) is not tons better than Google. Yet, we expect a program with the depth of knowledge and subtlety of reasoning that a human posesses. My own version of the Turing Test, "I'll believe it when I see it," suggests to me the system that can pass the Turing Test is a LONG way off.
Software as a weapon:
OK, ID was a poor example - I know I'm 1337 enough to reverse engineer alien technology in a matter of minutes and write a virus using a Mac, but that guy? But really, software as an weapon is only useful against those who use software, and only when that software is of critical importance. Even North Americans aren't THAT reliant on the 'net, although it might be wise to take precautions before we wire all of our brains together...
In the Year 2000... (Score:1)
How could Jeff possibly know? (Score:1)
How could Jeff Goldblum's character in Independence Day possibly be familiar with the Operating System the aliens were using. There is no such thing as a virus that affects all platforms.
That little fact has protected Macintosh users from most of the malicious code out there.
Sci-Fi concept that SHOULD become reality: (Score:2)
Blonde bombshells in sparkly catsuits, on alien worlds, capturing virile yet helpless Earth men to be their love slaves.
Why aren't more of our scientists working to make this a reality?!