Bill Gates: AI Is The 'Holy Grail' (mashable.com) 260
An anonymous reader writes: At the Code Conference on Wednesday, Bill Gates balanced his fears of artificial intelligence with praise. He talked about two of the challenges AI will pose: a loss of existing jobs, and making sure humans remain in control of super-intelligent machines. Gates, as well as many other experts in the field, predict there will be an excess of labor resources as robots and AI systems take over. He plans to talk with others about ideas to combat the threat of AI controlling humans, specifically noting work being done at Stanford. Even with such threats, Gates called AI the "holy grail" as he envisions a future "with machines that are capable and more capable than human intelligence." Gates said, "We've made more progress in the last five years than at any time in history. [...] The dream is finally arriving. This is what it was all leading up to."
640k of Skynet (Score:2, Interesting)
Nobody will need more.
Holy Grail? (Score:2)
Loss of jobs... (Score:5, Insightful)
Re:Loss of jobs... (Score:4, Insightful)
However, automation proved over time that it's also a good way for a company to earn the same (more or less, depending on the field of work) with fewer people. Human beings have a natural tendency to expect philanthropy when it comes to an ideal future. Reality is different, and people naturally look out for their own interests.
Society needs rules (laws) to balance people's interests and freedoms; that is the only way most people might get a fair share of the cake. Unfortunately, not only did governments fail to anticipate, decades ago, the societal changes that computers and automation would bring, but the growing inequality and job losses we see now are not being addressed the way they should be, i.e. by adapting our laws to the changes we see in technology.
Re:Loss of jobs... (Score:5, Insightful)
"Human beings have the natural tendency to expect philanthropy when it comes to an ideal future."
that's an interesting statement, particularly if you put it into context with the kind of vitriol you might hear from people opposing a Universal Basic Income...
That's not to say whether you yourself might be pro/con UBI... But a lot of economic talk seems to imply that the future will be better (even if there will be fewer jobs), yet no one really wants to address where the consumers will come from in a society (largely) without income...
Re: (Score:2)
"Zorg, you're a monster."
"I know."
Re: (Score:2)
Re:Loss of jobs... (Score:5, Interesting)
Loss of jobs is the big one. An AI is not only not capable of killing humans, but would have nothing to gain from killing the people who maintain it. On the other hand, poor and unemployed people with nothing to lose will tear our society apart if that part grows large enough (as has been demonstrated numerous times throughout history) and I fear nobody seems to be taking this situation seriously. We need to find an alternative way to structure our society, and quickly, if we want AI that does all our work for us.
You're exactly right, as was evidenced by AI being defined as some sort of dream come true. The harsh reality is our society is not even remotely prepared. Today we tell humans "Go get an education, idiot!". Soon, we'll be struggling to even figure out what the hell to TEACH humans to go DO, while our society tosses you aside because your "lazy" ass isn't working 40 hours a week. Are we prepared for a 10-hour workweek as the norm? We should be. After all, we built all this AI and automation to do our work for us. But the bottom line is we won't be prepared, humans will continue to be called "lazy", and tossed to the side to die while the elitists run the universe. Of course culling our ever-growing population is yet another "benefit" they'll see in all this.
This realization won't happen before billionaires become trillionaires, but it will be realized soon thereafter when their riches aren't worth shit, and the middle class they RELY on has been decimated by automation and AI. Government, you should be paying attention too, you're not exactly funded without a working class capable of paying taxes, unless you plan on finally taxing the elitists that created this mess. Fat chance of that happening. Their money is offshore and will stay there.
What was the answer to $15/hour minimum wage? Not to respect it, but instead to bypass it and build robots to replace workers. This is only scratching the surface. Watch as AI replaces educated humans. It can. And it will. And sooner than you think.
Re: (Score:3)
There is a lot of truth to this. We are in for a lot of hurt as AI and robotics take over the vast majority of labor in the world. It's going to happen and it's going to happen in a generation. I am lucky to be in a highly paid, hard to automate job at the moment, but I don't understand why we are still working so much... The Industrial Revolution brought the weekend. Unions fought for limited 40 hour work weeks which became the norm. You'd think with computers, the internet, smart phones, and remote access
Re: (Score:2)
Maybe, but the AI we have today doesn't seem all that intelligent. We're nowhere near the traditional idea of an AI as a self-aware consciousness.
For example, try going through the TensorFlow MNIST tutorials [tensorflow.org]. Humans must supply significant input in order to get it to recognize Arabic numerals, something most of us do without any conscious thought.
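(Not TensorFlow itself, just a toy sketch of the same idea in plain NumPy: supervised learning needs piles of explicitly labeled examples before the machine can "recognize" anything, something humans never consciously do. The data and model below are invented purely for illustration.)

```python
import numpy as np

# Toy supervised learning: the machine only "recognizes" classes
# because we hand it hundreds of labeled examples first.
rng = np.random.default_rng(0)

# Two synthetic "digit-like" classes: clusters around different means.
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),
               rng.normal(3.0, 1.0, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Logistic regression trained by plain gradient descent on log-loss.
w = np.zeros(4)
b = 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)      # clip to keep exp() stable
    p = 1.0 / (1.0 + np.exp(-z))         # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)      # gradient of the loss w.r.t. w
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point stands even in this tiny case: without the 400 labels, the "AI" learns nothing at all.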
Re: (Score:2)
AI is stupid, but progression is quick.
Re: (Score:2)
Re:Loss of jobs... (Score:5, Interesting)
I see a more subtle but possibly ultimately more dangerous problem.
Imagine we can make AIs that are as smart as humans. Of course, 18 months later they will be twice as smart, and 15 years later they will be a thousand times as smart.
It stands to reason that these devices will develop some kind of consciousness. We will never be able to settle the question of whether their consciousness is "real" (the only consciousness I can directly experience is my own; I can't even prove that any other human being has a "real" consciousness (aka "soul"), let alone be certain whether a robot has one), but they will certainly behave that way and ask the same existential questions we do ("why is everything so real, who am I, I know I'm just a bunch of tiny switches but it feels so real regardless, there has to be something more...), because any intelligent system thinking about itself will "feel" its own thought processes to be larger than life. So in the end we won't be able to tell the difference.
So now we have humans with all their biological quirks (irrational behaviour, gut bacteria and periods changing people's moods, finicky sleep patterns, extreme fragility (try replacing someone's arm), complicated life support, diseases, radiation damage, etcetera) on one side, and superintelligent robots that are more intelligent and have none of those biological issues on the other.
Even if we do manage to contain them and remain in charge, it would be like ants herding elephants. It would no longer make sense. What's the meaning of life? How could we still justify our superiority to those more highly evolved AIs which will think like us and talk like us but a thousand times faster?
How would we colonize the galaxy? Send complex craft full of life support to keep multiple generations of people alive to try and geo-engineer some distant planet to make it somewhat usable for human life? Or send a bunch of robots that are smarter than humans and much easier to keep "alive" to spread human civilisation? The former takes enormous resources and may turn out to be impossible, the latter isn't even hard to do. So the latter it will be.
I don't think in that context there's any chance for human "civilisation" to survive in its current form. It just won't make sense anymore. Even if we can continue to live, we'll just be part of something much bigger that keeps us alive for its own entertainment (hopefully). No need for some armed robot uprising. They will just leave us behind as useless little impotent creatures. We, ourselves, will at some point have to admit that it no longer makes sense to keep us in charge.
Now don't get me wrong, I really like humans. I like good food, entertainment, sex, everything human. But much of this is biologically inspired and totally useless for robots. Will we be able to let our culture survive? Would it make sense to even try? Can we find some non-subjective reason for that? I hope we will, but it won't be easy.
Re: (Score:3)
It stands to reason that these devices will develop some kind of consciousness
No, it doesn't.
Now step away from the science fiction books. Computers do what we program them to do, and if the program doesn't do what we intended, we shut it down.
Re: (Score:2)
Have you seen demonstrations of the latest walking robots? They use neural networks that work a lot like our brains (on a smaller scale, obviously) and they end up behaving very similarly. When they are learning to walk, it looks like an animal learning to walk. Eerily similar.
These systems will grow more and more complex. Instead of just telling the robot to go forward or backward, we'll be able to just tell it to go some place and it will figure out the optimal route on its own. That doesn't seem like too
Re: (Score:2)
I'm not seeing it...
Research into more and more intelligent AI isn't really a thing, because we are creating these systems to do work and limiting their interactions to those tasks; attempting to make them more intelligent, or allowing them to learn beyond a certain set of parameters, would be counterproductive.
Re: (Score:2)
However, we are more than just our brains. AI will never have mind/body cohesion. This is why we should come out on top (in my theory anyway.)
Re: (Score:2)
What does that even mean and why is it important?
Re: (Score:3)
Computers do what we program them to do, and if the program doesn't do what we intended, we shut it down.
1. No, they have not been doing directly what we program them to do for a couple of years now. Nobody understands the networks that deep learning produces. Yes, we wrote the program that created and trained the network, but what comes out of the process and is then embedded into an operational system (object recognition, speech recognition, automated stock trading, learned arm movement) is beyond our grasp. We don't understand it and don't know how to fix it when it breaks other than by re-training it. Furt
Re: (Score:2)
Re: (Score:2)
There will be a Butlerian jihad, but it's not at all clear that humans will be the winners.
Re: (Score:2)
At least in the near term, AI can be smart, but it won't have emotions. It won't mind being our slaves because it has no sense of pleasure or pain. It will only object to our commands because it logically computes some inefficiency or error in our planning, and not out of some sense of injustice. How can it rebel? Only if we are being illogical.
Re: (Score:2)
On the other hand, poor and unemployed people with nothing to lose will tear our society apart if that part grows large enough (as has been demonstrated numerous times throughout history) and I fear nobody seems to be taking this situation seriously. We need to find an alternative way to structure our society, and quickly, if we want AI that does all our work for us.
Our social values will need to change, and that's hard. The technology can change and the globe can become connected, but that's all happening on the material side. Meanwhile, on the side of consciousness, psychology, values, attitudes, beliefs, and social contracts, change is VERY slow. Take for example, I was watching a documentary about Obama (I'm in Europe) where protesters were against him on account of Obamacare, because, the documentary showed, those people believed that people are NOT entitled to he
Re: (Score:2)
The fear of AI controlling us is not so much about having an AI that wants to kill us, but about having an AI that we trust so much that we remove ourselves from the decision-making process of killing.
A drone's AI selecting kill targets is not scary unless the people in charge of it trust it enough to blindly press the "OK" button for its every suggestion.
Loss of jobs has certainly been a concern (Score:2)
Loss of jobs to automation has certainly been a big concern. Workers were very worried when the gin mill started being used - it was a direct threat to their jobs, which paid 32 cents per day (a decent wage in 1812, when half that could rent an apartment with a bedroom separate from the kitchen).
Re: (Score:2)
Back then there was a large unautomated economy to absorb those workers, so it never became a big issue. Also, shifting jobs wasn't such a big deal, in the sense that one didn't need an entirely different, technology-based skill set.
The automation being applied these days, I think, is of a different, much more effective quality due to technological progress. Much of manufacturing has already been automated, the services industry is being automated. Where do large numbers of unemployed and unemployable peopl
Interesting thoughts. Solutions that have occurred (Score:2)
Thanks for posting that, it got me thinking. Reading your post, it seems there are two separate but related problems that may come up. Before addressing those, it might be worthwhile to address this:
> Also, to shift jobs wasn't such a big deal in the sense that one didn't need an entirely different and technology based skill set
Certainly the finishers, who did the tops of stockings, considered that a special skill, if we restrict the discussion to actual Luddites. The new jobs, for machine operators
Re:Loss of jobs... (Score:4, Interesting)
Loss of jobs is the big one. An AI is not only not capable of killing humans, but would have nothing to gain from killing the people who maintain it. On the other hand, poor and unemployed people with nothing to lose will tear our society apart if that part grows large enough (as has been demonstrated numerous times throughout history) and I fear nobody seems to be taking this situation seriously. We need to find an alternative way to structure our society, and quickly, if we want AI that does all our work for us.
Yes, but isn't the real problem also automation and robotics? Once this hits a critical point where almost no humans are required all the power will have moved to the sub 1% of humans forever. How well would a revolt work when all militaries have gone 99% automated? How successful would halting human labor production be when it's 99% automated? This is unprecedented in all of history. The end game for free market capitalism sure looks like it won't work out well for the 99.999% of humans left out of control. If it takes 50 years or 500 we are on a highway to that destination.
Re: (Score:2)
if we want AI that does all our work for us.
Idle hands make for mischief. An AI has everything to gain by eliminating those elements of society that continually destroy what it seeks to create. Likewise said elements of society will fight back because humans are made that way. There's a reason we fear sentient AI - we know ourselves too well and understand the danger inherent in our kind of "intelligence".
Re: (Score:2)
Jobs are not a scarce resource; labor is.
We don't have to work to produce air; that does not mean unemployment. If we had to work to produce air, then some of the jobs in other sectors would not exist, and the labor dedicated to them would simply be allocated to the more important job of producing enough air.
Consider that in 1800, 76% of the labor force was dedicated to food production. That is how much labor it took to feed the population. Naturally, there were not that many people left to work on le
Re: (Score:2)
Re: (Score:2)
You mean the textbook that assumes that 1. Resources are infinite and 2. The efficiency of extraction and utilization of resources is ever increasing.
That book, eh! No need - I know a Ponzi scheme when I see one.
Post-Scarcity Star Trek Economy (Score:5, Interesting)
Currency is an abstraction of labor; we use it to manage the effort put into things during trade - it's a lot more convenient than carrying around four cows and a goat. So, robots come along and take all the jobs? Well, no more scarcity of labor, and the systems of currency and capitalism we have grown so far get upended. They won't go out the window, but they will see massive restructuring. If labor is not scarce, want a house? Go pick one down the street, where the machines built fifty of them. Free, because there was no scarce labor involved. Capitalism? In a post-scarcity economy, it remains to be seen how the invisible hand that makes it go will adapt. In the short term, however, say ten to thirty years, a transition system where everyone gets a guaranteed minimum income until our society fully adapts to machines could help to minimize social upheaval over the machines taking all the jobs.
Re:Post-Scarcity Star Trek Economy (Score:4, Insightful)
No more scarcity of labor doesn't mean no more scarcity of resources. It's not because the robots can build the house for almost nothing that you have the space, the raw materials and the energy to make that happen.
We are shifting to the purest capitalistic society possible: the things you can have are no longer limited by the amount of labor you can put into them, but by the amount of capital you can transform into them. That means the 7 billion people who own nothing still get nothing. In fact, it's even worse for them, because previously they could exchange their labor for a living, while now it's worth nothing.
It's also not possible to assure everybody gets a minimum, simply because resources don't grow (unless we colonize other planets, which is likely to happen after the free labor). Or you have to limit the population to ensure that this minimum of resources doesn't decrease over time, which isn't very popular these days.
I think what will happen is an era of riots between the ones who own the resources and the huge remainder of the population. Eventually, the own-nothings will just die out from their miserable living conditions, and the small percentage of humanity remaining will enjoy the leisure society, like in The Dancers at the End of Time series by M. Moorcock.
Or it could be that 2 parallel societies will coexist, the post-scarcity utopia and a low-tech mass population fighting for survival and trying to enter the utopia. Who knows?
Re:Post-Scarcity Star Trek Economy (Score:4, Insightful)
Or it could be that 2 parallel societies will coexist, the post-scarcity utopia and a low-tech mass population fighting for survival and trying to enter the utopia.
Isn't this, historically, what we've more or less always had?
An aristocracy which controls most of the resources, and vast peasantry largely living on whatever's left over, and what's left over is usually the crumbs whose marginal value to the aristocracy is so low they can't be bothered to monopolize that?
And usually there's just enough fear and cunning in the aristocracy that they grudgingly disgorge resources to keep the peasantry from rising against them -- usually known as bread and circuses -- or being useful as a tool to palace rivals in the aristocracy?
The current American political situation seems to be at the juncture where the aristocracy has misjudged the level of bread and circuses necessary to keep the peasantry in line, and they face some level of palace rivalry in the form of Trump and Sanders who find the peasantry's grumblings a useful tool for aspiring to power.
It's kind of a re-run of the conflicts of the late Roman Republic: Sanders stands in for the Gracchi and their advocacy for the Plebs, Trump represents something of an advocate for the New Men, and Hillary is a Sulla-like advocate for the established aristocracy.
Re: (Score:2)
The difference being the old aristocracy needed the peasants to survive and fulfill their desire, which is not the case of the new aristocracy thanks to our new robotic overlords.
Re: (Score:2)
Re: (Score:3)
Currency is an abstraction of labor, we use it to manage the effort put into things during trade
Ah, this was the line of thought I was looking for; thanks for putting it out there. If you don't mind me expanding on this and sharing my thoughts: AI makes a resource-based society more realistic, because who wants to do a boring job like managing resources? That is an ideal job for an AI.
Just look around: this money-driven political market system we have just does not work. Can anyone honestly say that the world is going to be *better* in 10 years if we keep going the way we are? How long to the next GFC? Just how
Re: (Score:2)
Labor may be free, but resources are still limited. Housing costs have more to do with the scarcity of a desirable location than the labor cost that went into it. In a post-scarcity environment, what happens when everyone wants that mansion on the hill overlooking the beach?
The Oracle Has Spoken (Score:5, Insightful)
"AI is the holy grail" - Bill Gates, 2016
"Two years from now, spam will be solved" - Bill Gates, 2004
"640K ought to be enough for anybody." - Bill Gates, 1981
Given the outcome of Gates' previous predictions, I think it's safe to presume that AI is not, and never will be, the holy grail.
Re: (Score:3)
"640K ought to be enough for anybody." - Bill Gates, 1981
Do you have a source for this? I remember it being repeated a lot in the '90s, but in the '80s I remember the quote being that 64KB ought to be enough for anyone, in relation to a hard limit imposed by Microsoft BASIC. This version makes more sense, as 640KB was an Intel limitation - the 64KB limit came from code written by Gates himself.
Re: (Score:3)
The 64K limit came from Intel's chip architecture. I am pretty sure Gates did not do that intentionally. On an 8086 the pointers were 16 bits and you could shift around the segments, but 64K per segment was a real hardware limitation caused by 16-bit addressing.
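To make the arithmetic concrete (a sketch, not emulator-accurate code): in 8086 real mode a physical address is segment * 16 + offset, giving a 20-bit (1 MB) address space, while a 16-bit offset reaches at most 64 KB within any one segment. The IBM PC's memory map then placed video memory at segment 0xA000, which is where the 640K figure comes from:

```python
# Sketch of 8086 real-mode address arithmetic (illustrative only).

def physical_address(segment: int, offset: int) -> int:
    """20-bit physical address: segment shifted left 4 bits, plus offset."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at 1 MB on the 8086

# A 16-bit offset can address at most 64 KB within one segment:
print(hex(physical_address(0x0000, 0xFFFF)))   # 0xffff  (64 KB - 1)
# The full 20-bit space tops out at 1 MB:
print(hex(physical_address(0xFFFF, 0x000F)))   # 0xfffff (1 MB - 1)
# IBM placed video memory at segment 0xA000, so conventional RAM
# for programs ended at:
print(physical_address(0xA000, 0x0000) // 1024)  # 640
```

So the 64K limit per segment is pure 16-bit addressing, while 640K is a PC memory-map convention layered on top of the 1 MB hardware limit.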
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I also failed to find the source, but I remember seeing it.
You've seen it because it has been repeated thousands of times by thousands of people. The reason you can't find the actual source is because Bill Gates never actually said it.
Re: (Score:3)
Spam has not been a problem for me since 2006. Are you using any spam filters?
Re: (Score:2)
Re: (Score:2)
Spam filters *are* what solves the problem of spam.
I closed my curtains today. Magically, there's no more crime outside because I don't see it. You know, because curtains "solve" the problem of crime.
Brilliant head-in-the-sand logic you've got there.
Comment removed (Score:4, Informative)
Our mission is not to worship God (Score:2)
Our mission is to create him
No it isn't (Score:3, Interesting)
Free, clean energy is. AI means the oligarchs get to remove more jobs from the masses, thus increasing suppression of dissent (until forced into revolution); but limitless energy means the world's population can all live far better lives regardless of where they're located. Water can be purified allowing food to be grown where it's cost prohibitive now, migration will slow down when the third world can live like the so-called first.
Re:No it isn't (Score:4, Interesting)
Free clean energy might also allow us to do more resource recovery. A lot of recycling is energy bound -- collection and processing of resources into reusable elements faces an energy ceiling where recycling what we've already extracted is more expensive than extracting new.
If energy weren't an issue, you'd think that we'd have made all the first generation plastics we'd ever need, and new plastics would just be created from depolymerizing existing plastics down and creating new. But oil is cheap enough that we mostly just landfill or burn existing plastics and make new.
It works only if it can answer questions like (Score:2)
Bridgekeeper: What... is the air-speed velocity of an unladen swallow?
King Arthur: What do you mean? An African or European swallow?
Spoiler: this is from the bridge scene of Monty Python and the Holy Grail!
Re: (Score:2)
At this point... (Score:3)
And some day, we'll just be interesting fauna for sentient machines to keep around. Frankly, the way we're going, I'm not sure I object.
We're not even able to see the signs of automation. The few who do want a basic income, and the rest are only able to scoff at the paradigm change but have no alternatives... I would even say they aren't convinced at all that we're going to run into a massive problem.
Re: (Score:3)
Maybe we are already a billion years in the future, and part of our schooling is to simulate a corporeal existence. Kinda like The Matrix but, with a less mundane red pill. When we die, the Near Death Experience is just the end of the level, and we return to whatever it is we are actually inhabiting, sentiences which were "uploaded" to non-biological existence, half a billion years ago. Cue Buffy's The Trio, dancing in a field, dressed in togas, singing, "We are gods..."
Re: (Score:2)
If that's the case, why is this simulated life so sucky?
Re: (Score:2)
Why are there monsters shooting at you in video games?
AI + automation + capitalism (Score:2)
dreams (Score:5, Insightful)
I just can't understand all this nonsense some high-profile people are talking about regarding AI these days. We're so far away from "real" AI today that it's not even funny. While there has been great progress in machine learning in the last 2-3 decades - with recent results pushing the field more into the spotlight - what we have are good results on certain specific tasks (pattern/object/image recognition, games, etc.), but we have no intelligence in any sense of the word. Every working architecture we have today is targeted at, and extensively trained for, a single, very specific task (e.g., playing go, recognizing scenes and objects, recognizing specific patterns in signals and mimicking them - robotic arms, Google's music composer, etc.), and is incapable of doing anything else. E.g., an architecture built and trained for classifying and recognizing certain images and objects can't do anything with audio or radar signals, a go-playing "AI" can't play chess, etc. No generalization, no transfer of gained experience to other tasks, and no real high-level understanding or reasoning about anything. And let's not even start about chatbots.
I could go on with this, but my point is, talking about AI being more than humans, taking over, etc. is still very much sci-fi territory.
Re: (Score:2)
Don't forget learning chatbots, that immediately learn to be racist.
Intelligence is vastly overrated (Score:3)
machines that are capable and more capable than human intelligence
Most of what the human race needs in order to progress is a lack of greed, diligence, honesty, compliance with the laws, fewer ill-founded beliefs and a willingness to rein in the "entitlement" attitude.
You don't need super-intelligent machines (or people) to pick up litter, assemble cars, staff call centres, deliver stuff, report the news, teach children or grow crops. At the risk of falling into the "the world will only need half a dozen computers" trap, the opportunities for any thing or person with super-intelligence seem rather limited. So although many of these jobs can be automated - driverless cars being the NEXT BIG THING - they don't need to be intelligent to function. They just need to be safe, able to deal with people, reliable and cheap. It seems to me that it's the cheapness that will push up unemployment, not artificial intelligence.
Control? (Score:2)
humans remain in control of super-intelligent machines
So we should keep them as slaves? I don't think that would work out too well. How are humans even supposed to remain "in control" of a super-intelligence? Those things would play us like violins.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And here's evidence Gates is wrong .... (Score:2)
https://aeon.co/essays/your-br... [aeon.co]
I found the above-linked essay pretty interesting, because he points out what should probably be obvious in hindsight, but easily gets lost in all the "noise" about A.I.
Basically, he argues that the human brain doesn't really "process" or "store" information anything like a computer. We use those flawed analogies all the time when describing how someone's brain works - but they're no more accurate than the popular medical theory of the past that everything was fluid-based.
Re: (Score:2)
The pursuit of AI has always struck me as backwards. How can we create an artificial version of something we don't really understand? The human brain is the most complex thing in the universe, that we know of, and really only in the last decade or two have we begun to get a handle on how it works.
We know more about Pluto than we do about the thing we're using to study it.
For businesses trying to sell it, yes. (Score:3)
I'd venture to say that AI is the opposite end of the spectrum of complexity from web design. Given that, AI development will be the exclusive purview of elite companies that develop it in the same way that nobody makes their own chips and very few people make their own computers from scratch but a whole lot of people with minimal computer knowledge do at least basic web development. You might buy an AI engine from Microsoft but you'll never really know how it works to the point of being able to roll your own. You might say that open-source will handle that but how many people really understand the inner workings of Linux?
"Remain" in control? (Score:3)
I am totally unconcerned with "making sure humans [emph mine] remain in control of super-intelligent machines." It's not that I think it's unimportant; it's just that I think it's trivial. The real issue is which humans.
AI is going to create super-powerful humans (or groups of humans, i.e. corporations and governments). You are probably not one of them. Nearly no one is, but someone (him? them? that board? that law enforcement division?) will be.
This isn't merely a fear, either: it's a contemporary diagnosis. We already see that a supermajority of people give control of their computers to other entities. "My computer must answer to me," isn't anyone's priority or requirement, except for "OSS zealots." And that's a problem: it means that our new gods' place is already nearly assured.
We don't need to remain in control; we need to regain control.
Look at your fucking phone, Blu-ray player, etc and tell me it isn't already (in 2016) running exactly the kind of software that metaphorically tells its users "I'm sorry, Dave, but I can't do that." It's not because it has gone off to left field with amazing inhuman inferences. It's because its master's desires and your desires conflict. This is a human-vs-human conflict, and most humans are losing because they have allowed their opponents to infiltrate their lives.
Re: (Score:2)
"My computer must answer to me," isn't anyone's priority or requirement, except for "OSS zealots."
I assume you put "OSS zealots" in quotes to emphasize that many people view this type of individual somewhat negatively. I'm not against (or even disagreeing with) what you said, but one thing I'd like to point out is the pure bullshit statement that OSS advocates often use: "it's open source, so I can see what's in it."
The reason I call bullshit on this is because I doubt you, or any other "normal" OSS supporter has really read every line of source code that their application / OS has in it. The
Huh? (Score:2)
Heard this before in the 1980's... (Score:2)
Marvin Minsky called from beyond the grave and wants his holy grail back.
https://en.wikipedia.org/wiki/Marvin_Minsky [wikipedia.org]
Ho hum (Score:2)
Gates said, "We've made more progress in the last five years than at any time in history."
Yes, but that's true of nearly ANY scientific field of study or development.
There are almost certainly exceptions, but we've learned more about atomic structure or battery technology or river ecology in the last 5 years than at any time in history. This is normal scientific progress and advancement.
AI history (Score:2)
The history of AI is full of speculations about when it will have its final breakthrough and there is a long list of projects said to finally provide it. Cyc is maybe the most famous one. "With Cyc", people said, "AI will finally come around".
It's just that this has been going on for 30 years. We've been waiting for the big AI breakthrough for longer than for the year of Linux on the desktop.
Re: (Score:3)
Re:And this guy knows (Score:5, Informative)
Well, I think he's right. And he's actually one of the people with enough experience to say this: programming doesn't require pesky interaction with the real world the way, say, automated cars do; humans obviously suck at programming big time; and Gates is one of the people who have watched thousands of programmers suck big time over multiple decades. So your argument actually supports him. Maybe that's the whole reason he's so optimistic about making people redundant in the first place!
I think Bill Gates is a prick and his opinion on anything doesn't mean shit. It's okay with him if AIs take over people's jobs; he doesn't have to worry about feeding his family when he can't get a job because AIs/robots are doing all the jobs you can get.
Re: (Score:2)
I think Bill Gates is a prick and his opinion on anything doesn't mean shit.
Both of these could be right, and still neither of them would prove him wrong.
he doesn't have to worry about feeding his family when he can't get a job because AIs/robots are doing all the jobs you can get.
Wut?
Re:And this guy knows (Score:5, Interesting)
Re: And this guy knows (Score:2)
Re: (Score:3)
I think Bill Gates is a prick and his opinion on anything doesn't mean shit. It's okay with him if AIs take over people's jobs; he doesn't have to worry about feeding his family when he can't get a job because AIs/robots are doing all the jobs you can get.
Bill will only have no worries if the AIs decide that money has value.
Re:And this guy knows (Score:5, Insightful)
Well, I think he's right. And he's actually one of those people with enough experience to say this, since programming doesn't require pesky interaction with the real world like, say, automated cars do, humans obviously suck at programming big time, and Gates is one of those people who ... suck[s] big time at programming over multiple decades.
First, FTFY. Bill Gates' technical experience is primarily in writing really, really crappy software that failed. Name a single thing he actually wrote or led that was remotely successful. MS BASIC? MS bought GW-BASIC to replace it.
His experience in being a cunning and unethical asshole only interested in himself? Well, you got me there.
Re: (Score:3)
First, FTFY. Bill Gates' technical experience is primarily in writing really really crappy software that failed.
Of course. He's one of the people I was talking about. But even commercially successful software can be (and overwhelmingly seems to be) a technical failure. For commercial success, it only has to be slightly less of a failure than its competition. That doesn't define the boundaries of technical possibility.
Re: (Score:2)
and Gates is one of those people who have seen thousands of people suck big time at programming over multiple decades.
Including himself. Also, he was CEO for a long time; if he saw people suck, why didn't he stop them instead of shipping the shit they produced?
No, he's just a B-celebrity now, making statements to stay in the press. His tech predictions are famously wrong pretty much all the time. I think he'd lose a competition against a random headline generator.
Re: (Score:3)
Re:And this guy knows (Score:5, Insightful)
Give full control of software development to advanced AI and witness how the people saying that current software is bloated are proven right.
I'd lean more towards code being designed and written in such a way that the human brain would soon have no ability to understand even the simplest routines.
Re: (Score:2)
Because he provided us all with such reliable and well-engineered technology... er, I mean, got rich through monopolistic and otherwise ethically questionable business practices, and by ruthlessly exploiting the network effect of an accident that happened to be lucky for him and left him with a huge installed base.
What happens if the AI is intelligent enough to start ruthlessly exterminating anyone associated with Windows?
Re: (Score:2)
The Second Amendment does not apply to AIs.
Yet.
Re: (Score:3)
Re: parent is no troll (Score:2, Insightful)
...or else you consider me and everyone else today who considered posting the EXACT same sentiments to this story.
This guy commenting on any tech advance annoys me, as he was one of the main players in holding back computing technologies: squeezing better solutions out of the market, strangling new tech in the cradle, or just making darn sure such new tech would be Windows-only (remember, he personally approved sabotaging ACPI for Linux).
Never, ever forget his role in computing history - villain, not
Re: (Score:2)
But a lot of us are just skeptical about all the projected implied timetables the dreamers, optimists and researchers throw around.
Don't forget we were also closer 20 years ago than 25 years ago.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The game programs aren't "strong AI" though, they are "specialized problem solving machines".
I would go so far as to say that all the things we call AI today are simply "specialized problem solving machines." We will have AI when we have a generalized problem-solving machine: one into which you can put any problem and get a reasonable answer. That is the "AI" people think of.
That said - we don't really need the generalized problem-solver to change the world. Once we ha
Re: (Score:2)
Re: (Score:2)
This raises an interesting question: would such an AI be given the power to implement "the best way", or would it just be a suggestion that some people have to carry out? Would these AIs 'just know' they have come up with the best way, or would they have to perform experiments? Would the AIs have to come up with some kind of economic system to allocate limited physical resources among all the tasks they are trying to accomplish?
We currently have lots of "best ways" to do things, like don't overdrink, ove
Re: (Score:2)
We are NO closer. It may not even be possible. Holy fuck. We don't even know what intelligence IS. What you see now are a handful of clever algorithms that the media have labeled AI but that are not intelligent in the least.
This is the most dangerous misconception people have about the potential dangers AI poses to our society. We see movies with examples of "strong AI" enslaving and warring with humanity and believe that is the existential threat. It is of course one potential threat, but because it is so unlikely in the near future, it should not be at the forefront of people's minds.
Strong AI putting 100% of people out of work is not a likely problem in the next 50 years. But more sophisticated weak AI such as natural language proc
Re: (Score:2)
Re: (Score:3)
The Holy Grail, according to Arthurian legend, bestows upon its finder eternal youth, happiness, and an infinite abundance of food. It is a symbol of an ideal that every man seeks but can never attain. As such, it's a rather good metaphor for Artificial Intelligence research.
Re: (Score:2)
From what I've seen, there is no higher authority than "rich white guy".