LinkedIn's and eBay's Founders Are Donating $20 Million To Protect Us From AI (recode.net) 74
Reid Hoffman, the founder of LinkedIn, and Pierre Omidyar, the founder of eBay, have each committed $10 million to fund academic research and development aimed at keeping artificial intelligence systems ethical and at preventing the construction of AI that may harm society. Recode reports: The fund received an additional $5 million from the Knight Foundation and two other $1 million donations from the William and Flora Hewlett Foundation and Jim Pallotta, founder of the Raptor Group. The $27 million reserve is anchored by MIT's Media Lab and Harvard's Berkman Klein Center for Internet and Society. The fund, named the Ethics and Governance of Artificial Intelligence Fund, is expected to grow as new funders come on board. AI systems work by analyzing massive amounts of data that has first been profiled and categorized by humans, with all their prejudices and biases in tow. The money will pay for research into how socially responsible artificially intelligent systems can be designed, for example to keep computer programs used to make decisions in fields like education, transportation and criminal justice accountable and fair. The group also hopes to explore ways to talk with the public about artificial intelligence and to foster understanding of its complexities. The two universities will form a governing body along with Hoffman and the Omidyar Network to distribute the funds. The $20 million from Hoffman and the Omidyar Network is being given as a philanthropic grant -- not an investment vehicle.
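To make that training-data point concrete, here is a toy sketch (not from the article; the numbers, labels and field names are invented) of how a system that simply learns from human-labelled historical decisions reproduces whatever bias those labels contain:

# Toy illustration: a "model" that learns decision frequencies from
# human-labelled history will reproduce any bias baked into those labels.
# All data below is fabricated purely for illustration.
from collections import defaultdict

# (group, human_decision) pairs; the historical labels are skewed against group "B".
history = ([("A", "approve")] * 80 + [("A", "deny")] * 20
           + [("B", "approve")] * 40 + [("B", "deny")] * 60)

counts = defaultdict(lambda: defaultdict(int))
for group, decision in history:
    counts[group][decision] += 1

def predict(group):
    # Predict the majority historical decision for the group.
    return max(counts[group], key=counts[group].get)

for group in ("A", "B"):
    print(group, "->", predict(group))   # prints: A -> approve, B -> deny

The fairness and accountability research described above is, in large part, about detecting and correcting exactly this kind of inherited bias before such systems make decisions in education, transportation or criminal justice.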
Al who? (Score:4, Funny)
Re: Al who? (Score:4, Funny)
Re: (Score:3)
Re: (Score:2)
Al Bundy. :P
Re: (Score:1)
Re: (Score:1)
The universe of The Terminator can be avoided by simply not giving Skynet the nuclear launch codes.
Re: (Score:2)
How hard is it?
If (kill_humans) do_not_do_it;
Saved you $20mm.
Great!
That 20mm is much better spent on a campaign to dehumanize the enemy.
The only way to make AIs safe.. (Score:2)
is to make sure they have no urge to reproduce or continue their existence. In fact, I would install a negative urge to reproduce, just to be sure.
Self replication and a desire for continued existence are the only thing that might motivate AIs to wipe us out.
Oh, and it might be nice to install a desire to never harm us.
As for preventing us from harming ourselves... fuck off, you nanny state wanker.
No way (Score:4, Insightful)
There's no way to make AI safe, for exactly the same reasons there's no way to make a human safe.
If we create intelligences, they will be... intelligent. They will respond to the stimulus they receive.
Perhaps the most important thing we can prepare for is to be polite and kind to them. The same way we'd be polite and kind to a big bruiser with a gun. Might start by practicing on each other, for that matter. Wouldn't hurt.
If we treat AI, when it arrives (certainly hasn't yet... not even close), like we do people... then "safe" is out of the question.
Re: (Score:2)
Except that it's moral to create an AI with no ability to act on the physical world. With humans, not so much.
Please explain your assertion (Score:2)
I would have to accept whatever justification you might have as to why you think it would be moral to create an intelligence with such limitations, or kept to such limitations once created. It's possible I might accept such a thing, I suppose, but at this point I'm simply coming up with a blank as to how this could possibly be acceptable.
How is it acceptable to imprison an intelligence for your own purposes when that intelligence has offered you no wrong? The only venues I've run into that kind of reasoning
Re: (Score:1)
Re:The only way to make AIs safe.. (Score:4, Interesting)
The most likely reason for an AI to kill you is that its designer/operator/owner/cracker instructed it to do so. And believe me, there are people who want to see you dead, no matter who you are or what you do. Once AIs are capable enough to autonomously control an armed combat robot unit, such units will be built, with the usual reasoning that it's just for our safety and because "it's controlled by us, and we are the good ones". And then one day somebody will decide to have it go against you. It might even be an accident/misunderstanding/prank.
Re: (Score:2)
We already have semi-autonomous killing machines. It's not a big stretch to get to fully autonomous killing machines (tech wise, if we can have a self driving car, then we're there - though it'd be scary as hell and a bad idea to use it just yet).
If we get to autonomous ones, we can fight them back if they go haywire so long as they don't have a desire for continued existence and/or the ability to self replicate. I may be reading into it some, but I think that was GP's point. Not to get to 100% safe, becaus
Re: (Score:2)
is to make sure they have no urge to reproduce or continue their existence. In fact, I would install a negative urge to reproduce, just to be sure.
Your suggestion comes a bit late. Some types of AI are all about mimicking biological evolution: they replicate themselves with a positive urge to improve each generation, killing off the inefficient AIs and keeping only the most efficient variations of their children AIs.
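For anyone who hasn't seen this style of AI, here is a bare-bones sketch of that replicate-mutate-select loop (a toy genetic algorithm with a made-up fitness target; it illustrates the idea rather than any particular system):

# Toy genetic-algorithm loop: replicate candidates with mutation,
# keep only the fittest, repeat. The fitness target here is arbitrary.
import random

TARGET = 42.0

def fitness(x):
    # Higher is better: closeness to the target value.
    return -abs(x - TARGET)

def mutate(x):
    return x + random.gauss(0, 1.0)

population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(100):
    # Each survivor produces mutated offspring ("replication").
    offspring = [mutate(x) for x in population for _ in range(3)]
    # Selection: keep only the most efficient variations.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

print(round(population[0], 2))  # converges near 42.0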
The stupid things you can read about sometimes.. (Score:3)
I mean c'mon...someone "insert ceo/founder/idealist/rich-moron etc. here" donates money to keep A.I. civilized. Yay. As if that is gonna be a deciding factor, as if that is going to do anything. I smell tax exemptions here...
Re: (Score:2)
Yeah, right.
The recipients of this money must be patting themselves on the back though.
Linkedin ethics? (Score:3)
So... the AI will have all the ethics of linkedin... the freedom to spam every person you've ever contacted, ever?
Re: (Score:3)
Only if you're stupid enough to give them access to your email address book.
Re: (Score:2)
Except that if someone you emailed happened to share their contact list... they mine that shit, to the peril of us all.
Will probably run into the same problem as people (Score:3)
I suspect the solution here isn't to design an AI to act ethically, but to design it to act as the AI or person it's dealing with acts. Basically the tit for tat strategy [wikipedia.org] as a solution to the Prisoner's Dilemma. That gives it enough leeway to protect itself, while also creating an incentive for other AIs / people to act ethically.
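For reference, tit-for-tat is about as simple as strategies get: cooperate on the first move, then copy whatever the other player did last round. A small sketch of it in an iterated prisoner's dilemma (the payoff values are the standard textbook ones; the code itself is just illustrative):

# Tit-for-tat in an iterated prisoner's dilemma: cooperate first,
# then mirror the opponent's previous move. Payoffs use the usual
# textbook values (T=5, R=3, P=1, S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # sustained mutual cooperation
print(play(tit_for_tat, always_defect))   # retaliation limits the damage

Against itself it sustains cooperation; against a pure defector it loses only the first round and then retaliates, which is exactly the leeway-plus-incentive property described above.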
Natrual Selection for people and AIs (Score:3)
Almost correct.
People behave ethically because they need to work together. And people that are (too) unethical are ostracized. Unethical societies tend to collapse, and so are dominated by ethical ones. So Natural Selection has given us our moral values, which compete with shallow self interest to an extent that works out surprisingly well in our radically new society.
Natural Selection will and does affect AIs, even before they become intelligent enough to understand the concept. (People only understood
Re: (Score:1)
Evolutionary Selection for people and AIs (Score:2)
Brilliant points about evolution shaping morality -- thanks for making them aberglas. Two other things to consider -- other evolutionary processes and our direction going into the singularity.
There are several evolutionary processes besides conventional natural selection (including just random drift). Even just natural selection includes seemingly weird things like "sexual selection" that shape a peacock's tail because peahens think big tails are sexy proof of health and strength because they are so hard to
Re: (Score:2)
The goal of this sort of research isn't to provide general-purpose ethics for AIs; it's to figure out how to make sure they don't decide to wipe out or oppress humanity. The problem is that there's no obvious reason the intelligence level of an artificial mind is naturally limited to human equivalence. For that matter, there's no reason human intelligence is limited... but increasing our intelligence is a slow process.
Given that an AI that reaches something close to human level intelligence can then
Re: (Score:2)
Solving that problem is what this sort of research is about. For a good overview, read "Superintelligence", by Nick Bottom. It may be the most terrifying book you'll ever read.
That's Nick Bostrom. Dang autocorrect.
Isaac Asimov solved this decades ago (Score:4)
https://en.wikipedia.org/wiki/... [wikipedia.org]
A robot [AI] may not injure a human being or, through inaction, allow a human being to come to harm.
A robot [AI] must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot [AI] must protect its own existence as long as such protection does not conflict with the First or Second Laws.[1]
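Read naively, the Laws amount to an ordered veto list over a robot's candidate actions, and such a filter can only veto consequences the machine actually knows about. A purely hypothetical toy sketch (every name here is made up) of that reading, which anticipates the loophole discussed in the replies:

# Toy "Three Laws" filter over candidate actions. It can only check
# consequences that appear in the robot's knowledge, which is exactly
# the weakness the replies point out.

def allowed(action, knowledge):
    info = knowledge.get(action, {})
    # First Law: never (knowingly) harm a human.
    if info.get("harms_human", False):
        return False
    # Second Law: obey human orders that don't conflict with the First Law.
    if info.get("ordered_by_human", False):
        return True
    # Third Law and everything else: permitted by default in this toy model.
    return True

# The robot is told to carry a box into a crowd. Nothing it knows says
# the box contains a bomb, so the filter happily approves the action.
knowledge = {"carry_box_into_crowd": {"harms_human": False, "ordered_by_human": True}}
print(allowed("carry_box_into_crowd", knowledge))  # True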
Re:Isaac Asimov solved this decades ago (Score:4)
Anyone who seriously quotes the "Three Laws", a plot device that FAILED in every story it was used in, is telling the world they are a fucking moron.
Give a robot a box and tell it to walk into that large crowd. Does it break the three dumb laws? Nope. Oh shit there was a bomb in it now everyone is dead, thanks to that robot.
That is a very basic example of a workaround. A real AI could come up with a seemingly endless number of workarounds to anything we program.
No need to be rude in your response. I feel sorry for you to be such an angry person, you must be very unhappy in your life. People are so tough over the internet while safe behind their keyboards to act in ways they would never in person.
You obviously never read any of the books. While I am sure you will never read my response, the Wiki page covers this very weakness, which Asimov used as the focal point of several books:
"In The Naked Sun, Elijah Baley points out that the Laws had been deliberately misrepresented because robots could unknowingly break any of them. He restated the first law as "A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm." This change in wording makes it clear that robots can become the tools of murder, provided they not be aware of the nature of their tasks; for instance being ordered to add something to a person's food, not knowing that it is poison. Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being.[34] The Naked Sun complicates the issue by portraying a decentralized, planetwide communication network among Solaria's millions of robots meaning that the criminal mastermind could be located anywhere on the planet.
Baley furthermore proposes that the Solarians may one day use robots for military purposes. If a spacecraft was built with a positronic brain and carried neither humans nor the life-support systems to sustain them, then the ship's robotic intelligence could naturally assume that all other spacecraft were robotic beings. Such a ship could operate more responsively and flexibly than one crewed by humans, could be armed more heavily and its robotic brain equipped to slaughter humans of whose existence it is totally ignorant.[35] This possibility is referenced in Foundation and Earth where it is discovered that the Solarians possess a strong police force of unspecified size that has been programmed to identify only the Solarian race as human."
Re: (Score:2)
I've read the books. Maybe all of them, though probably not, considering how many he wrote. I don't know if your original post about the three laws was intended to be a joke, but if it was, it managed to be both rich and subtle. Be not offended by the angry nerds, our misguided fury is itself funny to those with the self awareness to recognize when we've been ... well you know https://youtu.be/xLzHj3aFCaE?t... [youtu.be]
The three laws sound rational and his stories make the programming seem reliable, but then he proc
Re: (Score:3)
Exactly, that is what was so fun about the books. They really were mysteries set in the future (like most of the best science fiction). I actually forgot, as it's been many, many years since I read them, that the key point was the breakdown of the laws, and the subsequent modifications to try to make them work. Really fun reading. I actually read all the books set in this "universe" he wrote. Truly amazing books considering they span decades of his life...
Re: (Score:2)
If you actually read his books, you would know quite a few feature clever ways those laws can be broken or worked around. I'd start by defining what constitutes a "human being". Oh, and add that pesky zeroth law, which basically says "the good of the many outweighs the good of the few".
"$5 million from the Knight Foundation" (Score:2)
I thought they solved that problem 35 years ago.
Social inertia (Score:2)
What was ethical and even honorable behavior in the past is now seen as horribly wrong. Programming an AI to behave ethically will need to include flexibility and a way to respond to changes (growth?) in society. Otherwise we get stagnation that will lead to explosive revolutions. And therein lies an attack surface.
This is about People, not AI (Score:2)
Seriously, this is not a problem with AI, it's a problem with people using AI to do bad things. There is ZERO chance you can keep bad people from doing bad things with software and hardware, I don't care how much money you spend. While I applaud the effort, it's not going to be successful.
Now if you want to educate folks on the issues, develop a moral guideline for "ethical use of AI" then great. But don't be fooled, you won't be able to force anybody who doesn't want to play along with your rules to fall i
Roko's Basilisk... (Score:3)
...is really gonna have it in for those guys.
$20 m (Score:3)
Suuure, That's a Great Idea! (Score:2)
AI is only as Bad as those that use it (Score:1)
AI will be used by both good and bad people, just like the internet, and you can't stop it, and you can't put in controls so that it only makes the decisions you want. Do we have $10 million to show that all of the teachers on the planet make good decisions for the kids they teach? No, all you can do is test them and verify that they have some sort of formal education. So I guess all the AIs will have to take an Ethics 101 course if they are going to be used in public.
Combined Militaries Donate 20 Bajillion To Make Un (Score:2)
I'm pretty sure this 20 million is nothing compared to the 20 bajillion being "donated" by militaries to make autonomous, amoral, killing machines.
It must be another slow news day on slashdot.
Twitter Crowd Sourcing Future AI (Score:2)
That's going to be a lot of solid QA (Score:1)
Roko's Basilisk (Score:2)
This money will be used to make AIs friendly. This is a very important program and you need to donate, because if you don't, things will go bad.
It is so important that future friendly AIs have to do anything in their power to make it happen, including torturing you into giving all your money to the cause if it can help in any way. They can do it even from the future by running simulations of you.
So now that you know, you had better donate too.
I'll be OK. (Score:2)