AI Pioneers Call For Protections Against 'Catastrophic Risks'
An anonymous reader quotes a report from the New York Times: Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology. The release of ChatGPT and a string of similar services that can create text and images on command have shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it. In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that "loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity."
If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. "If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?" Dr. Hadfield said. On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by the Safe AI Forum, a project of a nonprofit research group in the United States called Far.AI. Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors. The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.
Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
AI is stupid (Score:5, Informative)
It's way too early to be paranoid about AI risks. When AI can actually outsmart humans in science or engineering tasks, like, say, inventing a super-efficient battery, a flying car, a fusion energy plant design, or a cure for cancer, then we can discuss the risks of AI. AI has access to all our knowledge of physics and chemistry, so why can't it invent a next-generation battery? Or a superconductor ... or at least tell us that superconductors are impossible. To date, all the AI we have is stupid.
Re: (Score:2)
*room temperature superconductors
AI pioneers want an out in the history books (Score:2)
This is AI's early proponents protecting their legacy and 'distancing' themselves from any blame should there be an AI-induced catastrophe, war, or bioweapon.
This is the Alfred Nobel thing again: inventing dynamite without realizing it could be used in war, then establishing the Nobel Prize foundation to forever promote good news about Alfred Nobel and play down the linking of Nobel, dynamite, and weapons of war in the news.
Re: (Score:2)
Hence my statement that AI is stupid.
Re: (Score:2)
Re: (Score:2)
Umm ... ok ... some of us are still a lot better than AI. Humans invented the computer, the iPhone, rockets, cars, TV, Facebook, Google, AI itself, etc. Let me know when AI can come up with a useful invention.
Re:AI is stupid (Score:5, Informative)
Let me know when AI can come up with a useful invention.
AI has designed antennas that are superior to human designs. Evolved antennas [wikipedia.org]
AI has made drug discoveries, including Halicin [wikipedia.org].
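The antenna result is a good picture of what machine "invention" currently looks like in practice: an evolutionary search loop scored by a simulator. A minimal sketch of the idea in Python; the six-angle encoding and the fitness function here are toy stand-ins, since the real work scored candidate wire geometries with an electromagnetic (NEC) simulator:

import random

def fitness(design):
    # Toy stand-in for the real objective: NASA's version scored each
    # candidate geometry with an electromagnetic simulator.
    return -abs(sum(design) % 360 - 180)

def mutate(design, rate=0.2):
    # Randomly perturb some of the wire-segment angles.
    return [g + random.gauss(0, 10) if random.random() < rate else g
            for g in design]

# Each design is six wire-segment angles; evolve for 200 generations,
# keeping the 10 fittest and refilling the population with mutants.
population = [[random.uniform(0, 360) for _ in range(6)] for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best design:", max(population, key=fitness))

The search itself is dumb; all the domain knowledge lives in the simulator that scores each candidate, which is why the results can still surprise human designers.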
Re: AI is stupid (Score:2)
Re: (Score:2)
Also, per this thread, AI has excelled at the "that's so stupid, that doesn't count" types of intelligence. For example, calculating ballistic trajectories (we used to have people do that, slower and worse, people like Schwarzschild, who solved an equation Einstein thought unsolvable). AI can read license plates faster and cheaper than any human thought possible, and can even do face and gait recognition. AI allows an idiot with a spreadsheet to replace a dozen accountants. AI can beat the human world chess champion.
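For context, the trajectory work those human "computers" did was numerical integration of exactly this kind; a rough Python sketch, with all constants illustrative rather than taken from any historical firing table:

import math

# Euler integration of a projectile with quadratic drag -- the kind of
# firing-table computation that rooms of human "computers" once did by hand.
g = 9.81                      # gravity, m/s^2
k = 1e-4                      # illustrative drag coefficient per unit mass, 1/m
vx = 800.0 * math.cos(math.radians(30))
vy = 800.0 * math.sin(math.radians(30))
x = y = t = 0.0
dt = 0.01                     # time step, s

while y >= 0.0:
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt              # drag opposes motion
    vy -= (g + k * speed * vy) * dt        # gravity plus drag
    x += vx * dt
    y += vy * dt
    t += dt

print(f"range ~{x:.0f} m, flight time ~{t:.1f} s")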
Re: (Score:1)
And when a near-future coding A.I. is trained on every bit of code and every tech spec out there, and some kid asks it to "find a way to disable the power grid in California," and it succeeds, because it is a million times more capable as a hacker than any human dev... what then? After that happens, you then think, "oh wait, I see what you mean now"? Bit late?
You say you think it's unlikely? Because, by some miracle, you personally know the true limits of LLMs in a couple of years - or because you like AI,
Re: (Score:1)
Re: (Score:2)
Why try to use an LLM to do the job of a GPS drone autopilot?
Re: (Score:2)
> AI has made drug discoveries, including Halicin [wikipedia.org]
Yes, for the time being it still appears to be an experimental drug.
"AI excels in processing and analyzing complex data, streamlining the drug development process by shortening research times ... Continuous induction experiments demonstrated that bacteria did not easily develop resistance to halicin. Sequencing of halicin-resistant mutants revealed that the primary mutations were concentrated in three functions: bacterial protein synthesis,
Re: AI is stupid (Score:2)
Re: (Score:2)
Yes, and 92% of all statistics are made up on the spot.
Re: (Score:3)
Because that's not how you invent the next-gen battery. Instead, you test many ideas in the lab. An LLM has no lab. Humans without labs can't do it either.
Re: (Score:2)
Umm, assuming that's true, maybe it can propose some experiments? Grad students around the world must have already asked ChatGPT what to do. It's been a year or two since ChatGPT became a thing ... still no next-gen battery breakthrough, and no "it's impossible with the known laws of physics" statement either.
Also, your premise is a bit flawed that intelligence (actual intelligence, not LLMs or Stable Diffusion or whatever) can't predict the properties of materials based on its knowledge of elemental properties and materials science.
Re: (Score:2)
An LLM is not a tool for speculative chemical simulations.
There's far more to AI than LLMs.
Re: (Score:1)
OpenAI's o1 model is capable of reasoning, which takes time for it to "think", but theoretically it could think for hours, days, weeks, months, etc. and come up with an even better answer.
The next step is an AI agent, which is one step closer to being able to autonomously design something. Software will likely be one of the first tasks for AI agents, where an agent can work together with itself, "thinking", to develop software. This can also be used to develop and eventually simulate experiments. AI
Re: (Score:2)
Re: (Score:3)
One of the biggest problems with AI in the short term is that people who don't know any better get too impressed by AI's capabilities and attribute to it properties it does not have. That has led to AI being put in charge of things it shouldn't be.
It has led to layoffs, because managers get fooled into believing that fewer workers using AI can do as much work. But AI can only help so much, and the output is often of low quality.
There have been companies applying AI technology to sto
Re: (Score:3)
But seriously, these kinds of messages are mostly PR & marketing to investors & potential clients to keep the VC capital flowing. If you hadn't noticed, AI hype is a huge & very lucrative business; far more than AI itself which is losing money hand over fist.
I guess there are also the AI TrueBelievers(TM) who think that the AI singularity is just around the corner & that they need to "fake it till they make it." More level
Re: AI is stupid (Score:2)
That nicely sums it up. Also nicely explains why the hype is kept alive by more and more insane claims.
Re: (Score:2)
Famous last words. (Score:2)
I would say that the AI experts warning of the dangers associated with AGI - and those are quite few - know a thing or two more than you about AI. So by and large I think it is _very_ wise and opportune to listen to them and get a move on with feasible AI security, on a global scale.
AI is very much like bio-engineering pathogens - you only have to get it wrong once and humanity is screwed. Epic style.
We better not f*ck this up and should tread very, very carefully. I don't want humanity's last tweet to be "Chat
Re: (Score:2)
I would say that the AI experts warning of the dangers associated with AGI - and those are quite few - know a thing or two more than you about AI.
I would say appealing to meh credentials rather than focusing on the alarming lack of objectively supportable claims being made by "expert" doomers is a problem.
We now have incredibly smart and talented people forming groups peddling delusions about ways they are going to "control" super-intelligence when the whole world just got a front row seat to simple human greed / "power seeking" corrupting OpenAI.
So by and large I think it is _very_ wise and opportune to listen to them and get a move on with feasible AI security, on a global scale.
What I find especially interesting is that doomers never seem to come to the conclusion that, you know, it's better ju
Re: (Score:2)
It's way too early to be paranoid about AI risks. When AI can actually outsmart humans in science or engineering tasks, like, say, inventing a super-efficient battery, a flying car, a fusion energy plant design, or a cure for cancer, then we can discuss the risks of AI. AI has access to all our knowledge of physics and chemistry, so why can't it invent a next-generation battery? Or a superconductor ... or at least tell us that superconductors are impossible. To date, all the AI we have is stupid.
AI doesn't need to be self-aware or in any way "intelligent" to pose risks! Take, for example, a self-driving car. Definitely not self-aware or outsmarting humans, but still an AI application, and certainly quite risky from, e.g., a pedestrian's perspective.
Re: (Score:2)
It's way too early to be paranoid about AI risks. When AI can actually outsmart humans in science or engineering tasks, like, say, inventing a super-efficient battery, a flying car, a fusion energy plant design, or a cure for cancer, then we can discuss the risks of AI.
There is a risk that you are not considering here: Stupid people in power using stupid AI to make stupid decisions that affect the Real World(TM).
Slashdot readers call for protection against dupes (Score:2)
Dupe, still on the main page even... get it together, editors.
Re: (Score:2)
Yeah, it even has the same title [slashdot.org], and references the same NY Times article. There are dupes and DUPES.
Re: (Score:2)
That should not require AI; a simple comparison of the links should be sufficient. My guess is that the second submission came in before the first one had passed meta-moderation and that they don't test for that case. What they should do is check newly "published" articles against the submission queue and reject pending submissions that match.
I'm not sure I have ever seen two identical submissions before; normal dupes reference two different articles with pretty much the same content.
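The link check described above really is only a few lines. A rough sketch in Python, with made-up story/submission field names ("links"), since Slashdot's actual pipeline is unknown:

from urllib.parse import urlsplit

def normalize(url):
    # Strip scheme, "www.", query string, and trailing slash so that
    # superficially different links to the same article still match.
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")
    return host + parts.path.rstrip("/")

def reject_dupes(published, queue):
    # Drop any pending submission that links to an already-published story.
    seen = {normalize(u) for story in published for u in story["links"]}
    return [sub for sub in queue
            if not any(normalize(u) in seen for u in sub["links"])]

Running reject_dupes against the pending queue each time a story is published, as suggested above, is the same check run in the other direction.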
It worked in 1139 too (Score:5, Insightful)
Fortunately society never changed and we all still live as serfs in the fields.
Also, bans don't work, so this is stupid.
Re: (Score:2)
With all due respect
There's a massive complexity difference here, and you don't even understand how a crossbow works.
The parent literally explained exactly how a crossbow “works” to defeat armor, and the risk society saw in that.
With all due respect, learn to fucking comprehend what you read. Crossbows aren't black magic, for fuck's sake. It's physics.
Re: (Score:2)
Re: (Score:2)
So don't ban child porn or murder because they still happen? I'd think they would happen a lot more if they weren't banned. You don't expect 100% compliance, but you avoid a free-for-all.
I've stopped taking the "bans don't work" bait. It seems like, for people saying it, "work" means 100% compliance, as if anything less makes the ban worthless (because, you know, a 97% reduction with 3% of people finding a way around the ban makes the ban worthless /sarcasm). It's an untenable position at best, but you aren't likely to persuade anyone who already holds it, since clearly reasonability wasn't a huge factor in staking out the position in the first place (although I wish you luck in any future attempts).
Lock it up (Score:5, Insightful)
Re: (Score:1)
Good laugh but who the hell would vote Insightful?
Re: (Score:3)
Good laugh but who the hell would vote Insightful?
I am not one of those who gave him an insightful mark, but it may be insightful because it's a literal interpretation of the way our governments have been handling the situation. They go to these for-profit companies to get all their information, and then use that information to try to lock the market against any competition. The government in the US has always been a fan of asking the foxes and wolves how best to protect the chickens and cattle, but this one is hitting a little sour for me. What could very well en
This is a dangerous precipice (Score:3)
a set of red lines and warning signs, such as if an A.I. system could copy itself
Technology has come a long way, but if anyone ever took the next step of discovering a way to read a bit and then write the same bit back out again, that would be tantamount to the end.
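To spell the joke out: taken literally, "copying itself" has been a one-liner for as long as programs have lived in files. Presumably the proposed red line means doing so autonomously, against its operators' wishes; the trivial version, sketched in Python, looks like this:

import shutil

# A script that "copies itself": the scary-sounding red line, taken
# literally, is one standard-library call. The hard (and hypothetical)
# part is doing this autonomously and covertly, not the file copy.
shutil.copy(__file__, __file__ + ".copy")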
Mom (Score:1)
Anyone else hearing (Score:3)
...an echo [slashdot.org]
Good job catching the dupes there.
Save us from ourselves! (Score:3)
Re: (Score:2)
This sounds to me like: We cannot be expected to make a profit and safeguard humanity from ourselves at the same time, so we need the government to set the rules for AI so that we're not to blame no matter what we do or invent!
Nothing that noble even. This is, "We think we're on the cusp of something. Best pull the ladder up behind ourselves before anybody else climbs aboard."
You know what happens when you try to pull up a ladder you're still standing on? Two possibilities: 1) Nothing. You look like an idiot trying to jerk on the ladder you're standing on. 2) You enlist help, and the dumb motherfucker knocks the ladder over with you halfway up it.
These tech-bros are looking squarely at scenario number 2. They're asking the dumb
slashdot needs AI to detect the dupes (Score:2)
slashdot needs AI to detect the dupes.
the only risk is it'd put the editors out of a job.
I know this one! (Score:3)
"If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?"
Ghostbusters!!
\o/ (Score:3, Insightful)
If you're a pioneer of a field you think poses a danger to humanity, do you:
a) keep profiting from it whilst making half-hearted calls for others to stop you OR
b) just stop
Re: (Score:2)
c) Brag that your particular implementation of AI is so wonderful and powerful that it's dangerous and should receive free advertising and much profit. Also, there's a desperate need to strangle your competitors, via legislation.
Re: (Score:1)
Is this enough of a standard practice that it's taught in business schools, or is it more of an 'advanced technique' one picks up on the job?
What would the class code for Anticompetitive Techniques be?
Re: (Score:2)
It's a fairly standard practice, called regulatory capture. However it's only available to those who can own or rent a legislator.
Re: \o/ (Score:1)
Perhaps this is an area for disruption - to reduce the gap in availability to access to "Sponsored Representation" - anyone looking to build a startup? C2aaS - Congress Critter as a Service
How responsible! (Score:4, Insightful)
Nations are stockpiling tens of thousands of nuclear bombs, enough to kill humanity several times over, and working on chemical and biological weapons. Humanity has spent decades causing the biggest mass extinction of species in millions of years, literally destroying what makes Earth a livable planet for us. Humanity has spent decades changing the climate, and not in a good way. Humanity has spent decades utterly polluting and damaging the environment.
And these dudes are getting concerned because we have improved our language models?
Could they please look up what a language model is?
More fear marketing (Score:1)
Your biggest risk is losing money investing in this quest for the techno unicorn. Give it up already.
Government knows best. (Score:2)
Oversight of AI sounds like a good idea on paper. But in reality it's fucking delusional, so question who is selling that concept.
Government can simply make all their AI development classified. You know, for the excu, er, "sake" of national security.
Government can come in and take your patents. Take your IP. Under the excu, er, "guise" of secrecy orders.
We can pretend oversight can happen. Or we can stop bullshitting ourselves.
Sorry, Dave, I can't do that... (Score:2)
self importance (Score:1)
All these guys want to imagine their toy is some sort of global threat, because it feeds their egos. The reality is that generative AI is NOT like the Manhattan Project.
I am not even convinced you can do anything with it that a traditional targeted 'expert system' could not do; it's just that you now have a sort of off-the-shelf black box you can use quickly and cheaply (in upfront-cost terms).
Everyone really needs to chill out here, if for no other reason than that the genie is already out of the bottle. You can do this stuff fo
Manipulation (Score:2)
"...such as if an A.I. system could copy itself or intentionally deceive its creators". Whoever believes this is possible is an idiot. I'm not inclined to believe that the "AI pioneers" are idiots, so the next best guess is they are manipulating the public for funding to squander.
I know how they'll do it (Score:2)
Easy fix, we'll just put AI in charge of overwatching AI.