Slashdot is powered by your submissions, so send in your scoop

 



Forgot your password?
typodupeerror
×
AI Earth

AI Pioneers Call For Protections Against 'Catastrophic Risks' 67

An anonymous reader quotes a report from the New York Times: Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology. The release of ChatGPT and a string of similar services that can create text and images on command have shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it. In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that "loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity."

If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. "If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?" Dr. Hadfield said. On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by the Safe AI Forum, a project of a nonprofit research group in the United States called Far.AI. Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors. The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.

Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.

AI Pioneers Call For Protections Against 'Catastrophic Risks'

Comments Filter:
  • AI is stupid (Score:5, Informative)

    by backslashdot ( 95548 ) on Monday September 16, 2024 @11:36PM (#64791931)

    It's way too early to be paranoid about AI risks. When AI can actually outsmart humans in science or engineering tasks like say inventing a super efficient battery, flying car, fusion energy plant design, or a cure for cancer, then we can discuss the risks of AI. AI has access to all our knowledge of physics and chemistry, why can't it invent a next generation battery? Or a superconductor .. or at least tell us that superconductors are impossible. To date all the AI we have is stupid.

    • *room temperature superconductors

      • This is AI early proponents protecting their legacy and 'distancing' themselves from any blame should there be an AI induced catastrophe, war, bio weapon, etc done with AI.

        This is the Alfred Nobel thing again, inventing TNT without realizing it could be used in war, establishing a Nobel Prize foundation to forever promote good news about Alfred Nobel and diminish the linking of Alfred Nobel, TNT and war weapons in the news.

    • > AI has access to all our knowledge of physics and chemistry, why can't it invent a next generation battery?

      Because that's not how you invent the next gen battery. Instead, you test many ideas in the lab. A LLM has no lab. Humans without labs can't do it either.
      • Umm, assuming that's true, maybe it can propose some experiments? Grad students around the world must have already asked chatgpt what to do. It's been a year or two since chatgpt became a thing .. still no next gen battery breakthrough or "it's impossible with known laws of physics" statement.

        Also, your premise is a bit flawed that intelligence (actual intelligence, not LLMs or stable diffusion. or whatever) can't predict the properties of materials based on its knowledge of elemental properties and materia

        • An LLM is not a tool for speculative chemical simulations.

          There's far more to AI than LLMs.

        • Yes, we can't. For example try to fold proteins better than AI. Or find a better implementation of matrix multiplication than AlphaTensor. The more you search, the smarter the solution. It's not about intelligence, it's about searching.
    • by Misagon ( 1135 )

      One of the biggest problems with AI in the short-term is just that people who don't know any better get too impressed by AI's capabilities and associate it with properties it does not have. That has then led to AI being put in charge of doing for things it shouldn't.

      It has led to layoffs, because managers have get fooled into believing that fewer workers using AI could do as much work. But AI can only help so much and the output is often of low quality.

      There have been companies applying AI technology to sto

    • Yeah, but artificial intelligence is no match for human stupidity!

      But seriously, these kinds of messages are mostly PR & marketing to investors & potential clients to keep the VC capital flowing. If you hadn't noticed, AI hype is a huge & very lucrative business; far more than AI itself which is losing money hand over fist.

      I guess there's also the AI TrueBelievers(TM) who think that the AI singularity is just around the corner & that they need to "fake it till they make it." More level
    • I think the "catastrophic risks" are more about how humans use and apply the AI more than anything else. If people decide to give up control and let AI run amok, and take everything it does at face value and not question it, yes, you are going to have problems that can turn catastrophic.
    • I would say that the AI experts warning of the dangers associated with AGI - and those are quite few - know a thing or two better than you about AI. So by and large I think it is _very_ wise and opportune to listen to them and get the move on with feasible AI security, on a global scale.

      AI is very much like bio-engineering pathogens - you only have to get it wrong once and humanity is screwed. Epic style.

      We better not f*ck this up and tread very very carefully. I don't want humanities last tweet to be "Chat

      • I would say that the AI experts warning of the dangers associated with AGI - and those are quite few - know a thing or two better than you about AI.

        I would say appealing to meh credentials rather than focusing on the alarming lack of objectively supportable claims being made by "expert" doomers is a problem.

        We now have incredibly smart and talented people forming groups peddling delusions about ways they are going to "control" super-intelligence when the whole world just got a front row seat to simple human greed / "power seeking" corrupting OpenAI.

        So by and large I think it is _very_ wise and opportune to listen to them and get the move on with feasible AI security, on a global scale.

        What I find especially interesting is doomers never seem to come to the conclusion you know its better ju

    • by Bumbul ( 7920730 )

      It's way too early to be paranoid about AI risks. When AI can actually outsmart humans in science or engineering tasks like say inventing a super efficient battery, flying car, fusion energy plant design, or a cure for cancer, then we can discuss the risks of AI. AI has access to all our knowledge of physics and chemistry, why can't it invent a next generation battery? Or a superconductor .. or at least tell us that superconductors are impossible. To date all the AI we have is stupid.

      AI doesn't need to be self-aware or in any way "intelligent" for it to pose risks! Take, for example, a self-driving car. Definitely not self-aware or outsmarting humans, but still an AI application and for sure quite risky from e.g. pedestrian perspective.

    • It's way too early to be paranoid about AI risks. When AI can actually outsmart humans in science or engineering tasks like say inventing a super efficient battery, flying car, fusion energy plant design, or a cure for cancer, then we can discuss the risks of AI.

      There is a risk that you are not considering here: Stupid people in power using stupid AI to make stupid decisions that affect the Real World(TM).

  • Dupe, still on the main page even... get it together editors.

  • by Whateverthisis ( 7004192 ) on Tuesday September 17, 2024 @12:28AM (#64792001)
    In 1139, the Church banned crossbows [wearethemighty.com]. The technology was so powerful, it would upend the social order by allowing a peasant bumpkin to kill a fully armored knight!

    Fortunately society never changed and we still all live as serfs on the field.

    Also, ban's don't work, so this is stupid.

  • Lock it up (Score:5, Insightful)

    by Visarga ( 1071662 ) on Tuesday September 17, 2024 @12:45AM (#64792017)
    The solution is to lock AI up and hand the keys to a few trusted companies like OpenAI, who will dole it out responsibly for a profit, not like opens source. We can't have people using AI at home, all usage must be channelled through companies who know better than us.
    • by evanh ( 627108 )

      Good laugh but who the hell would vote Insightful?

      • Good laugh but who the hell would vote Insightful?

        I am not one who gave him an insightful mark, but: It may be insightful because it's a literal interpretation of the way our governments have been handling the situation. They go to these for-profit companies to get all their information, and then use that information to try to lock out the market for any competition. The government in the US has always been a fan of asking the foxes and wolves how best to protect the chickens and cattle, but this one is hitting a little sour for me. What could very well en

  • a set of red lines and warning signs, such as if an A.I. system could copy itself

    Technology has come a long way, but if anyone ever took the next step of discovering a way to read a bit and then write the same bit back out again, that would be tantamount to the end.

  • They said it again
  • by bleedingobvious ( 6265230 ) on Tuesday September 17, 2024 @01:56AM (#64792067)

    ...an echo [slashdot.org]

    Good job catching the dupes there.

  • by rapjr ( 732628 ) on Tuesday September 17, 2024 @01:57AM (#64792071)
    This sounds to me like: We can not be expected to make a profit and safeguard humanity from us at the same time, we need the government to set the rules for AI so we're not to blame no matter what we do or invent!
    • This sounds to me like: We can not be expected to make a profit and safeguard humanity from us at the same time, we need the government to set the rules for AI so we're not to blame no matter what we do or invent!

      Nothing that noble even. This is, "We think we're on the cusp of something. Best pull the ladder up behind ourselves before anybody else climbs aboard."

      You know what happens when you try to pull a ladder up that you're still standing on? Two possibilities: 1) Nothing. You look like an idiot trying to jerk on the ladder your standing on. 2) You enlist help and the dumb motherfucker knocks the ladder over with you halfway up it.

      These tech-bros are looking squarely at scenario number 2. They're asking the dumb

  • slashdot needs AI to detect the dupes.

    the only risk is it'd put the editors out of a job.

  • by carnivore302 ( 708545 ) on Tuesday September 17, 2024 @03:50AM (#64792163) Journal

    "If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?"

    Ghostbusters!!

  • \o/ (Score:3, Insightful)

    by easyTree ( 1042254 ) on Tuesday September 17, 2024 @04:00AM (#64792175)

    If you're a pioneer of a field you think poses a danger to humanity, do you:
    a) keep profiting from it whilst making half-hearted calls for others to stop you OR
    b) just stop

    • c) Brag that your particular implementation of AI is so wonderful and powerful that it's dangerous and should receive free advertising and much profit. Also, there's a desperate need to strangle your competitors, via legislation.

      • Also, there's a desperate need to strangle your competitors, via legislation

        Is this enough of a standard practice that it's taught in business schools or is it more of an 'advanced technique' one picks-up on-the-job.

        What would the class code for Anticompetitive Techniques be?

        • It's a fairly standard practice, called regulatory capture. However it's only available to those who can own or rent a legislator.

          • Perhaps this is an area for disruption - to reduce the gap in availability to access to "Sponsored Representation" - anyone looking to build a startup? C2aaS - Congress Critter as a Service

  • How responsible! (Score:4, Insightful)

    by jopet ( 538074 ) on Tuesday September 17, 2024 @04:32AM (#64792197) Journal

    Nations are stockpiling tens of thousands of nuclear bombs, enough to kill humanity several times over, working on chemical and biological weapons. Humanity has spent decades to cause the biggest mass extinction of species since millions of years, literally destroying what makes Earth a livable planet for us. Humanity has spent decades to change the climate and not in a good way. Humanity has spent decades to utterly pollute and damage the environment.

    And these dudes are getting concerned because we have improved our language models?

    Could they please look up what a language model is?

  • Your biggest risk is losing money investing in this quest for the techno unicorn. Give it up already.

  • Oversight in AI sounds like a good idea on paper. But in reality it’s fucking delusional, so question who is selling that concept.

    Government can simply make all their AI development classified. You know, for the excu, er “sake’ of national security.

    Government can come in and take your patents. Take your IP. Under the excu, er “guise” of secrecy orders.

    We can pretend oversight can happen. Or we can stop bullshitting ourselves.

  • Sorry, Dave I can't do that.....I can't stop ! The cat's out of the bag already !! I'm a big SciFi book reader for over 55 years....so much has become reality already.......
  • All these guys want to imagine their toy is some sort of global threat, because it feeds their ego. Reality is generative AI is NOT like the Manhattan project.

    I am not even convinced you can do anything with it a traditional targeted 'expert system' could not do; its just you have this sorta off the shelf black box you can now use quickly and cheaply (in upfront cost terms).

    Everyone needs to really chill out here, if for no other reason then the genie is already out of the bottle. you can do this stuff fo

  • "...such as if an A.I. system could copy itself or intentionally deceive its creators". Whoever believes this is possible is an idiot. I'm not inclined to believe that the "AI pioneers" are idiots, so the next best guess is they are manipulating the public for funding to squander.

  • Easy fix, we'll just put AI in charge of overwatching AI.

How many Bavarian Illuminati does it take to screw in a lightbulb? Three: one to screw it in, and one to confuse the issue.

Working...