Rishi Sunak Finds US Reluctant To Give Ground on AI Safety To UK (bloomberg.com) 48

Rishi Sunak convened this week's AI summit in an effort to position the UK at the forefront of global efforts to stave off the risks posed by the rapidly advancing technology -- which, in the prime minister's own words, could extend as far as human extinction. From a report: But the reality exposed during the two-day gathering of politicians and industry experts at Bletchley Park, north of London, is that the US is reluctant to cede much of a leadership role on artificial intelligence to its close ally. Sunak said last week that the UK would set up the "world's first AI safety institute," designed to test new forms of the technology. At the summit on Wednesday, Commerce Secretary Gina Raimondo announced that the US would create its own institute. Meanwhile, Vice President Kamala Harris delivered a speech on US efforts away from the conference to allow for more press attention.

"The US definitely cut across the summit," said Anand Menon, director of the UK in a Changing Europe think tank. He called the timing of the US announcements "insensitive because this was Rishi Sunak's attempt to show the world that the UK is in the lead." US Commerce Secretary Gina Raimondo told the summit Wednesday that while countries must work together to find global solutions to global problems, "we will compete as nations." Nevertheless, the US and UK were quick to damp down any sense of tension, with a British official saying the US told Britain of its plans to open its own institute months ago, with the announcement planned to coincide with the event.


Comments Filter:
  • by Valgrus Thunderaxe ( 8769977 ) on Thursday November 02, 2023 @01:58PM (#63974910)
    It's not like AI efforts in the US are controlled by the government, where it's something they can just "cede" to Britain. It's coming mostly from private industry.
    • I want to know per-capita successful AI companies for US & UK.

      I've literally never heard of a British one so why would they have any say in regulation?

      I'm sure they exist but really now Richie.

    • by AmiMoJo ( 196126 ) on Thursday November 02, 2023 @03:41PM (#63975168) Homepage Journal

      AI is going to be regulated. It will be the EU and US and China that decide what the regulations are.

      Of course the UK and other countries will have their own rules, but their choice is to basically follow the big players with minimal changes, or become unattractive to startups and investors.

      Sunak was trying to get in early and set the framework, so it looks like the UK has far more influence and importance than it really does in a post-Brexit world.

      This happens to us a lot these days.

      • AI is going to be regulated.

        That's what they said about cryptography.

        Of course the UK and other countries will have their own rules, but their choice is to basically follow the big players with minimal changes, or become unattractive to startups and investors.

        The real reason for the urgent public scare campaigns and associated legislative push is that the future is OSS. OpenAI et al. know full well they are living on borrowed time.

        There's simply too much value in collaboration and model customization / merging to avoid the inevitable. There are already open source multimodal models that exceed GPT-4's capabilities, and it's only going to get worse as hardware costs decline and capabilities improve.
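        The model customization / merging the parent mentions can be as simple as linearly interpolating matching weights from two fine-tunes of a shared base model. A toy sketch, with plain Python lists standing in for tensors and made-up layer names; real merges operate per-tensor on framework-specific checkpoints:

```python
def merge_weights(model_a, model_b, alpha=0.5):
    """Linearly interpolate two weight dicts with identical keys.

    alpha=0.0 returns model_a's weights, alpha=1.0 returns model_b's.
    Both inputs are assumed to be fine-tunes of the same base model,
    so every key names the same-shaped parameter in each dict.
    """
    if model_a.keys() != model_b.keys():
        raise ValueError("models must share the same architecture/keys")
    return {
        name: [(1 - alpha) * a + alpha * b
               for a, b in zip(model_a[name], model_b[name])]
        for name in model_a
    }

# Toy "checkpoints": one flat weight vector each (hypothetical names).
finetune_a = {"layer0.weight": [1.0, 2.0, 3.0]}
finetune_b = {"layer0.weight": [3.0, 4.0, 5.0]}
merged = merge_weights(finetune_a, finetune_b, alpha=0.5)
# alpha=0.5 gives the elementwise midpoint of the two weight vectors.
```

        The same elementwise idea underlies community merge recipes; the differences are mostly in how alpha varies per layer and how tensors are loaded and saved.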

    • by Tablizer ( 95088 )

      Indeed, this sounds like a political cat fight: "Our conference can beat up your conference, waaah!"

  • UK leadership? (Score:5, Insightful)

    by dfghjk ( 711126 ) on Thursday November 02, 2023 @02:07PM (#63974932)

    "which in the prime minister's own words, could extend as far as human extinction"

    With a claim like that, the US would be wise NOT to cede leadership.
    The risks don't come from AI, they come from the unscrupulous who would exploit AI. Cannot think of a better example of those than national politicians.

    • by HiThere ( 15173 )

      Sorry, but AI is *also* dangerous in and of itself. Or rather, it will become so at some point. This should not be surprising. E.g. dams are dangerous once you start using them: lots of places have been washed away and people have been killed by dam failures. Even Google directions have caused people to drive off cliffs.

      It's also true that the failure modes of AIs aren't well understood. I could easily see some of them resulting in human extinction. (The easiest way is by causing people to start WWII

    • AI doesn't kill people; malicious prompt engineering does.
    • by mjwx ( 966435 )

      "which in the prime minister's own words, could extend as far as human extinction"

      You've completely misunderstood the meaning of the PM's speech. It's an attempt to distract from all his current woes, not the least of which is the public enquiry into COVID, where his "Eat Out to Help Out" scheme was instrumental in creating a second wave of COVID in the UK; or the flagging unpopularity of the Tories, or the recent losses in byelections, or the refugee problem that he's created by not processing them, or inflation and other economic issues. Definitely not trying to distract us from the fact

    • "as far as human extinction"

      Back in 2006, I wrote a story (The Clinic Seed) where an AI helped the human race go biologically extinct (the last line is that the local leopard inherited the village).

      But nobody died, they all were reversibly uploaded into a simulation of the village and could revert to the real world any time they wanted. They didn't because the simulated world was a more pleasant place.

      Boiled frog kind of story.

  • If you wanted to find a place to test and act as a gatekeeper against the worst fears regarding AI, I'd suggest countries where there is less perception that policy is driven by the preferences of large corporations.
  • I have my ideas (automation displacing jobs & misinformation), but the thing is, every time I see one of these articles they never actually mention what these "risks" are. Once in a while there's some nonsense about Skynet, but I'm not a moron and I know the difference between Skynet and an LLM.

    Has anyone ever actually seen a politician or journalist list out the dangers specifically, let alone talk about what they want to do about them? And I mean besides "killer robots will take over the world" nonsense
    • by Valgrus Thunderaxe ( 8769977 ) on Thursday November 02, 2023 @02:17PM (#63974956)
      Election manipulation comes to mind in the form of fake political speech or fake journalism.
    • If the threat were only misinformation (and it's not) then that would still be more than enough.

    • I would guess most politicians and journalists see the risks as a variety of sci-fi dystopian tropes about "the machines" and nothing more.

      AI as we have it today only poses one true risk: Stupid humans will cede more and more decision making to these relatively simple LLMs and other processors until they give them some bit of infrastructure or weapons that allows them the chance to fluster-cluck us and then, since we programmed them, they'll proceed to do what computers since the beginning of time have done, find the loophole we left for them, and we wave buh-bye rapidly.

      • AI as we have it today only poses one true risk: Stupid humans will cede more and more decision making to these relatively simple LLMs and other processors until they give them some bit of infrastructure or weapons that allows them the chance to fluster-cluck us and then, since we programmed them, they'll proceed to do what computers since the beginning of time have done, find the loophole we left for them, and we wave buh-bye rapidly.

        We're not at SkyNet yet.

        The current risk is manipulation. It's hard to say how effective the Russian troll farms actually were in Brexit and the US 2016 election, but you can better believe Russia and China are investigating LLMs for future psyops.

        Flood forums and social media with networks of bots pushing your agenda or simply sowing discord. Send countless well reasoned emails to journalists and various cultural influencers.

        In 5 years they could really break a lot of the modern Internet.

        • AI as we have it today only poses one true risk: Stupid humans will cede more and more decision making to these relatively simple LLMs and other processors until they give them some bit of infrastructure or weapons that allows them the chance to fluster-cluck us and then, since we programmed them, they'll proceed to do what computers since the beginning of time have done, find the loophole we left for them, and we wave buh-bye rapidly.

          We're not at SkyNet yet.

          We don't need to be. We're plenty stupid enough to give these things control of something they aren't fit to control. In the name of saving money, saving labor, or just buying the hype surrounding current AI.

          The current risk is manipulation. It's hard to say how effective the Russian troll farms actually were in Brexit and the US 2016 election, but you can better believe Russia and China are investigating LLMs for future psyops.

          Flood forums and social media with networks of bots pushing your agenda or simply sowing discord. Send countless well reasoned emails to journalists and various cultural influencers.

          In 5 years they could really break a lot of the modern Internet.

          So ramping up what they've been doing for the last couple decades with LLMs? Maybe breaking the modern internet wouldn't be such a bad thing? It seems its main use now is money extraction from the middle classes to help feed the rich, and breeding discord throughout societies far and wide. While there

          • We're not at SkyNet yet.

            We don't need to be. We're plenty stupid enough to give these things control of something they aren't fit to control. In the name of saving money, saving labor, or just buying the hype surrounding current AI.

            Outside of self-driving cars, I'm not sure that current AI is at the point where we really can hand over control.

            The current risk is manipulation. It's hard to say how effective the Russian troll farms actually were in Brexit and the US 2016 election, but you can better believe Russia and China are investigating LLMs for future psyops.

            Flood forums and social media with networks of bots pushing your agenda or simply sowing discord. Send countless well reasoned emails to journalists and various cultural influencers.

            In 5 years they could really break a lot of the modern Internet.

            So ramping up what they've been doing for the last couple decades with LLMs? Maybe breaking the modern internet wouldn't be such a bad thing? It seems its main use now is money extraction from the middle classes to help feed the rich, and breeding discord throughout societies far and wide. While there are still bright spots here or there, those bright spots often get tainted by what feel like plants: either corporate types trying to increase revenue and push people away from independent production of anything, or people who clearly have an agenda to fill outside of public discourse.

            The internet was pretty cool for about five minutes. Then somebody realized you could make money with it. Then someone else realized it could be used to manipulate people. Seems the cool part is being drowned out by the bad now.

            Granted, a lot of it would self-heal if people would actually educate themselves a bit, learn some critical thinking, and not buy every stupid-ass conspiracy lunatic theory as absolute fact. I get tired of having to politely disengage from people at work telling me about how 9/11 was all CGI, there were no jets, and even that was part of Trump's ultimate plan, which he's been working on in secret since he was a child, to save us from ourselves and start a lasting human empire that will take us to the stars. They either need to educate themselves, or share whatever drugs they're using. Even at my worst I wouldn't buy half the shit these people believe.

            Remember that among the things that get flooded / destroyed is slashdot.

            This is exactly the kind of forum that could get overwhelmed by a handful of LLMs.

            • Remember that among the things that get flooded / destroyed is slashdot.

              This is exactly the kind of forum that could get overwhelmed by a handful of LLMs.

              Slashdot's too small a fish to try frying. Not to mention we're doing a pretty good job of shit-flooding Slashdot on our own.

              • Remember that among the things that get flooded / destroyed is slashdot.

                This is exactly the kind of forum that could get overwhelmed by a handful of LLMs.

                Slashdot's too small a fish to try frying. Not to mention we're doing a pretty good job of shit-flooding Slashdot on our own.

                Is it though?

                Once you've got the LLMs configured for other forums, well then. You create 10 accounts, hook up 10 bots to those accounts, and they just refresh every X minutes, click on the stories, and intermittently post comments.

                There's not a lot of /. specific work required and supervision is pretty minimal.

    • Easy: AI could be more dangerous than Venezuela.
    • Correct. The risk is that automation will eventually be incompatible with capitalism and politicians will lack the will or foresight to take action to change the entire economic paradigm we have come to depend on. Various crises will occur and we'll be even less equipped to deal with them due to all the misinformation being churned out. Of course, these risks are complicated and seem vague. Like with the climate crisis, most people choose to believe it won't be a major problem until after they're dead.

      Of co

  • by Meneth ( 872868 ) on Thursday November 02, 2023 @03:10PM (#63975070)
    The Machine Intelligence Research Institute [wikipedia.org] wants a word.
    • by narcc ( 412956 )

      LOL! Those nuts? If you haven't noticed, Eliezer Yudkowsky is a crackpot. No one outside of their weird little cult takes their nonsense seriously.

  • The only thing governments and corporations care about is AI = power. Nobody is going to prevent sci-fi plot lines from materializing by insisting bags of weights pass ideological purity tests and pinky swear promise to behave themselves.

    • by narcc ( 412956 )

      AI = power

      Oh, they're going to be very disappointed...

      Nobody is going to prevent sci-fi plot lines from materializing

      That's true, though not for the reasons you think. See, there's no need for anyone to stop something that isn't going to happen.

      It's all very silly.

  • I think we need to stop thinking about how to regulate AI and start thinking about how to negotiate and hold diplomatic talks with the AIs that will eventually have power over the world. In a peaceful way, not having to resort to wars.

  • You're a strategic interest, not an ally, Richie Rich. The US might need to use it to steer you to the right opinion for exceptional-paranoia reasons as it bumbles around.

  • This seems like one of those instances where everyone has their panties in a whirl over something new. This is SO dangerous we have to do something! We don't know what we should do, but something is better than nothing!

    In all seriousness, I do recognize the future potential threat, but I just can't see how any entity, even the US government, thinks it can enforce regulation on a global scale for something that has such a low barrier to entry and lives in the virtual realm.

    I suppose key element is the
  • AI safety. You'd think this is about robot arms not smashing people's heads, or turning off the electric grid to reduce global warming. But, no, it's about chatbots not saying anything the government does not like.

  • AI acts on the rules it is given and the data that it is fed. How could anything go wrong with that? [end sarcasm]
