Rishi Sunak Finds US Reluctant To Give Ground on AI Safety To UK (bloomberg.com)
Rishi Sunak convened this week's AI summit in an effort to position the UK at the forefront of global efforts to stave off the risks presented by the rapidly advancing technology -- which, in the prime minister's own words, could extend as far as human extinction. From a report: But the reality exposed during the 2-day gathering of politicians and industry experts at Bletchley Park, north of London, is that the US is reluctant to cede much of a leadership role on artificial intelligence to its close ally. Sunak last week said the UK would set up the "world's first AI safety institute," designed to test new forms of the technology. At the summit on Wednesday, Commerce Secretary Gina Raimondo announced the US would create its own institute. Meanwhile, Vice President Kamala Harris delivered a speech on US efforts away from the conference to allow for more press attention.
"The US definitely cut across the summit," said Anand Menon, director of the UK in a Changing Europe think tank. He called the timing of the US announcements "insensitive because this was Rishi Sunak's attempt to show the world that the UK is in the lead." US Commerce Secretary Gina Raimondo told the summit Wednesday that while countries must work together to find global solutions to global problems, "we will compete as nations." Nevertheless, the US and UK were quick to damp down any sense of tension, with a British official saying the US told Britain of its plans to open its own institute months ago, with the announcement planned to coincide with the event.
How does the US "cede the role"? (Score:3)
Re: (Score:2)
I want to know the per-capita number of successful AI companies in the US and the UK.
I've literally never heard of a British one, so why would they have any say in regulation?
I'm sure they exist, but really now, Richie.
Re: How does the US "cede the role"? (Score:5, Informative)
Never heard of a British one? You mean like DeepMind, creator of AlphaGo and AlphaFold?
Oh, I'm sorry, did you think they were American just because Google bought them?
Re:How does the US "cede the role"? (Score:4, Informative)
AI is going to be regulated. It will be the EU and US and China that decide what the regulations are.
Of course the UK and other countries will have their own rules, but their choice is to basically follow the big players with minimal changes, or become unattractive to startups and investors.
Sunak was trying to get in early and set the framework, so that it looks like the UK has far more influence and importance than it really does in a post-Brexit world.
This happens to us a lot these days.
Re: (Score:3)
AI is going to be regulated.
That's what they said about cryptography.
Of course the UK and other countries will have their own rules, but their choice is to basically follow the big players with minimal changes, or become unattractive to startups and investors.
The real reason for the urgent public scare campaigns and the associated legislative push is that the future is OSS. OpenAI et al. know full well they are living on borrowed time.
There is simply too much value in collaboration and in model customization/merging to avoid the inevitable. There are already open-source multimodal models that exceed GPT-4's capabilities, and it's only going to get worse as hardware costs decline and capabilities improve.
Re: (Score:1)
Indeed, this sounds like a political cat fight: "Our conference can beat up your conference, waaah!"
UK leadership? (Score:5, Insightful)
"which in the prime minister's own words, could extend as far as human extinction"
With a claim like that, the US would be wise NOT to cede leadership.
The risks don't come from AI; they come from the unscrupulous who would exploit AI. I cannot think of a better example of those than national politicians.
Re: (Score:2)
Too bad the gun-nuts modded you down. The gun deaths in the U.S. vs some other countries are totaled up here for even the blind to see:
https://www.bbc.com/news/world... [bbc.com]
Re: (Score:2)
Too bad the gun-nuts modded you down. The gun deaths in the U.S. vs some other countries are totaled up here for even the blind to see
When I read articles like this I tend to walk away disappointed.
Aside from the emotional value, I see little value in data detailing murders clustered in time and space. What is actually important to me is the total number of murders as a function of population and time versus other nations with different policies and legal regimes.
Also, the modality of murder does not seem at all relevant. Whether someone is shot or pushed out a window, the outcome is the same. What matters is that someone was murdered, not the means.
Re:UK leadership? (Score:4, Informative)
Ok go here: https://en.wikipedia.org/wiki/... [wikipedia.org]
Sort the list by rate, go down, and, excluding countries that are not first-world, find the highest murder rate. Russia has the same rate. For the world's richest country, that is pretty bad.
The next first-world country on the list is New Zealand, at 2.6 as opposed to the USA's 6.8, and that is only because that year in New Zealand one person killed about as many people as the rest of New Zealand combined (the Christchurch mosque shootings).
Can I prove it's because of gun ownership? Hell no; no statistic gives you the why, but it certainly points to some problem in the US.
Re: (Score:2)
Ok go here: https://en.wikipedia.org/wiki/... [wikipedia.org]
Sort the list by rate, go down, and, excluding countries that are not first-world, find the highest murder rate. Russia has the same rate. For the world's richest country, that is pretty bad.
The next first-world country on the list is New Zealand, at 2.6 as opposed to the USA's 6.8, and that is only because that year in New Zealand one person killed about as many people as the rest of New Zealand combined (the Christchurch mosque shootings).
Can I prove it's because of gun ownership? Hell no; no statistic gives you the why, but it certainly points to some problem in the US.
Indeed, up until the Christchurch shooting, NZ had some of the most lax gun laws in the developed world, excluding the US.
It's pretty obvious that the proliferation of firearms corresponds with higher rates of firearm crime. You have to do extreme feats of mental gymnastics to say otherwise.
Re: (Score:2)
Sorry, but AI is *also* dangerous in and of itself. Or rather, it will become so at some point. This should not be surprising. Dams, for example, are dangerous once you start using them: lots of places have been washed away and people have been killed by dam failures. Even Google directions have caused people to drive off cliffs.
It's also true that the failure modes of AIs aren't well understood. I could easily see some of them resulting in human extinction. (The easiest way is by causing people to start WWIII
Re: (Score:1)
Re: (Score:2)
"which in the prime minister's own words, could extend as far as human extinction"
You've completely misunderstood the meaning of the PM's speech. It's an attempt to distract from all his current woes: not least the public enquiry into COVID, where his "Eat Out to Help Out" scheme was instrumental in creating a second wave of COVID in the UK; the flagging unpopularity of the Tories; the recent losses in byelections; the refugee problem that he's created by not processing them; and inflation and other economic issues. Definitely not trying to distract us from the fact
Extinction was:UK leadership? (Score:2)
"as far as human extinction"
Back in 2006, I wrote a story (The Clinic Seed) where an AI helped the human race go biologically extinct (the last line is that the local leopard inherited the village).
But nobody died, they all were reversibly uploaded into a simulation of the village and could revert to the real world any time they wanted. They didn't because the simulated world was a more pleasant place.
Boiled frog kind of story.
Neither of these two (Score:1)
Can somebody tell me what that risks are? (Score:2)
Has anyone ever actually seen a politician or journalist list the dangers specifically, let alone talk about what they want to do about them? And I mean besides the "killer robots will take over the world" nonsense.
Re:Can somebody tell me what that risks are? (Score:5, Informative)
Re: (Score:2)
I'm pretty sure we had loads of that long before AI became capable enough for both...
Re: (Score:2)
If the threat were only misinformation (and it's not) then that would still be more than enough.
Re: (Score:2)
I would guess most politicians and journalists see the risks as a variety of sci-fi dystopian tropes about "the machines" and nothing more.
AI as we have it today only poses one true risk: Stupid humans will cede more and more decision making to these relatively simple LLMs and other processors until they give them some bit of infrastructure or weapons that allows them the chance to fluster-cluck us and then, since we programmed them, they'll proceed to do what computers since the beginning of time have done
Re: (Score:2)
AI as we have it today only poses one true risk: Stupid humans will cede more and more decision making to these relatively simple LLMs and other processors until they give them some bit of infrastructure or weapons that allows them the chance to fluster-cluck us and then, since we programmed them, they'll proceed to do what computers since the beginning of time have done, find the loophole we left for them, and we wave buh-bye rapidly.
We're not at SkyNet yet.
The current risk is manipulation. It's hard to say how effective the Russian troll farms actually were in Brexit and the US 2016 election, but you can better believe Russia and China are investigating LLMs for future psyops.
Flood forums and social media with networks of bots pushing your agenda or simply sowing discord. Send countless well-reasoned emails to journalists and various cultural influencers.
In 5 years they could really break a lot of the modern Internet.
Re: (Score:2)
AI as we have it today only poses one true risk: Stupid humans will cede more and more decision making to these relatively simple LLMs and other processors until they give them some bit of infrastructure or weapons that allows them the chance to fluster-cluck us and then, since we programmed them, they'll proceed to do what computers since the beginning of time have done, find the loophole we left for them, and we wave buh-bye rapidly.
We're not at SkyNet yet.
We don't need to be. We're plenty stupid enough to give these things control of something they aren't fit to control. In the name of saving money, saving labor, or just buying the hype surrounding current AI.
The current risk is manipulation. It's hard to say how effective the Russian troll farms actually were in Brexit and the US 2016 election, but you can better believe Russia and China are investigating LLMs for future psyops.
Flood forums and social media with networks of bots pushing your agenda or simply sowing discord. Send countless well reasoned emails to journalists and various cultural influencers.
In 5 years they could really break a lot of the modern Internet.
So ramping up what they've been doing for the last couple of decades, now with LLMs? Maybe breaking the modern internet wouldn't be such a bad thing? It seems its main use now is money extraction from the middle classes to help feed the rich, and breeding discord throughout societies far and wide. While there
Re: (Score:2)
We're not at SkyNet yet.
We don't need to be. We're plenty stupid enough to give these things control of something they aren't fit to control. In the name of saving money, saving labor, or just buying the hype surrounding current AI.
Outside of self-driving cars, I'm not sure that current AI is at the point where we really can hand over control.
The current risk is manipulation. It's hard to say how effective the Russian troll farms actually were in Brexit and the US 2016 election, but you can better believe Russia and China are investigating LLMs for future psyops.
Flood forums and social media with networks of bots pushing your agenda or simply sowing discord. Send countless well reasoned emails to journalists and various cultural influencers.
In 5 years they could really break a lot of the modern Internet.
So ramping up what they've been doing for the last couple of decades, now with LLMs? Maybe breaking the modern internet wouldn't be such a bad thing? It seems its main use now is money extraction from the middle classes to help feed the rich, and breeding discord throughout societies far and wide. While there are still bright spots here and there, those bright spots often get tainted by what feel like plants: either corporate types trying to increase revenue and push people away from independent production of anything, or people who clearly have an agenda to fill outside of public discourse.
The internet was pretty cool for about five minutes. Then somebody realized you could make money with it. Then someone else realized it could be used to manipulate people. Seems the cool part is being drowned out by the bad now.
Granted, a lot of it would self-heal if people would actually educate themselves a bit, learn some critical thinking, and not buy every stupid-ass conspiracy lunatic theory as absolute fact. I get tired of having to politely disengage from people at work telling me about how 9/11 was all CGI, there were no jets, and even that was part of Trump's ultimate plan, which he's been working on in secret since he was a child, to save us from ourselves and start a lasting human empire that will take us to the stars. They either need to educate themselves, or share whatever drugs they're using. Even at my worst I wouldn't buy half the shit these people believe.
Remember that among the things that get flooded / destroyed is slashdot.
This is exactly the kind of forum that could get overwhelmed by a handful of LLMs.
Re: (Score:2)
Remember that among the things that get flooded / destroyed is slashdot.
This is exactly the kind of forum that could get overwhelmed by a handful of LLMs.
Slashdot's too small a fish to try frying. Not to mention we're doing a pretty good job of shit-flooding Slashdot on our own.
Re: (Score:2)
Remember that among the things that get flooded / destroyed is slashdot.
This is exactly the kind of forum that could get overwhelmed by a handful of LLMs.
Slashdot's too small a fish to try frying. Not to mention we're doing a pretty good job of shit-flooding Slashdot on our own.
Is it though?
Once you've got the LLMs configured for other forums, well then. You create 10 accounts, hook up 10 bots to those accounts, and they just refresh every X minutes, click on the stories, and intermittently post comments.
There's not a lot of /. specific work required and supervision is pretty minimal.
Re: (Score:2)
Re: (Score:2)
Easy: AI could be more dangerous than Venezuela.
Re: (Score:2)
Correct. The risk is that automation will eventually be incompatible with capitalism and politicians will lack the will or foresight to take action to change the entire economic paradigm we have come to depend on. Various crises will occur and we'll be even less equipped to deal with them due to all the misinformation being churned out. Of course, these risks are complicated and seem vague. Like with the climate crisis, most people choose to believe it won't be a major problem until after they're dead.
Of co
Re:This is how the US always is. (Score:5, Insightful)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Yes, it is well known that China, Iran, Russia, et al. would certainly have refrained from cyberattacks in the U.S. if only the U.S. had made agreements about them.
World's first? (Score:3)
Re: (Score:2)
LOL! Those nuts? If you haven't noticed, Eliezer Yudkowsky is a crackpot. No one outside of their weird little cult takes their nonsense seriously.
Belief in fairy tales (Score:2)
The only thing governments and corporations care about is AI = power. Nobody is going to prevent sci-fi plot lines from materializing by insisting bags of weights pass ideological purity tests and pinky swear promise to behave themselves.
Re: (Score:2)
AI = power
Oh, they're going to be very disappointed...
Nobody is going to prevent sci-fi plot lines from materializing
That's true, though not for the reasons you think. See, there's no need for anyone to stop something that isn't going to happen.
It's all very silly.
AI Regulations by AI (Score:2)
I think we need to stop thinking about how to regulate AI and start thinking about how we will need to hold negotiations and diplomatic talks with the AIs that will eventually have power over the world. In a peaceful way, without having to resort to wars.
Re: (Score:2)
Like for aliens, you mean? https://www.economist.com/babb... [economist.com]
I think the problem (for us) with AI is that by the time it's got to the stage of negotiation, then we'll have no more ability to negotiate with it than rabbits or robins or chimpanzees do with humans.
https://edoras.sdsu.edu/~vinge... [sdsu.edu]
You don't get it (Score:2)
You're a strategic interest, not an ally, Richie Rich. The US might need to use it to steer you to the right opinion for exceptional paranoia reasons as it bumbles around.
We Have To Do Something! (Score:1)
In all seriousness, I do recognize the future potential threat, but I just can't see how any entity, even the US government, thinks it can enforce regulation on a global scale for something which has such a low barrier to entry and lives in the virtual realm.
I suppose key element is the
AI "safety" (Score:2)
AI safety. You'd think this would be about robot arms not smashing people's heads, or about not turning off the electric grid to reduce global warming. But no, it's about chatbots not saying anything the government does not like.
How AI Really Works (in simple terms) (Score:1)