United Kingdom AI

UK To Host AI Safety Summit at Start of November (ft.com)

The UK government will host a summit on the safety of artificial intelligence at the start of November, with "like-minded" countries invited to the event at Bletchley Park to address global threats to democracy, including the use of AI in warfare and cyber security. From a report: Leading academics and executives from AI companies, including Google's DeepMind, Microsoft, OpenAI and Anthropic, will be invited to the AI Safety Summit at the Buckinghamshire site where British codebreakers were based during the Second World War. "The UK will host the first major global summit on AI safety this autumn," a government spokesperson said on Wednesday, adding that Downing Street would set out further details in due course. Prime Minister Rishi Sunak first announced in June that the UK would organise a summit on AI regulation, after a meeting in Washington with President Joe Biden.


Comments:
  • by bugs2squash ( 1132591 ) on Wednesday August 16, 2023 @01:26PM (#63772662)
    Can't the AI handle this itself? No need to involve humans, travel, etc. Just tell us the outcome.
  • by mysidia ( 191772 ) on Wednesday August 16, 2023 @01:35PM (#63772692)

    global threats to democracy, including the use of AI in warfare

    Seriously... The robots have been coming to destroy the humans for decades. There are tons of movies on the subject.

    A gun hooked up to a vehicle with software capable of running it is essentially a killing machine. So why the heck is that framed as an "AI safety" concern?

    AI is not the threat: automatic killing machines are. You cannot create a "safe" AI to run an automatic mass-killing machine, nor can you have a dumb algorithm run such a machine -- the whole thing will be unacceptably unsafe no matter what. And an AI can do you no harm if it is not put at the controls of such a machine and not relied upon to make decisions in lieu of proper analysis. It really is that simple: don't put computers in control of something important. They are tools that can help you, tools that can automate data collection or raise flags about suspicious things, but humans are the only beings who possess the capacity for rational, intelligent thought.

    It seems pretty simple to me: make it forbidden for deadly weapons to have an automatic trigger. A physical initiation device should be mandatory.

    As for "cyber security" -- good luck. Hackers always have new tools, and "AI safety" can't address the problems here either. This is another long-standing problem, and it's largely that a ton of infrastructure relies on software that isn't actually built with correctness and security as the highest priorities. Instead of focusing on correctness and a minimal feature set, companies that make products like desktop operating systems mostly put their effort into interface design, ease of use (visually appealing, real-time, point-and-click interfaces), features, and compatibility with other products and previous versions.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday August 16, 2023 @02:06PM (#63772764) Homepage Journal

    Those with differing opinions not invited to attend? Way to create a bubble.

    • That's the point. They want a bunch of like-minded people in the room so they can point to them as justification when they say: "See???? This is why AI must be tightly regulated by oligarchs... err... the government."
