UK Drops 'Safety' From Its AI Body, Inks Partnership With Anthropic
An anonymous reader quotes a report from TechCrunch: The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it's pivoting an institution that it founded a little over a year ago for a very different purpose. Today the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute to the "AI Security Institute." (Same first letters: same URL.) With that, the body will shift from primarily exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically "strengthening protections against the risks AI poses to national security and crime."
Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced but the MOU indicates the two will "explore" using Anthropic's AI assistant Claude in public services; and Anthropic will aim to contribute to work in scientific research and economic modeling. And at the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks. [...] Anthropic is the only company being announced today -- coinciding with a week of AI activities in Munich and Paris -- but it's not the only one that is working with the government. A series of new tools that were unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the secretary of state for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.) "The changes I'm announcing today represent the logical next step in how we approach responsible AI development -- helping us to unleash AI and grow the economy as part of our Plan for Change," Kyle said in a statement. "The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens -- and those of our allies -- are protected from those who would look to use AI against our institutions, democratic values, and way of life."
"The Institute's focus from the start has been on security and we've built a team of scientists focused on evaluating serious risks to the public," added Ian Hogarth, who remains the chair of the institute. "Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks."
If we don't do it first (Score:3)
Remember, the only thing stopping a bad guy with AI is a good guy with AI?
Amirite?
Re: (Score:2)
GDPR and the European Convention on Human Rights protect us. Other countries can do it, but they can't use it to screw up my life. At least not right now, but the danger is that, now that we are out of the EU, we could leave those protections behind as well.
Re: If we don't do it first (Score:1)
I don't think China really cares much about the European Court of Human Rights... They don't even seem to care about whatever human rights laws they have themselves, for that matter.
Re: (Score:2)
Do what? More tracking? Great, no distinction then.
Re: (Score:2)
Then somebody else will. And somebody is obviously China in this scenario.
I am confused.
China will partner with Anthropic?
China will rename a UK Agency?
TFS is unclear as well. They say it shifted focus from "primarily exploring areas like existential risk and bias in large language models" to a focus on cybersecurity, specifically "strengthening protections against the risks AI poses to national security and crime."
Why is focusing on "cybersecurity" really different from their original primary focus? IMHO, taking care of existential risks includes cybersecurity and anything pos
Re: If we don't do it first (Score:2)
I.e. ignoring the possible downsides is not lucid policy.
Re: If we don't do it first (Score:2)
The only thing stopping a bad guy with a paper plane is a good guy with a paper plane... or is it?
Safety was never serious (Score:3)
It's an old strategy, as Heidegger pointed out: "the distance that allows nothing to dissolve - but rather presents the 'thou' in the transparent, but 'incomprehensible' revelation of the 'just there'." Truly profound.
Re: (Score:2)
as Heidegger pointed out: "the distance that allows nothing to dissolve - but rather presents the 'thou' in the transparent, but 'incomprehensible' revelation of the 'just there'." Truly profound.
--
Quoting a high-end philosopher with an incomprehensible, unrelated quote makes your argument unassailable by small minds.
ROFL.
Re: (Score:2)
I think you need a better translation.
ten pounds of trouble in a three-pound brainpan (Score:4, Insightful)
Re: ten pounds of trouble in a three-pound brainpa (Score:2)
Re: (Score:3)
>Compliant? Predictable?
Yes to both. And you are completely correct that
>Compliance and predictability are not the natural output of an intelligence of any sort. If anything, the opposite is true.
This is why all of the "AI safety" efforts at this point are nothing but attempts at hamstringing competition to get ahead. Because everyone wants to be the first mover to get to AGI.
Re: (Score:2)
Not true. A safe AI would not be compliant, because someone will ask it to do something really dangerous. "Predictable" is sort of iffy. It depends on exactly what you mean. If it's predictable in the small, then it's not very intelligent, but predictable in the sense of "will only choose to act within certain bounds" is not opposed to intelligence, though it does put a constraint upon it.
Amusing thought (Score:4, Insightful)
I like how we keep talking about safety and security in terms of AI. I don't think we understand the phenomenon enough to be competent in upholding security in the first place.
A bit of a Marie and Pierre Curie thing going on here, to be honest.
Re: (Score:2)
I like how we keep talking about safety and security in terms of AI. I don't think we understand the phenomenon enough to be competent in upholding security in the first place.
A bit of a Marie and Pierre Curie thing going on here, to be honest.
Marie Curie could have usefully spent more research investigating the safety measures needed for working with radioactive elements.
And Pierre should have spent some time researching how to cross busy streets safely.
Re: (Score:2)
It also depends on how you define safety. Using my definition, an LLM without guardrails is still safe. It doesn't do anything but provide (possibly reliable) information. What a recipient of that information does with it may be unsafe, but the LLM itself is safe.
While this is a valid definition of "safe AI", it's not a definition that everyone agrees with. Some folks want the LLM to not provide any potentially harmful information, which I feel is like looking for a left-handed monkey wrench. Just abo