US, Britain, Other Countries Ink Agreement To Make AI 'Secure by Design' (reuters.com)

The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep AI safe from rogue actors, pushing for companies to create AI systems that are "secure by design." From a report: In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse. The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

Still, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first. "This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly told Reuters, saying the guidelines represent "an agreement that the most important thing that needs to be done at the design phase is security."


Comments Filter:
  • Did they specify how to do it?

    • Re:Yes but (Score:4, Insightful)

      by Chris Mattern ( 191822 ) on Monday November 27, 2023 @09:51AM (#64034911)

Of course not. They're politicians. They have about as much idea of how AI works as your cat.

Of course not. They're politicians. They have about as much idea of how AI works as your cat.

        In other words, we should expect something similar to IoT security from the intertube gang. Great.

And protecting it from 'rogue actors' conveniently overlooks the concern regarding what our own governments will do with the technology.

... I'm an experienced dev and have a pretty good handle on most types of software and a fair amount of hardware, but I also have about as much real idea of how these ANNs work deep down as the cat. Yes, training data at the top; yes, neurons with weightings and back propagation at the bottom. But how the magic happens in between? Not a fecking clue.

        • by HiThere ( 15173 )

          Well, IIUC, current designs use tensor algebras to select probably correct choices. (Whoopi!)

          P.S.: I think that's the wrong approach, as it leads to systems that are inappropriately homogeneous. But I've never tried to build one, either.
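The "neurons with weightings and back propagation" the GP mentions can at least be made concrete with a toy example: a single sigmoid neuron trained by gradient descent on logical AND. This is only a sketch of the bottom-level mechanics, not how production models are built; those stack millions of such units, and the "magic in between" stays just as opaque.

```python
# One sigmoid neuron learning logical AND by gradient descent:
# forward pass, backprop of the squared error, weight update.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 1.0         # learning rate

def loss():
    return sum((sigmoid(w[0]*x0 + w[1]*x1 + b) - y) ** 2
               for (x0, x1), y in data)

before = loss()
for _ in range(1000):                       # training epochs
    for (x0, x1), y in data:
        p = sigmoid(w[0]*x0 + w[1]*x1 + b)  # forward pass
        g = 2 * (p - y) * p * (1 - p)       # backprop through loss and sigmoid
        w[0] -= lr * g * x0                 # gradient step on each parameter
        w[1] -= lr * g * x1
        b -= lr * g
after = loss()
```

After training, the loss has dropped and rounding the neuron's output reproduces the AND table; everything interesting about large models is what happens when you wire enormous numbers of these together.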

Of course not. They're politicians. They have about as much idea of how AI works as your cat.

        I doubt that even the foremost AI experts know how to do it. The agreement might as well recommend that the arms industry should start making guns and bombs that only hurt bad people.

        The agreement is part of that ever-popular Security Theater genre. "Don't be afraid, we're on it!"

    • by znrt ( 2424692 )

      this is the "agreement": https://www.ncsc.gov.uk/files/... [ncsc.gov.uk]

the main focus is on hardening the llm against exploits but the recommendations are pretty vague indeed. for the most part it's a short summary of generic software development good practices.
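For a sense of how generic those practices are: "protecting data from tampering" and "vetting software suppliers" in their most basic form amount to digest-checking the artifacts you pull in. A minimal sketch, with a made-up filename and digest:

```python
# Verify a downloaded artifact (model weights, dataset, dependency)
# against a published SHA-256 digest before using it. The filename and
# digest in any real use would come from the supplier's release notes.
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

None of this is AI-specific, which is rather the point of the parent comment.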

  • Oh, so you are saying that distributing pre-loaded guns for free isn't a good idea?

    But it's so convenient for our customers... We'll tell you what, we'll put safety switches on the guns... problem solved...

    • You laugh now, but one of the Manson Girls tried to shoot President Ford with a .45 automatic and failed because of a safety that she didn't know about. The recoil on that is big enough that there's a safety in the grip that won't let it fire if you're not holding it firm enough, and of course, she wasn't.
  • The article says: "The rise of AI has fed a host of concerns, including the fear that it could be used to disrupt the democratic process, turbocharge fraud, or lead to dramatic job loss, among other harms."

AI isn't limited to the chat models produced in the US. Sure, those could also be used for fake news and fraud, and there's probably not much that could be done about that, but other players could develop their own models and use them to do whatever they want, including disrupting the democratic process.

    • by HiThere ( 15173 )

      Yeah, but that's pretty much true. People only worry about things that they notice or expect to notice.

  • ...to be good. With no oversight or binding agreements.

    Mmm... sounds like "tethics(TM)." Let's see how that plays out.
  • Yeah (Score:4, Insightful)

    by stealth_finger ( 1809752 ) on Monday November 27, 2023 @10:01AM (#64034977)
    Yeah, good luck with that.
  • by NMBob ( 772954 ) on Monday November 27, 2023 @10:20AM (#64035053) Homepage
    I thought I read that it might be able to discover a cure for cancer. It won't be able to discover a way around us and our "agreements"? Let's just call it Colossus and be done with it.
    • AI will discover a cure for cancer, decide that we don't deserve it, then give us something that actually makes cancer worse. After a few dozen or more people die in clinical trials, AI will say, "Oops, my bad, bro! I guess I was hallucinating again!"

    • The "security" part they're interested in is making sure the cure for cancer doesn't get accidentally released to everyone and is only controlled by a few powerful monopolies.

It's a non-binding agreement to do something impossible.

    We wasted time and money on this when real issues are at stake.

    Remember when you vote that everyone who was involved with this is not a serious person.

    • by HiThere ( 15173 )

      You're right, but nobody knows how to solve the real problem. And countries don't do just one thing. (See today's article about the US military robotic swarms.)

      Think of this "treaty" as a consciousness raising event...because that's all it is. I'd be really surprised if they got this "treaty" through the Senate.

      • by Pieroxy ( 222434 )

        nobody knows how to solve the real problem

You cannot solve it unless every country and every one of their citizens agrees.

        So you cannot solve it.

      • "nobody knows how to solve the real problem"

        Frank Herbert knew.

        • by HiThere ( 15173 )

          Unfortunately(?) both mentats and genetic memory are fictional.

          • They were a bit overzealous about it. You could keep CPUs but ditch AI.

It would, however, take a religion to accomplish that at this point, and we need more of that even less than we need potentially hostile AI.

Because the only currently known way is to switch it off. Politicians and lawmakers again hallucinating that they control how reality works?

  • "The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers."

I thought the entire premise of 'making AI secure by design' was ludicrous until I read the above, then I realized this is pure bureaucratic comedy.

    Here's how you make AI "intrinsically secure" : DON'T EVER FUCKING CONNECT IT TO ANYTHING.*
What are the odds that anyone is going to follow that utterly basic principle? 0? Lower? Of course nobody will.

It only takes one rogue actor to totally undo anything these clowns are trying to do. I think this cat is already out of the bag; now the question is how much damage will be done, and how useful is that cat, really?
  • This seems about as useful as when the State of Indiana got involved in math in 1897 https://en.wikipedia.org/wiki/... [wikipedia.org]
  • ...to perform the futile exercise of 'putting the AI cat back in the bag'

  • Yeah... sure. We can't even say this about our current code generating systems. Another bunch of lugnuts who know nothing about how any of this works making stupid decisions that impact the rest of us.
  • So no AI until we have Positronic Brains?
Anyone else old enough to remember the early internet days, when there was concern that objectionable or malicious material or software might enter a person's home through the internet, and people demanded some sort of filtering by ISPs, OS and browser developers, etc.? Technically minded people jokingly suggested extending the IP protocol to include an "evil bit" in the packet header, indicating that a packet contains something undesirable. Criminals and other bad actors would, of course, dutifully set this evil bit on their packets.
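The joke was eventually written up as RFC 3514 (dated April 1, 2003), which places the evil bit in the reserved high-order bit of the IPv4 flags field, i.e. the top bit of header byte 6. A sketch of how a dutifully compliant bad actor would flag their packets:

```python
# The RFC 3514 "evil bit": the reserved high bit of the IPv4 flags
# field, which lives in the top bit of byte 6 of the header.
EVIL_BIT = 0x80

def mark_evil(header: bytearray) -> bytearray:
    """Set the evil bit on a raw IPv4 header, per RFC 3514."""
    header[6] |= EVIL_BIT
    return header

def is_evil(header: bytes) -> bool:
    """Firewall's job is now trivial: drop anything with the bit set."""
    return bool(header[6] & EVIL_BIT)

# A benign 20-byte IPv4 header skeleton (version 4, IHL 5, rest zeroed)
hdr = bytearray(20)
hdr[0] = 0x45
```

The "secure by design" agreement relies on the same honor system, just with more pages.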
There is no 'security by design', other than not designing AI at all. What the developers think is secure can be as leaky as an open window. We've already seen far too many 'secure by design' frameworks turn out not to be as secure as their original designers thought. There are always people (and, in future, AIs) who will think outside the box.
Secure from whom? It certainly isn't secure from their eyes.
  • Sounds about as likely as making lockpicks that can only be used by licensed locksmiths but not by burglars.
