
UK Government Urges Regulators To Come Up With Rules For AI (cnbc.com)

The U.K. government on Wednesday published recommendations for the artificial intelligence industry, outlining an all-encompassing approach for regulating the technology at a time when it has reached frenzied levels of hype. From a report: In a white paper to be put forward to Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Rather than establishing new regulations, the government is calling on regulators to apply existing regulations and inform companies about their obligations under the white paper.

It has tasked the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority with coming up with "tailored, context-specific approaches that suit the way AI is actually being used in their sectors. Over the next twelve months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors," the government said. "When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently."



  • The "AI" we are talking about today will come one way or another; I am curious what could actually be done to regulate it in a meaningful way. Maybe 3-4 generations out we have different issues, but it seems too late to do much about what is on the immediate horizon.

    The Luddites come to mind...

• Like anything, there are at least three (if not more) aspects to "AI": the development of the technology, the use and application of it, and the distribution.

      Regulation could, hypothetically, target any or all of the above.

If we're speaking about the development of the technology, then I think most on /. would agree that Pandora's box has been opened. You can regulate the development in your own jurisdiction, but others in other jurisdictions will still have a go at it, and that's assuming that you could preven

      • by AmiMoJo ( 196126 )

        AI is already regulated by GDPR, although the UK government keeps talking about leaving that.

For example, you have a right to know how automated decisions about you were made, and to have them reviewed. For AI that could be a real problem, because nobody really knows how the model decides to do what it does. With a traditional algorithm you can say "this variable caused the result to be lower" quite easily; the sketch after this comment illustrates the contrast.

        I wonder if it will end up like those phone menus. You know the AI is crap so you just give it any old data an
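A minimal sketch of the contrast described above, assuming a hypothetical rule-based loan check; the function name, thresholds, and field names are invented for illustration and are not taken from the GDPR text or any real system:

```python
# Hypothetical rule-based credit decision: every factor that changed the
# outcome can be reported back to the applicant, which maps naturally onto
# a GDPR-style right to an explanation of an automated decision.

def rule_based_decision(income: float, existing_debt: float) -> tuple[bool, list[str]]:
    reasons = []
    if income < 20_000:
        reasons.append("income below 20,000 threshold")
    if existing_debt > 0.5 * income:
        reasons.append("debt exceeds 50% of income")
    approved = not reasons  # approved only if no rule was triggered
    return approved, reasons

approved, reasons = rule_based_decision(income=18_000, existing_debt=12_000)
print(approved)  # False
print(reasons)   # ['income below 20,000 threshold', 'debt exceeds 50% of income']

# A large neural model, by contrast, only returns an opaque score; attributing
# the result to specific inputs needs separate explainability tooling, and even
# that is an approximation rather than the actual decision rule.
```

The point being made in the comment is that the rule-based version can name exactly which input drove the outcome, whereas an AI model cannot, without extra tooling and caveats.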

        • One of the references talks about AI in recruitment:

          Providing guidance on how the Equality Act applies to the use of new technologies in automated decision-making. Working with employers to make sure that using artificial intelligence in recruitment does not embed biased decision-making in practice.

          Another is about regulating medical devices:

          Software (and AI in particular) plays an increasingly prominent role in health systems, having a wide set of applications across health and social care. Many of these applications will be regulated as medical devices. It is increasingly important then that medical device regulation be fit for purpose [...]

There's presumably going to be regulation around AI used in self-driving vehicles as well.

• Regulation of data collection and data sets will likely be another target. And imo, something that should've been done years ago. LLMs still require large amounts of training data in order to work properly, which will likely incentivize increasingly more invasive methods of data collection from users if left unchecked.
  It won't be just PCs, tablets, and smartphones that AI companies will scrape from, but also various consumer products such as kitchen appliances, home security devices, etc.

        T

• Isaac Asimov (and John W. Campbell) invented the Three Laws of Robotics for this purpose a long time ago.

• Worked fine. Try to add one more law, though... say, a "Zeroth Law", and "poof" goes the positronic brain.

• No, they didn't. As has been said in every reply to this comment, the hundreds of times it has been posted: his stories were about the inadequacies of the laws, and pointed out the flaws in attempting to use simplistic answers to solve the alignment problem. [wikipedia.org]
  • by UnknownSoldier ( 67820 ) on Wednesday March 29, 2023 @10:02AM (#63408832)

    * How is this going to be (independently?) tested? Force handover of source code and the TB of training data to the government?
    * How is this going to be enforced?
* Which programs does this affect? How is AI defined?

Link to the actual white paper. [www.gov.uk] (Editors had ONE job. /s)

  • It will be an oven-ready deal, I'm sure.

  • by MooseTick ( 895855 ) on Wednesday March 29, 2023 @10:45AM (#63408962) Homepage

Anyone can buy a computer and storage capable of creating AI models in the comfort of their own home. At this point, AI is about as easy to regulate as sex positions.

  • by PastTense ( 150947 ) on Wednesday March 29, 2023 @11:05AM (#63409016)

Regulation of AI should only occur in those situations where non-AI programs are regulated. And very few computer programs are regulated; one example is the software used in the flying of airliners.

  • Are we going to once again have regulators writing rules for something that they have no actual understanding of?

    Insert examples from congressional hearings here.

• The UK Government can ask AI companies all they want, but they can do nothing to regulate it at all... and if they try, it will just be moved outside their jurisdiction, or simply ignored.
