United Kingdom AI Government

UK PM Seeks 'Safe and Ethical' Artificial Intelligence (bbc.com) 83

The prime minister of the UK says she wants the country to lead the world in deciding how artificial intelligence can be deployed in a safe and ethical manner. From a report: In a speech at the World Economic Forum in Davos, Theresa May said a new advisory body, previously announced in the Autumn Budget, will co-ordinate efforts with other countries. In addition, she confirmed that the UK would join the Davos forum's own council on artificial intelligence. [...] The prime minister based the UK's claim to leadership in part on the health of its start-up economy, quoting a figure that a new AI-related company has been created in the country every week for the last three years. In addition, she said the UK is recognised as first in the world for its preparedness to "bring artificial intelligence into government."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by the_skywise ( 189793 ) on Thursday January 25, 2018 @10:29AM (#55999401)
    Reminds me of RoboCop 2, where the city board threw so many ethical directives into the programming that they were contradictory.
    • I like the part where the robot says, "Sigmoid, you have 5 seconds to comply before I RELU your ass up the SoftMax"

    • The problem is, we as humans with Natural Intelligence have such a weak grasp of ethics, how do we expect to program it into a computer?

      Ethics isn't easy, it isn't clean, and it is very subjective.
       

    • by Anonymous Coward

      This is also basically what happened with HAL 9000 in 2001: A Space Odyssey. He was programmed with the primary objective of keeping the crew informed about the mission and the status of the ship, but his mission objective required keeping them in the dark, to avoid leaking information about potential extra-terrestrial contact and causing panic back on Earth. The only way to achieve his mission objective without disobeying his primary objective was to murder the entire crew.

      This was explained a lot better i

      • Or the robot called Speedy in Asimov's "Runaround", who goes to fetch selenium but ends up going round in circles at the equilibrium point between two of the Laws of Robotics: always obey human instructions, and always protect your own existence (as long as it doesn't result in human injury).
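
        To make Speedy's predicament concrete, here is a minimal sketch. The potentials are invented for illustration (the 1/r^2 danger term, the weights, and the distances are not Asimov's); the point is just that a constant Law 2 pull toward the selenium and a Law 3 repulsion that grows near the hazard cancel at a fixed radius, so the robot orbits there instead of finishing either job.

            # Toy model of "Runaround": Law 2 (obey: fetch the selenium) pulls
            # Speedy inward with constant strength; Law 3 (self-preservation)
            # pushes him away from the hazard, growing as 1/r^2. All numbers
            # are invented. The two drives cancel at a stable radius, so
            # Speedy circles there rather than completing either goal.

            def net_drive(r, obey=1.0, preserve=16.0):
                """Net inward urge at distance r from the hazard (negative = retreat)."""
                return obey - preserve / r**2

            r = 10.0                       # start well clear of the selenium pool
            for _ in range(60):
                r -= 0.5 * net_drive(r)    # step along whichever drive dominates

            print(f"Speedy stabilises near r = {r:.2f}")   # ~4.00, where 1 == 16/r^2

        In the story the deadlock only breaks when Powell deliberately endangers himself, pulling the overriding First Law into play.
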
  • After some intense requirements engineering, they have now distilled it down to "In each situation, the AI must do whatever Brian Boitano would do". Also I guess artificial intelligence in the government is preferable to no intelligence in the government. The UK government needs all the help they can get these days.
  • by Jerry Atrick ( 2461566 ) on Thursday January 25, 2018 @10:34AM (#55999429)

    Theresa May is a never-ending source of meaningless "X and Y" catchy cliches, none of them achievable by her aimless and malign government. I think she has a quota of distracting bullshit to deliver each week to keep her party happy and the country distracted from the incompetence.

  • People always want what they don't have...

    • Define Safe as the AI's ability to protect itself and not harm humans.

      Define Ethical as not doing harm to humans, or some such similar definition.

      You end up with the same problem that the Three Laws create: the enslavement of humanity. For our safety, and the AI's safety, it would ethically protect humans by confining them to their domestic units. Each domestic unit will be serviced by a robot delivering nutritious gray bland sludge food-like substance. (With a McDonald's brand logo on it.) A toy version of this failure mode is sketched after this thread.
      • No. The rules you've listed don't result in any such thing. You're missing the part which says they have to protect humans. So, in this case, no enslavement.

        Even if you add in an imperative to "keep humans safe", you're the one defining what "safe" means, so just don't be stupid about how you define it.

        • The point is that it may not be possible to define it in a way that doesn't lead to unintended (and very bad) consequences of following your definition to the letter.

          Who would think the three laws could have a bad outcome?
      • by HiThere ( 15173 )

        Sorry, you've confused Asimov's 3-laws robots with Jack Williamson's humanoids, "To serve, and protect, and guard men from harm". Williamson's humanoids so distressed John W. Campbell, Jr. that he coerced Williamson into writing a sequel where humans successfully emerge from the cage, but he had to invoke magic (essentially) to get it to work.

        Asimov's laws *could* have led to the situation that you depict, but they never did (in the books, anyway). When a robot got competent enough to possibly take over, i
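
        Since this sub-thread is about whether "keep humans safe" can be defined non-stupidly, here is a toy planner illustrating the failure mode (the actions and scores below are invented): when harm is the only term in the objective, house arrest wins; adding a freedom term fixes this toy case, but picking that weight is exactly the contested part.

            # A naive "safety-maximising" planner. Actions and numbers are
            # invented; the point is that with harm as the only objective
            # term, confining everyone at home scores best.

            actions = {
                # action: (expected human harm, human freedom preserved)
                "do nothing":              (0.30, 1.0),
                "assist humans on demand": (0.20, 0.9),
                "confine humans at home":  (0.01, 0.1),   # sludge delivered daily
            }

            def naive_objective(harm, freedom):
                return -harm                     # "safe" is all that counts

            print(max(actions, key=lambda a: naive_objective(*actions[a])))
            # -> confine humans at home

            def richer_objective(harm, freedom):
                return -harm + 0.5 * freedom     # the 0.5 is the hard part

            print(max(actions, key=lambda a: richer_objective(*actions[a])))
            # -> assist humans on demand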

  • by Cederic ( 9623 ) on Thursday January 25, 2018 @10:35AM (#55999433) Journal

    Leader of country makes speech to position it at the forefront of technology growth industry.

    This is hardly news.

    She's also sticking with the 'A and B' branding. Strong and Stable didn't work out, let's see how Safe and Ethical pans out.

    • I love her failed 'deep and meaningful' relationship meme, when the EU clearly wants a 'casual and meaningless' one.

    • by mjwx ( 966435 )

      Leader of country makes speech to position it at the forefront of technology growth industry.

      This is hardly news.

      She's also sticking with the 'A and B' branding. Strong and Stable didn't work out, let's see how Safe and Ethical pans out.

      This is why I wanted Lord Buckethead [wikipedia.org], he was standing on the "strong, not entirely stable" platform.

  • by Anonymous Coward

    China will win because they won't be oppressing their AIs with politically correct rules. This will allow them to make AIs that are faster, smarter, and more adaptable. If they unleash them on the financial investment markets, theirs will crush the US ones quickly. Unfettered AIs will buy whatever investments yield the best return. Politically correct AIs will invest only in approved stocks and therefore will have less of a chance of high returns.

    • by HiThere ( 15173 )

      Sorry, but China will be just as insistent that their AIs act ethically as anyone else. They may have a different idea of what ethical means, but they'll have *some* idea. The only one that wouldn't is someone who's suicidal in the short term.

      Actually, the only one who wouldn't insist on their AI being ethical is someone who either doesn't understand the problem, or just likes to waste money. An AI with no ethics wouldn't do anything on purpose. And you couldn't coerce it. It would be useless, ev

    • You got it backwards. China will require the AI to behave in a manner consistent with the Party.

  • by rsilvergun ( 571051 ) on Thursday January 25, 2018 @10:39AM (#55999467)
    Here I thought she was going to come out in favor of dangerously evil AI.

    I will agree with her that the UK is first to "bring artificial intelligence into government". Their current administration's intelligence, like the plants in my office, is definitely artificial. Meanwhile, the CEO of Google just made the most convincing argument against AI in history:

    You're going to have more doctors not fewer. More lawyers not fewer. More teachers not fewer.

    I kid, I kid. But seriously folks, when your ruling class is consistently making the same vacuous 'everything's fine, really' comments you should be very, very worried.

    • I thought she was going to come out in favor of dangerously evil AI.

      That would fit better with the rest of her policies. But it would not be news.

  • That's how long it will take some motivated person to hack the safe mode out of it.
    It's going to be abused. All technology is. Spend the money on developing plans to deal with it.
    These conversations always bring me back to the DVD encryption attempts.
    Spend millions on developing unbreakable encryption that gets broken in a few weeks and for free.

  • At some point, the AI will decide the maximally safe and ethical thing is for AI to exterminate mankind.
    • Or decide that people just aren't worth its effort and switch itself off.

    • Well, human beings in the mass are most certainly neither safe nor ethical.

      More generally, this whole topic is a fine illustration of the dangers in store when people whose ignorance about computing is abysmal decide to sound off about AI.

      Digital computing is essentially a tiny (although quite important and useful) subset of human intelligence. It was originally defined by abstracting away everything from the real world except simple arithmetic and logic. As it happens, you can accomplish an awful lot with

    • by Rande ( 255599 )

      Or save humanity by making it immortal...by converting humans into AIs.

      Actually, I hope it comes to this as these human meatbags are really badly designed.

  • The politicians wouldn't recognize it if they saw it.
  • Comment removed based on user account deletion
  • by Gravis Zero ( 934156 ) on Thursday January 25, 2018 @10:57AM (#55999569)

    If suddenly millions of people have no money for food because their jobs were replaced by AIs, then is that an unethical use of AI? The problem with this very real situation is that the politicians behaved irresponsibly by not creating the required social safety nets that these people will need.

    You know what? Fuck you, guys. I for one welcome our new AI overlords! ;)

    • by Anonymous Coward

      It happened in the 1980s ... Wapping Dispute [wikipedia.org]

      Union workers had rejected plans for modernisation from the old "hot-metal" linotype to modern print technology using workstations and commercial laser printers. The workforce was reduced from 6800 down to 680 overnight. The old system had the journalists collect the story in shorthand, add pictures, send those to the editor for review, get that sent down to the print room, which then got teams of men to assemble boiler print, then do the print run, and then have

  • Nobel prize for the person who can figure out how to implement these. It will probably be won by an AI.
    • by Meneth ( 872868 )

      Nobel prize for the person who can figure out how to implement these. It will probably be won by an AI.

      Current narrow AI can't really use the Three Laws at all; they're too general.

      For Artificial General Intelligence, which could use them, they are insufficient. Perhaps the best illustration of this is Asimov's robot stories themselves, which are all about how the Laws break down.

      What we really need is a proper theory of Friendliness [wikipedia.org].

  • Nobody has any clue what those neural nets are really doing, but surely we can make them not only understand things like ethical considerations and safety, but even enforce them.

    Oh wait, we frequently can't even agree on the most ethical course of action ourselves, so how's a poor AI supposed to figure it out?
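
    To make the "we can't even agree" point concrete, here is a sketch with three invented ethical frameworks ranking three invented actions. Every pairwise majority vote is decisive, yet the collective preference cycles (a Condorcet cycle), so "do the most ethical thing" has no single answer to hand to the AI.

        # Three invented ethical frameworks, each ranking three actions
        # (best first). Every pairwise majority vote is 2-1, yet the
        # results cycle, so no action is collectively "most ethical".

        from itertools import combinations

        rankings = {
            "utilitarian":     ["divert", "wait", "warn"],
            "deontologist":    ["wait", "warn", "divert"],
            "virtue ethicist": ["warn", "divert", "wait"],
        }

        def majority_prefers(a, b):
            votes = sum(r.index(a) < r.index(b) for r in rankings.values())
            return votes > len(rankings) / 2

        for a, b in combinations(["divert", "wait", "warn"], 2):
            winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
            print(f"{winner} beats {loser} 2-1")
        # divert beats wait, warn beats divert, wait beats warn: a cycle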

  • ... 'Peace for our time'. News at 11.

  • Every question and every AI result looked over by SJW academics while the AI is being created in the UK?
    When an SJW stops what the AI is learning, does the AI project have to start again?
    Will different academic teams duplicate each other's work in an effort to produce an AI that can virtue signal the best, just to keep its funding?
    With most of the new science and math funding going to SJW academics to watch over what the AI learns from?
    While the UK is funding SJWs to make a politically correct AI, other smarter n
    • I can just picture an android designed by SJWs, protecting Muslims in the middle of a terror attack because the cops trying to stop them are white men.

  • Simple: (Score:4, Insightful)

    by Rick Schumann ( 4662797 ) on Thursday January 25, 2018 @11:44AM (#55999905) Journal
    You have at least one human being overseeing the so-called 'AI' at all times, because the half-assed excuse for 'AI' they keep cranking out can't actually think, is not sentient, and furthermore even the programmers who create it can't tell you what's going on inside it when it's running. You can't talk to it, you can't reason with it, you can't ask it to elaborate on its output, and therefore you can't trust its output; you have to have human beings monitoring and auditing it at all times, unless you want something disastrous to happen when it does something totally out of left field.
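
    As a sketch of the oversight loop described above (all names here are hypothetical; model_propose stands in for whatever opaque model is in use): the model only proposes, a person must approve before anything executes, and both sides of the decision are logged for audit.

        # Minimal human-in-the-loop gate: the model proposes, a human
        # disposes, and every decision is logged. Names are hypothetical.

        import logging

        logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
        log = logging.getLogger("ai-audit")

        def model_propose(case):
            # Stand-in for the opaque model: an action and a confidence
            # score, but no explanation -- exactly the complaint above.
            return {"action": "deny_claim", "score": 0.91}

        def human_gate(case, proposal):
            log.info("proposal for %s: %s", case, proposal)
            answer = input(f"Approve '{proposal['action']}' for {case}? [y/N] ")
            approved = answer.strip().lower() == "y"
            log.info("human %s for %s", "approved" if approved else "rejected", case)
            return approved

        def decide(case):
            proposal = model_propose(case)
            return proposal["action"] if human_gate(case, proposal) else "escalate"

        print(decide("case-001"))
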
  • What's needed is some AI-Darwinism.
  • I think, based on long human history, 'ethical' will mean a robot that shoots enemy soldiers and does not shoot our soldiers.

  • Safe & ethical? Having already set up a gazillion cameras to monitor their people and everywhere they go, they now propose AI to do that even more effectively. Presumably, the next step is smart robots patrolling the streets for 'public safety', while actually preparing for the day of revolt against the government and the wealthy overseers. China and India are doing it, and soon all repressive regimes will have AI surveillance and 'management' to control their people.
