EU Should Ban AI-Powered Citizen Scoring and Mass Surveillance, Say Experts (theverge.com)

A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass "scoring of individuals," a practice that potentially involves collecting varied data about citizens -- everything from criminal records to their behavior on social media -- and then using it to assess their moral or ethical integrity. From a report: The recommendations are part of the EU's ongoing efforts to establish itself as a leader in so-called "ethical AI." Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and "human-centric" manner. The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation. Notably, the suggestions that the EU should ban AI-enabled mass scoring and limit mass surveillance are some of the report's relatively few concrete recommendations.


  • Just AI? (Score:5, Insightful)

    by AmiMoJo ( 196126 ) on Thursday June 27, 2019 @09:10AM (#58834190) Homepage Journal

    All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned. The fact that it's done by AI is irrelevant, being done by a human is no better.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Yep, going with AI seems like an odd choice.
      I don't even like the idea of banning automated systems.

      What I don't like is that companies and governments can avoid responsibility by saying "the computer did it by itself, it wasn't us" whenever things go wrong.

      If you automate something, you should bear the same responsibility for the automation's actions as if you had done them directly.
      Yes, it is hard to know how an AI will behave in any given situation; that is why we don't use them for critical applications.

      • Re:Just AI? (Score:5, Insightful)

        by AmiMoJo ( 196126 ) on Thursday June 27, 2019 @09:34AM (#58834296) Homepage Journal

        What I don't like is that companies and governments can avoid responsibility by saying "the computer did it by itself, it wasn't us" whenever things go wrong.

        That's why it's so important to make companies own those decisions, and to have a right of review and to have the decision explained to you. GDPR does some of that but could go further.

      • >If you automate something you should have the same responsibility for the automations actions as if you did the actions directly.
        I absolutely agree, with some wiggle room for deciding whether "you" is the user or the manufacturer.

        I've got no problem with automated systems - an automated system does a known task in a known way to deliver a consistent output. You've got to make sure you handle corner cases correctly, either explicitly or by alerting a human overseer to exercise good judgment, but essenti

    • Re:Just AI? (Score:5, Insightful)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday June 27, 2019 @09:35AM (#58834300) Homepage Journal

      All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned.

      It should also be illegal to sell someone's personally identifiable data to a third party. It's one thing to collect information about people who willingly use your service and use it to advertise to them. It's something else entirely to then pass that data on to others, who can then pass it on to others ad infinitum.

      • by mjwx ( 966435 )

        All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned.

        It should also be illegal to sell someone's personally identifiable data to a third party. It's one thing to collect information about people who willingly use your service and use it to advertise to them. It's something else entirely to then pass that data on to others, who can then pass it on to others ad infinitum.

        This is what the GDPR is meant to do in Europe. By making companies responsible for how they treat and use PII (Personally Identifiable Information), we've made it much harder to sell on your personal information.

        This is why American companies are going apeshit about the GDPR, and some are even blocking European visitors from their sites. They don't want anything like the GDPR to come to the US, where they could be held liable for profiteering from your personal information.

    • Re:Just AI? (Score:5, Insightful)

      by evanh ( 627108 ) on Thursday June 27, 2019 @09:44AM (#58834352)

      Remember that AI, in the modern computing sense, is really just a catch-all phrase anyway. It gets used and abused willy-nilly with no rigid meaning ... so it can cover everything if desired. Just like how "mobile" gets used now, ie: anything transportable. Shipping containers will be called mobile lock-ups soon.

      • I don't know, it seems to me that AI is fairly consistently used to mean "asking a machine to exercise good judgement", as opposed to automation, which is asking a machine to apply a pre-established procedure in a consistent manner.

        Whether it's "identify what this mass of pixels is a photo of", "drive this car down the road without causing any accidents", "design a heavily optimized widget to these specifications", or "find the most likely criminals/suckers/threats/assets using this big pile of data". It al

    • by Anonymous Coward

      It seems obvious that such rating systems will be used primarily to punish those who don't fit in or refuse to toe the line, to extract vengeance on enemies, and to hamper one's opponents in the rat race. And I emphasize the word "rat".

    • I'd rather be judged by AI than a human. Humans are far more advanced than AI and are currently using all of that advancement to hate everything that doesn't fit their ideological profile of perfection requirements.

      • by AmiMoJo ( 196126 )

        The key is making decisions explainable. Because human bias is well understood, most companies have a decision-making process, e.g. affordability and credit checks for getting a mortgage, then calculating the offered interest rate. So it's easy to check the human's work and discover bias.

        AI sometimes offers that kind of detail when making decisions, but sometimes it's a black box. The box needs to be kept transparent.
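        A rough sketch of the kind of auditable, rule-based decision process described above (all field names, thresholds, and rates here are hypothetical, purely for illustration -- the point is that every outcome carries human-readable reasons):

        ```python
        # Hypothetical, auditable mortgage-rate decision. Every rule is explicit,
        # so a reviewer can trace exactly why an applicant got a given outcome.

        def decide_rate(annual_income, monthly_debt, adverse_events):
            """Return (decision, rate, reasons); reasons make the decision explainable."""
            reasons = []
            debt_ratio = monthly_debt * 12 / annual_income
            if debt_ratio > 0.4:
                reasons.append(f"debt ratio {debt_ratio:.2f} exceeds 0.40")
            if adverse_events > 2:
                reasons.append(f"{adverse_events} adverse credit events exceed limit of 2")
            if reasons:
                return ("declined", None, reasons)
            # Base rate plus a documented surcharge per adverse credit event.
            rate = 3.0 + 0.5 * adverse_events
            reasons.append(f"base 3.0% + 0.5% per adverse event ({adverse_events})")
            return ("approved", rate, reasons)
        ```

        A black-box model that emitted only "declined" with no reasons list is exactly what a right of review and explanation would push back against.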

        • Lady Justice is blindfolded and using a scale for a reason. The scale represents the logic of the universe itself as the acting mechanism of proof. If she takes her blindfold off, then may as well toss out the scale, and then we're right back to the $torches and $pitchforks of the day.

          I remember reading a story about some guy that supposedly came from the future and told of how politics worked by way of some type of biological-computer that was capable of using bio-logic, but without preference. I don't

      • by mysidia ( 191772 )

        and are currently using all of that advancement to hate everything that doesn't fit their ideological profile of perfection requirements.

        Well, sort of, yes, because the human brain is not designed to make logical decisions regarding actions to take and choices to effectuate. Its design tends to make it able to understand what is going on and select the options that first and foremost are expected to (A) Make the human feel comfortable emotionally, (B) Extend their survival, and (C) Extend/Maxim

      • Thing is, an AI can judge everyone, relatively quickly and cheaply, while humans can only judge a small minority without putting a truly staggering number of people on the domestic surveillance payroll. And once you have a database of the machine's judgement on everyone, humans can readily abuse it in many, many different ways to enforce their ideology.

      • I'd rather be judged by AI than a human.

        Me too, but there's no such thing, only algorithms written by humans, who themselves have biases. Or by committee, same thing.

    • Agree wholeheartedly with you and a number of commenters in this thread as well. It amounts to gossiping on an institutional level. Evil, evil, evil.
    • All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned.

      You want to outlaw spam filters?!

      • by AmiMoJo ( 196126 )

        Spam filters don't rate individuals in most cases; they rate individual messages and servers.
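        In rough sketch, a filter of that kind scores properties of the message and the sending server, not a profile of the sender as a person (the word list, blocklist, and weights below are all made up for illustration):

        ```python
        # Illustrative message-level spam score: inputs are the message body and
        # the originating server, never an identity-level rating of the sender.
        SPAM_WORDS = {"winner", "free", "urgent"}          # hypothetical word list
        BLOCKLISTED_SERVERS = {"mail.spamhost.example"}    # hypothetical blocklist

        def spam_score(body, sending_server):
            score = 0.0
            words = body.lower().split()
            # One point per spammy word, after stripping trailing punctuation.
            score += 1.0 * sum(w.strip(".,!?") in SPAM_WORDS for w in words)
            # A heavy penalty for mail from a known-bad server.
            if sending_server in BLOCKLISTED_SERVERS:
                score += 5.0
            return score
        ```

        Discarding the message and resetting the score for the next one is what keeps this a message rating rather than a citizen rating.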

    • by AHuxley ( 892839 )
      Re "limited financial history scoring"
      Education?
      Health care?
      Who can do the draft/mil service? Who will never be asked for gov/mil service.
      Criminal background? Is the person a citizen, can they be trusted with mil/gov/telco work?
      Ability for a gov/mil to trust a person in a "profession"?
      Renting a home?
      Questions of citizenship? Gov spending and who gets what gov support. Fraud and fake ID use?
      Using a nations banking system to not pay tax?
      A tourist staying in hotel for a few weeks not leaving the n
    • by mjwx ( 966435 )

      All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned. The fact that it's done by AI is irrelevant, being done by a human is no better.

      Financial rating systems especially should be banned. Give lenders the data and let them make up their own minds rather than having a central source provide a score. Scores are subject to gaming, both by the end user and the scorer.

  • by Anonymous Coward

    Comply or die!

  • by Anonymous Coward

    I for one welcome AI-enabled mass scoring.

  • What is this "AI training" that I keep hearing about that is so damn important? There are already multiple courses on statistics and a few on programming/algorithms. Given the education already available, students should be able to read a 20-minute how-to on the basic construction of "AI". Why put that in a new course? Should it be more inclusive? A pink bunny that explains it using finger-puppets and song?
    If you can't understand statistics or algorithms, learning about AI makes no sense. If you do under

  • by lucasnate1 ( 4682951 ) on Thursday June 27, 2019 @09:35AM (#58834302) Homepage

    Germany already uses SHUFA to assign financial reliability scores to citizens. No way they will give up on it.

    • SHUFA is a club.
      Financial organizations join it.

      As a private customer of one of those financial organizations, you have to agree that they collect/share data.

      In other words, if you never agreed, it is illegal for them to have/collect/share data about you.

  • by Anonymous Coward

    That would be a start.

  • by Anonymous Coward

    The people who are advocating against the right to remember and think about your memories, should drop the term "AI" from their platform. You can do so much without ever even jokingly approaching something that anyone would consider "AI." If you want to ban computers, then call them computers.

    I was doing some of this kind of stuff decades ago, and I can't stress enough how simple and trivial it is. I am not a particularly smart guy, my code wasn't particularly clever, my models not particularly sophisticated. I ha

  • the suggestions that the EU should ban AI-enabled mass scoring ...

    We already have mass scoring systems, as in credit scores.
    At some point, saying "AI shouldn't be used to do these things" amounts to saying that these things should not be done with computers,
    and should instead be done poorly, in a way that tends to make more errors.

    For example: look at current credit scores. They are kind of arbitrary and in some ways suck, because they're often not even an accurate reflection; for example, they don't

  • Won't stop it (Score:3, Interesting)

    by The Snazster ( 5236943 ) on Thursday June 27, 2019 @11:45AM (#58834996)
    They will just have some other country do it, then "hack" their results . . . vice versa.
  • There isn't even free speech in the EU. [coe.int] What's to stop the adoption of such a powerful tool for maintaining control over a population? An appeal to decency, to the same corrupt bureaucrats that passed article 13?

    I can already imagine the paid shills and useful idiots: "Without this, we won't know who the nazis are, and we'll never be safe. That's a threat to our democracy!"

  • In ten years everything will be banned in EU.
    • by AHuxley ( 892839 )
      The more control the EU and EU nations want, the more nations that will want to exit.
      • The more control the EU and EU nations want, the more nations that will want to exit.

        And the more the establishment will want to keep up mass migration in order to buy votes, and weaken the possibility of political resistance.

        It'll be interesting to see if the indigenous Europeans can just be shamed into submission to give up everything without even a fight.

  • We all know it's not really "artificial intelligence" anyway, just (somewhat) sophisticated data analytics. If they ban AI, they might have to start calling it what it really is, just to make it legal.
