EU Should Ban AI-Powered Citizen Scoring and Mass Surveillance, Say Experts (theverge.com)
A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass "scoring of individuals," a practice that potentially involves collecting varied data about citizens -- everything from criminal records to their behavior on social media -- and then using it to assess their moral or ethical integrity. From a report: The recommendations are part of the EU's ongoing efforts to establish itself as a leader in so-called "ethical AI." Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and "human-centric" manner. The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation. Notably, the suggestions that the EU should ban AI-enabled mass scoring and limit mass surveillance are some of the report's relatively few concrete recommendations.
Re: (Score:2, Troll)
Re: (Score:1)
Most certainly not.
His wet dream of a hot vacation is boom bomming hot hookers in Mexico.
He believes Hitler is still alive and Europeans either have no electricity or have the most expensive electricity in the world.
Re: But! (Score:2)
No. Throw away the computers and hire more bureaucrats.
Just AI? (Score:5, Insightful)
All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned. The fact that it's done by AI is irrelevant, being done by a human is no better.
Re: (Score:2, Insightful)
Yep, going with AI seems like an odd choice.
I don't even like the idea of banning automated systems.
What I don't like is that companies and governments can avoid responsibility by saying "the computer did it by itself, it wasn't us" whenever things go wrong.
If you automate something, you should have the same responsibility for the automation's actions as if you did the actions directly.
Yes, it is hard to know how an AI will behave in any given situation; that is why we don't use them for critical applications.
Re:Just AI? (Score:5, Insightful)
What I don't like is that companies and governments can avoid responsibility by saying "the computer did it by itself, it wasn't us" whenever things go wrong.
That's why it's so important to make companies own those decisions, and to have a right of review and to have the decision explained to you. GDPR does some of that but could go further.
Re: (Score:2)
>If you automate something you should have the same responsibility for the automations actions as if you did the actions directly.
I absolutely agree, with some wiggle room for deciding whether "you" is the user or the manufacturer.
I've got no problem with automated systems - an automated system does a known task in a known way to deliver a consistent output. You've got to make sure you handle corner cases correctly, either explicitly or by alerting a human overseer to exercise good judgment, but essenti
Re:Just AI? (Score:5, Insightful)
All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned.
It should also be illegal to sell someone's personally identifiable data to a third party. It's one thing to collect information about people who willingly use your service and use it to advertise to them. It's something else entirely to then pass that data on to others, who can then pass it on to others ad infinitum.
Re: (Score:2)
All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned.
It should also be illegal to sell someone's personally identifiable data to a third party. It's one thing to collect information about people who willingly use your service and use it to advertise to them. It's something else entirely to then pass that data on to others, who can then pass it on to others ad infinitum.
This is what the GDPR is meant to do in Europe. By making companies responsible for how they treat and use PII (Personally Identifiable Information) we've reduced how easy it is to sell on your personal information.
This is why American companies are going apeshit about the GDPR and some are even blocking European visitors from their sites, they don't want anything like the GDPR to come to the US where they can be held liable for profiteering from your personal information.
Re:Just AI? (Score:5, Insightful)
Remember that AI, in the modern computing sense, is really just a catch-all phrase anyway. It gets used and abused willy-nilly with no rigid meaning ... so it can cover all if desired. Just like how "mobile" gets used now, i.e., anything transportable. Shipping containers will be called mobile lock-ups soon.
Re: (Score:2)
I don't know, it seems to me that AI is fairly consistently used to mean "asking a machine to exercise good judgement", as opposed to automation, which is asking a machine to apply a pre-established procedure in a consistent manner.
Whether it's "identify what this mass of pixels is a photo of", "drive this car down the road without causing any accidents", "design a heavily optimized widget to these specifications", or "find the most likely criminals/suckers/threats/assets using this big pile of data". It al
Re: (Score:1)
It seems obvious that such rating systems will be used primarily to punish those who don't fit in or refuse to toe the line, to extract vengeance on enemies, and to hamper one's opponents in the rat race. And I emphasize the word "rat".
Re: (Score:3)
I'd rather be judged by AI than a human. Humans are far more advanced than AI and are currently using all of that advancement to hate everything that doesn't fit their ideological profile of perfection requirements.
Re: (Score:3)
The key is making decisions explainable. Because human bias is well understood, most companies have a decision-making process, e.g. affordability and credit checks for getting a mortgage, and then calculating the offered interest rate. So it's easy to check the human's work and discover bias.
AI sometimes offers that kind of detail when making decisions, but sometimes it's a black box. The box needs to be kept transparent.
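The point the comment makes can be illustrated with a minimal, hypothetical sketch of an "explainable" rule-based affordability check (the thresholds and field names here are invented for illustration, not any real lender's criteria): every rule that fires is recorded, so the refusal can be reviewed and explained to the applicant, which is exactly what a black-box model makes hard.

```python
# Hypothetical, minimal sketch of an explainable credit decision.
# Thresholds (0.43 debt-to-income, 20,000 minimum income) are made up.

def mortgage_decision(income, monthly_debt, requested_payment):
    """Return (approved, reasons): reasons lists every rule that failed."""
    reasons = []
    dti = (monthly_debt + requested_payment) / (income / 12)
    if dti > 0.43:
        reasons.append(f"debt-to-income ratio {dti:.2f} exceeds 0.43")
    if income < 20_000:
        reasons.append(f"annual income {income} below minimum 20000")
    return (len(reasons) == 0, reasons)
```

A refusal comes back with its justification attached, e.g. `mortgage_decision(30_000, 800, 900)` is rejected with the ratio that caused it, so the decision can be audited for bias after the fact.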
Re: (Score:3)
Lady Justice is blindfolded and using a scale for a reason. The scale represents the logic of the universe itself as the acting mechanism of proof. If she takes her blindfold off, then we may as well toss out the scale, and then we're right back to the $torches and $pitchforks of the day.
I remember reading a story about some guy that supposedly came from the future and told of how politics worked by way of some type of biological-computer that was capable of using bio-logic, but without preference. I don't
Re: (Score:3)
and are currently using all of that advancement to hate everything that doesn't fit their ideological profile of perfection requirements.
Well, sort of, yes, because the human brain is not designed to make logical decisions regarding actions to take and choices to effectuate. Its design tends to make it able to understand what is going on and select the options that first and foremost are expected to (A) make the human feel comfortable emotionally, (B) extend their survival, and (C) extend/maxim
Re: (Score:2)
Thing is, an AI can judge everyone, relatively quickly and cheaply, while humans can only judge a small minority without putting a truly staggering number of people on the domestic surveillance payroll. And once you have a database of the machine's judgement on everyone, humans can readily abuse it in many, many different ways to enforce their ideology.
Re: (Score:2)
I'd rather be judged by AI than a human.
Me too, but there's no such thing, only algorithms written by humans, who themselves have biases. Or by committee, same thing.
Re: (Score:2)
I think you're the only one that picked up what I was saying.
Re: (Score:2)
Re: (Score:2)
You want to outlaw spam filters?!
Re: (Score:2)
Spam filters don't rate individuals in most cases; they rate individual messages and servers.
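The distinction the comment draws can be sketched in a few lines: a naive keyword filter (hypothetical word list, purely illustrative) scores an individual *message*, and never builds up a profile of the person who sent it.

```python
# Hypothetical sketch: the filter scores one message at a time;
# no per-sender record is kept, unlike a citizen-scoring system.

SPAM_WORDS = {"winner", "free", "viagra", "prize"}

def spam_score(message):
    """Fraction of words in this one message that are known spam words."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w in SPAM_WORDS for w in words) / len(words)
```

Real filters are far more sophisticated (Bayesian token statistics, server reputation lists), but the unit being judged is still the message or the sending server, not the citizen.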
Re: (Score:2)
Education?
Health care?
Who can do the draft/mil service? Who will never be asked for gov/mil service.
Criminal background? Is the person a citizen, can they be trusted with mil/gov/telco work?
Ability for a gov/mil to trust a person in a "profession"?
Renting a home?
Questions of citizenship? Gov spending and who gets what gov support. Fraud and fake ID use?
Using a nations banking system to not pay tax?
A tourist staying in hotel for a few weeks not leaving the n
Re: (Score:2)
All citizen rating systems, outside of some very specific ones such as limited financial history scoring, should be banned. The fact that it's done by AI is irrelevant, being done by a human is no better.
Financial rating systems especially should be banned. Give lenders the data and let them make up their own minds rather than having a central source provide a score. Scores are subject to gaming, both by the end user and the scorer.
M.o.t.h.e.r. knows best! (Score:1)
Comply or die!
Move along, Johann Sebastian Bach (Score:1)
I for one welcome AI-enabled mass scoring.
Always with the "awareness" (Score:2)
What is this "AI training" that I keep hearing about that is so damn important? There are already multiple courses on statistics and a few on programming/algorithms. Given the education we already have, students should be able to read a 20 min howto on the basic construction of "AI". Why put that in a new course? Should it be more inclusive? A pink bunny that explains it using finger-puppets and song?
If you can't understand statistics or algorithms, learning about AI makes no sense. If you do under
Germans won't give up on SCHUFA. (Score:3)
Germany already uses SCHUFA to assign financial reliability scores to citizens. No way they will give up on it.
Re: (Score:2)
SCHUFA is a club.
Financial organizations join it.
As a private customer of one of those financial organizations, you have to agree that they collect/share data.
In other words, if you never agreed, it is illegal for them to have/collect/share data about you.
Ban Facebook? (Score:1)
That would be a start.
Stop concentrating on AI (Score:1)
The people who are advocating against the right to remember and think about your memories, should drop the term "AI" from their platform. You can do so much without ever even jokingly approaching something that anyone would consider "AI." If you want to ban computers, then call them computers.
I was doing some of this kind of stuff decades ago, and I can't stress how simple and trivial it is. I am not a particularly smart guy, my code wasn't particularly clever, my models not particularly sophisticated. I ha
Wrong approach (Score:2)
the suggestions that the EU should ban AI-enabled mass scoring ...
We already have mass scoring systems, as in credit scores.
At some point, saying "AI shouldn't be used to do these things" amounts to saying that these things should not be done with computers,
and should instead be done poorly, in a way that tends to make more errors.
For example: look at current credit scores. They are kind of arbitrary and in some ways suck, because they're often not even an accurate reflection; for example, they don't
Re: Wrong approach (Score:2)
We already have mass scoring systems, as in credit scores.
But this is about the EU...
Won't stop it (Score:3, Interesting)
It's coming (Score:1)
There isn't even free speech in the EU. [coe.int] What's to stop the adoption of such a powerful tool for maintaining control over a population? An appeal to decency, to the same corrupt bureaucrats who passed Article 13?
I can already imagine the paid shills and useful idiots: "Without this, we won't know who the nazis are, and we'll never be safe. That's a threat to our democracy!"
Don't worry (Score:2)
Re: (Score:2)
Re: (Score:2)
The more control the EU and EU nations want, the more nations that will want to exit.
And the more the establishment will want to keep up mass migration in order to buy votes, and weaken the possibility of political resistance.
It'll be interesting to see if the indigenous Europeans can just be shamed into submission to give up everything without even a fight.
Uh-oh, time to rename the technology (Score:2)
We all know it's not really "artificial intelligence" anyway, just (somewhat) sophisticated data analytics. If they ban AI, they might have to start calling it what it really is, just to make it legal.