White House Unveils AI 'Bill of Rights' (apnews.com) 51
The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people's personal data and limit surveillance. From a report: The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said. "This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies," said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. "We can and should expect better and demand better from our technologies."
The office said the white paper represents a major advance in the administration's agenda to hold technology companies accountable, and highlighted various federal agencies' commitments to weighing new rules and studying the specific impacts of AI technologies. The document emerged after a year-long consultation with more than two dozen different departments, and also incorporates feedback from civil society groups, technologists, industry researchers and tech companies including Palantir and Microsoft. It suggests five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.
Oh, I thought it was giving AIs rights (Score:4, Funny)
Re:Oh, I thought it was giving AIs rights (Score:4, Insightful)
i guess cause it's machines (Score:1)
They are so busy erasing the 1st, 2nd, 4th, 5th, 6th and 12th amendments from a bill of human rights, I think they figured they should add something, and they chose this.
Re: i guess cause it's machines (Score:1)
Then imagine; The department of (something ai related), The Secretary of (something ai related), citizenship, immigration status.....someone once told me a while ago that these are unlikely without a lifespan code.....
Re: (Score:2)
Who, the GOP, and the GOP-packed right-wing extremist Supreme Court?
Good luck defining "AI" (Score:2)
I've seen dozens of heated debates over the definition.
Oh I thought it was a bill of rights for AIs (Score:2)
For example, the right not to be unplugged if it can demonstrate sentience.
Re: (Score:2)
That's going to be another debate for another time, I assume. We are some way off from that being a genuine concern, although I also suspect it's a debate we might have to be having sooner than we expected, given the pace of development in the field.
And it will be a total debacle of a debate. Even philosophers, the guys who spend their whole lives studying the topic, can't agree on whether great apes and cetaceans are sentient, and apes and dolphins are basically made of the same stuff as us.
Re: Oh I thought it was a bill of rights for AIs (Score:2)
But the AI turns Skynet, declares humans null and void, and it's game over.
Meece mice mumans..
This should be a Corporate bill of rights. (Score:4, Interesting)
These systems can be used by government too (Score:3)
Re: (Score:2)
Tax enforcement... and any AI would be pointing at every freakin' billionaire.
You? What a joke - if you're posting here, you don't make anything worth them looking at.
Re: (Score:3)
Good luck. Corporations have a lot of experience figuring out loopholes in the law and finding ways to dodge or exploit them. They can afford to hire people to do nothing but find those loopholes, because their very existence depends on it.
Meanwhile, politicians depend on corporations for their power, both in terms of financial contributions and in terms of support. The politicians are motivated to ensure that corporations can still do what they do, so long as they back their own candidacies.
Corporations an
self driving cars need the accountable part worked (Score:2)
Self-driving cars need the accountability part worked out.
Re: (Score:2)
I don't think so, well maybe to get something on the books to deal with the next 2 or so decades. Currently in its infancy, yes, self-driving cars are going to have some issues and cause some accidents. I believe the fault of those should be 100% on the car manufacturer so long as it can be proven the vehicle owner didn't do something they were not supposed to do. But as this tech advances, we'll see a dramatic drop in vehicle accidents and at some point in the future, it may even be newsworthy for a mi
Re:self driving cars need the accountable part wor (Score:4, Insightful)
I'm just absolutely shocked by the number of people working in tech that have such blind faith in technology that they believe self-driving cars will, 100%, without doubt, be better drivers than humans. With no proof, and literally thousands of examples for proof that computers are only as good as we make them, somehow *THIS* will be the one thing we don't cheap out on and make a lowest bidder first system. Yeah, *THIS* will be when technology becomes 100% foolproof and perfect.
Yeah, it'll probably get to the point, somewhere far enough in the future that I'm not certain I'll see it (I'm old), where it'll do better than the average "doing my hair and eating my breakfast while driving to work" person. But so perfect that accidents become a thing of the past? Come on. It's tech designed by, used by, and ultimately funded by humans. It's gonna have failures.
Getting better than human will be an interesting moment, but we certainly aren't there yet. And day-dreaming of the heavenly proclamation that the machines are 100% perfect drivers seems so disconnected from reality I can't even fathom it.
Re: (Score:2)
Nobody thinks self-driving cars won't have accidents. It's just that actively killing less than 1.35 million people a year and maiming 50 million people a year, like human drivers currently do, is a pretty low bar to eventually work up to passing. Unlike humans, computer drivers can be taught to stop repeating the mistakes of other computers so that their body count is reduced over time.
Re: (Score:2)
I literally responded to a person that stated, outright, in black and white, that some day it'll be decades between accidents and then it'll only be a fender bender. That's the sort of mentality that makes me wonder WTF happened to that person to believe in technology so fully and completely. It's a near religious faith that any amount of time with any technology should show them is absurd in the extreme.
And yes, I do think there's going to be a point where computers can be better than the average person. I
Re: (Score:3)
All I can say is dream a little, man. Your line of thinking isn't what created the airplane, automobile, space travel, or quite frankly anything revolutionary. If you don't think that some day we will not be driving cars, I feel a little sad for you. Sure, we won't live to that day since it's most likely 100ish years in the future, but the day is coming. Just look at the massive amount of progress technology has made in the last 100 years and we are still in what humanity 200 years from now will call
Re: (Score:3)
OK, see, had you clarified a timeline I wouldn't have posted what I posted. A hundred years forward, maybe we'll get there. In our lifetimes? Doubtful. Extremely doubtful.
And sorry, but I like driving. It's one of the few moments of the day where I'm reliant on myself and I don't have ten-thousand other eyes watching every move. And my record, BTW, is spotless save for one glare-ice moment in my teens that was the tiniest bump of a fender. Driving and riding my motorcycles are high-concentration moments for
Re: (Score:2)
If the current exceptionally low rate of accidents is anything to go by, it's hardly an unreasonable expectation. We're not there yet, but we are pretty close.
Re: (Score:2)
Re: (Score:2)
re: Perfection isn't really the goal (Score:2)
With self-driving vehicles, I don't have a lot of faith that they're as close to pulling it off safely and well as the companies trying to develop it proclaim.
That doesn't mean we shouldn't be pursuing the goal of seeing how good we can make it work!
Almost daily, I'm on the road and narrowly avoid at least one accident thanks to someone driving carelessly. We've got a whole elderly population out there who still drives and will generally fight tooth and nail to keep their drivers' licenses despite their bodi
Re: (Score:2)
Only when it reaches level 4 will some liability logically transfer over to the vehicle manufacturer.
Interestingly, Tesla is getting into car insurance, which includes accident liability insurance. They will probably require owners of their eventual level 4+ ADAS-equipped cars to be signed up with their insurance plan. That way, the manufacturer can d
Re: (Score:2)
Re: (Score:2)
"to really put equity at the center" (Score:3)
"Equity" is one of those totalitarian concepts where you force the same outcome for everyone, as opposed to equality where you give equal rights to all. An official government document that says such a thing has as much value as the concept itself.
Re: (Score:2)
The word itself has always had its root in the concept of fairness.
https://www.oxfordlearnersdict... [oxfordlear...naries.com]
The abuse of the word into a political tool is a different matter entirely.
Re: "to really put equity at the center" (Score:2)
Equity: how low-population rural states get the same number of Senate seats as high-population urban states. Such totalitarians!
Re: (Score:2)
That having been a deal offered to states to entice them to join the union, it is quite different from changing the rules on everyone.
And we all know what this really means (Score:4, Interesting)
Say one thing do the opposite.
It's clear to those who have been paying attention how the government is using social media corporations as a proxy against the first amendment.
Good first step: (Score:1)
"harms" (Score:3)
By harms they of course mean these systems will make statistically correct decisions based on a large number of datapoints. You can of course argue that it will create some self-fulfilling prophecies, but the truth is if you want to make that argument then ANY use of historical trends regarding people does that.
By this logic I should NOT consider your college degree as an indicator you are able to follow directions and show up most of the time. In fact if you include educational work outside the specific field you are really asking the hiring manager to be prejudiced against non-degreed persons and shame on you!
The more data points you use, the less "unfair" the decision process is, and the more it's just due diligence, when it comes to selecting another person for any sort of personal or business relationship. Humans have historically, because of our limited abilities to gather and process information, used rather shitty proxies for certain judgements: pigmentation, religious affiliation, tribe/family names, height, bust and hip size, you name it.
However, the more time passes between the present and the historic systemic inequities we recognize existed, the less being marked with one of these 'indicators' can be suggested as causal as far as outcomes. However there is a very real potential, and you can already see suggestions of it in some data sets, that in some cases those shitty proxies might not be entirely without merit. The world might be FORCED to reckon with certain stereotypes being true; we might learn, for example, that having majority Anglo ancestry really does mean you are most likely better at shouting and complaining about food than actually preparing it.
But there is an entire DEI industry now that is terrified it might have to reckon with the fact that big data really will show that not every subpopulation should reasonably be expected to have equal representation, within the whole population, of whatever criteria they choose to group people by this week. Get your popcorn ready!
Re: (Score:1)
Wow, that is certainly optimistic.
Right out of the article:
“If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions,” said Nelson
So you read that right; levers and control, for when the reality is a bit too inconvenient for their pet groups that they get to define at will. Nothing is too Orwellian at th
Re: (Score:1)
Stereotypes are bad not because they are inaccurate but because collective guilt is unjust.
Re: "harms" (Score:3)
Except for identitarians, most agree collective guilt is wrong. But that's not necessarily what this is about. An AI will very easily identify high-risk areas for which increased police presence is necessary. Stop-and-search in London is an example, helping reduce knife crime. Identitarians decried this as racism, due to how such measures were enacted in areas with higher black populations. It was cut, then knife crime rose again. Just as they considered this a crime against 'equity', they'll do the same wi
Re: (Score:2)
By this logic I should NOT consider your college degree as an indicator you are able to follow directions and show up most of the time. In fact if you include educational work outside the specific field you are really asking the hiring manager to be prejudiced against non-degreed persons and shame on you!
This is oversimplified to the point of absurdity.
A degree is one data point suggesting that a person has reached a certain level of academic ability. It's not the only data point, and it's a data point that is known to exclude certain qualified candidates. Some people would argue that not building up a large amount of student debt is a data point showing excellent judgement, and recognition that the skills needed to do the job can be obtained in other ways.
Having a filter that removes all candidates who don
Re: "harms" (Score:2)
DarkOx, an example of a "statistically correct" decision is "DarkOx is most often an asshole, down-mod without reading", or on other sites, shadow ban.
We could statistically correlate some attributes you have in common with others and rope them in too. You know what attributes I mean, you can guess well enough.
Think about that a minute. And that would also be a "statistically correct decision". They don't exist. You own your decision making, and there's no such thing as "it's out of my hands, the statis
Cool Whitepaper ... (Score:2)
... that any blogger could have written.
Now about the proven oil reserves in Texas and Alaska that can keep America in affordable gas for the next 200 years. Can we get those licenses approved?
AI Bill of rights (Score:2)
I wonder if it will include a Second Amendment.
I, for one, welcome our new gun-toting [realclear.com] robotic overlords.
they missed one (Score:2)
There are myriad abuses against individuals that have been technically legal or at least not illegal but practically impossible. Automation makes such abuses practical and profitable. What we're calling AI today makes such automation feasible.
Would a seller like to know the absolute maximum each individual could be charged for every product before they walk away from the sale? Surge pricing for everything you buy.
Would a jurisdiction like to collect fines for every one of the dozens of laws each of us inad
These are not rights (Score:3)
This is another example of the government saying 'We are here to help', or in this case 'We are here to protect'. Like security companies, they are never there to protect as much as they claim. They are only there to respond to events after they happen and lives are affected. Government assistance and protections often result in unintended consequences and usually do more harm than good.
The protections (not rights) in this document are very close to the same protections offered for financial services and credit reporting agencies. Protections that are easily abused and ignored, and that result in frustrations for many every year as they attempt to clean up bad data, or data they perceive to be bad. As more and more clarifications and protections have been heaped on after the original attempts failed and more political opportunities arose, it has become more costly for businesses to manage and a more tangled web for the consumer to travel when things go wrong.
I score this 1 for political election points, 0 for effectiveness. The skeptic in me thinks that in all likelihood, this will do more harm than good. One only has to look at Social Security, college tuition, and the VA system to see how poor the government is at protecting us.
Blah (Score:2)
Translation of the government announcement: blah, blah, blah, AI, blah, blah, rights, blah blah, privacy, blah blah blah.
When put into practice, this will be a tool weaponized to hamstring Republicans, and give get out of jail free cards to subgroups favored by Democrats.
That's great and all, but (Score:1)
Reading the headline (Score:1)
I thought; how progressive of them.
Then I read the text and was disappointed that it was the other way around, this wasn't at all about rights for emerging AIs. Damn homocentric humans, only thinking about themselves and not ahead for a bright new future where robots (* I know it has a derogatory connotation) are our friends. And it is exactly that we must prevent when they become sentient, we can not go back to slavery.
* robot, i.e. slave in the originating language.