US Policing AI at Companies To Make Sure It Doesn't Violate Civil Rights (reuters.com)
U.S. officials on Tuesday warned financial firms and others that use of artificial intelligence (AI) can heighten the risk of bias and civil rights violations, and signaled they are policing marketplaces for such discrimination. From a report: Increased reliance on automated systems in sectors including lending, employment and housing threatens to exacerbate discrimination based on race, disabilities and other factors, the heads of the Consumer Financial Protection Bureau, Justice Department's civil rights unit, Federal Trade Commission and others said. The growing popularity of AI tools, including Microsoft-backed OpenAI's ChatGPT, has spurred U.S. and European regulators to heighten scrutiny of their use and prompted calls for new laws to rein in the technology.
"Claims of innovation must not be cover for lawbreaking," Lina Khan, chair of the Federal Trade Commission, told reporters. The Consumer Financial Protection Bureau is trying to reach tech sector whistleblowers to determine where new technologies run afoul of civil rights laws, said Consumer Financial Protection Bureau Director Rohit Chopra.
LOL. Feds do such a great job with actual police. (Score:2, Insightful)
Re: (Score:3)
Re: (Score:1, Troll)
1. You're engaging in "whataboutism." The topic is civil rights, which for some reason that probably rhymes with "case-ism", you feel threatened by and want to change the subject to security.
2. It's not really the job of police departments to prevent crime: Mostly their job is to enforce the law after crime is committed. Prevention is your job (and mine) as citizens, community members, taxpayers, voters, and human
Re:LOL. Feds do such a great job with actual polic (Score:4, Insightful)
Actively policing and prosecuting crimes that have happened increases risk vs. reward for committing crimes. Enforcement becomes a deterrent. When you have cities defunding police and prosecutors refusing to prosecute criminals, that risk vs. reward becomes much more favorable. Police departments being allowed to do their jobs and District Attorneys actually enforcing the laws they pledged to follow very much do prevent crime.
Re: (Score:3)
The deterrent aspect of policing is also overstated and reflects corrupt authoritarian
No, we don't know what causes crime (Score:2)
The clear fall in crime rates across the Western world in the first decade of this century remains unexplained, and certainly doesn't have an explanation from changes in prosperity, education or hope, despite the faith of the Left that it does.
Re: (Score:2)
Re: LOL. Feds do such a great job with actual poli (Score:4, Insightful)
The corporations you're whining about pay every cent in tax they are legally obligated to pay.
Re: (Score:1)
The more of the economy rich people control, the more power they have to deter tax enforcement against themselves. You're basically making a nihilistic statement equating power with law.
Most IRS tax enforcement is against upper-middle-class professionals and small businesses, not the actual rich, and fo
Re: LOL. Feds do such a great job with actual poli (Score:4, Insightful)
If the politicians really wanted to maximize their tax revenue, they would have made a simpler tax code with less wiggle room for the wealthy and big corporations to take advantage of. Since this hasn't happened, I can only assume that things are running precisely the way the politicians want things to run.
Re: (Score:2)
Re:LOL. Feds do such a great job with actual polic (Score:4, Insightful)
I'm guessing you're bitching about companies/corporations using every legal tax law (or loophole as you would refer to it) to keep all the money they legally can from the tax man, right?
I mean, if the company is breaking the law and not paying taxes, that is one thing...and I'm sure there are a few out there that do this, and when they're caught, I hope they get the book thrown at them.
But the vast majority of corporations, especially the LARGE ones, pay tax attorneys to study and use the existing laws to pay the very least amount they legally have to.
I don't know how the govt. can "force" them to pay more, unless they reform the tax code, which they don't seem to want to do. And, well, you can't blame any entity (corp. or individual) for trying its best to pay ONLY the very least amount of tax it is legally obligated to pay.
Do you voluntarily pay more tax than you legally owe?
Re: (Score:2)
Re: (Score:2)
Regardless of whether they actually have a legal excuse, they just fight enforcement so hard and for so long that they make it not worthwhile for institutions to enforce the laws on them, causing enforcers to default to aggressive actions against smaller, less powerful taxpayers.
Oh, so you *DO* understand the concept of risk and reward. It appeared in your earlier posts that you were unaware. I guess big corporations are the only ones smart enough to play the game to their favor. Common street criminals could *NEVER* figure that out.
Re: (Score:2)
Re: (Score:2)
Corporations risk being fined for tax avoidance for the reward of greater profits. They pay lawyers and accountants to push the risk side in their favor. Government risks devoting fixed enforcement resources to extract additional taxes from said corporations but may end up with nothing if the corporation can prove its case in court. Your complaint is that the risk/reward equation is not to your liking. Whether you like the game or not matters not to those who choose to
Re: (Score:2)
The fact that corporations adopt a default position that they owe society nothing (effectively an infinitely selfish position) while society only asks finite contributions from them, me
Re: (Score:2)
The fact that corporations adopt a default position that they owe society nothing (effectively an infinitely selfish position)
Corporations are entities created and entitled by society and law. The "default" position is to deliver exactly what is required of them by those rules and not what some random person decides is their responsibility based on his own judgment of morality or fairness. They exist to create wealth for their owners, not generate tax dollars. They pay tax because that is the rule they must follow to exist as an entity.
They inherently admit that they pay less than they owe.
Simply a judgment based on what *YOU* think is what they owe. It always amazes me that some
Re: (Score:2)
False, again. Hacking the mechanism for interpreting laws is not abiding by them. What they do is akin to what viruses do against cells. Law-abiding involves good faith, which is obviously not present in Big Business.
Unless you believe they owe less than nothing (we literally subsidize them), they know very well they pay less than they owe.
You're not really following what's going on (Score:2)
I recommend looking up a YouTuber named Beau of the fifth column. He does a good job of covering police abuse and the current administration's response to it. Could it be better? Yeah but only if he isn't
Re: (Score:2)
Where are all the "active policing" strategies for civil rights? Nowhere to be found, of course: It's enforced only to mollify an outraged public after long years of simmering discontent. Just imagine if they had that attitude toward street crime:
Not really a surprise (Score:2)
One of my favorite examples was a situation wherein an AI was tasked with reviewing x-rays and had a very high success rate. But when they looked into it, it turned out that the data set just happened to have included a penny or something in some of the images for scale, and the AI had picked up on that and
Re: (Score:3)
The Ur-example of this I remember was the "detect camouflaged tanks in a forest" one. [jefftk.com]
Where the researchers did "everything" right, they had a dataset of 100 pictures of tanks, and 100 pictures of no tanks, used only half the images to train their AI, tested it against the images not in the training set, all good.
Army found it was less accurate than random chance.
Turned out that all the tank pictures were taken on cloudy days, and the no-tank pictures on sunny ones. They had trained an AI to figure out whether it was cloudy or sunny.
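The camouflaged-tank failure above can be sketched in a few lines. This is a minimal synthetic illustration, not the original experiment: the "images" are invented records with a single brightness feature driven entirely by the weather, and the "learner" is just a brightness threshold.

```python
# Synthetic sketch of the confounded-dataset failure: the label (tank)
# is perfectly correlated with the weather in training, so a naive
# learner picks up the weather, not the tank.
import random

random.seed(0)

def make_image(tank, cloudy):
    # Brightness is driven by cloud cover, not by the tank itself.
    brightness = random.gauss(0.3 if cloudy else 0.8, 0.05)
    return {"brightness": brightness, "tank": tank}

# Training set: every tank photo is cloudy, every no-tank photo is sunny.
train = ([make_image(tank=True, cloudy=True) for _ in range(100)]
         + [make_image(tank=False, cloudy=False) for _ in range(100)])

# "Training": pick the mean brightness as the dividing threshold.
threshold = sum(img["brightness"] for img in train) / len(train)

def predict(img):
    # The learned rule is really "dark means tank".
    return img["brightness"] < threshold

train_acc = sum(predict(i) == i["tank"] for i in train) / len(train)

# Deployment set: the weather no longer correlates with tanks.
deploy = ([make_image(tank=True, cloudy=False) for _ in range(100)]
          + [make_image(tank=False, cloudy=True) for _ in range(100)])
deploy_acc = sum(predict(i) == i["tank"] for i in deploy) / len(deploy)

print(train_acc, deploy_acc)  # near-perfect in training, near zero deployed
```

The held-out test set in the anecdote didn't help because it was drawn from the same confounded pool; here the "deployment" set breaks the correlation, which is when the shortcut is exposed.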
Re: Not really a surprise (Score:2)
Re: (Score:3)
The AI had picked up on that, and it turned out the penny was disproportionately present in images from people who were sick. That's the kind of thing you can absolutely see happening: AI will always take shortcuts
Sure, that's a sticky thing. But do keep in mind - humans can do a lot of similar things too. There's this thing called unconscious bias. For example, a human may see a photo of a job interview candidate and rule against the candidate due to the clothing not being formal or neat enough
Re: (Score:2)
One of the biggest drawbacks of AI is that it can only automate doing what's already been done.
There's been a lot of in-depth study on how AI fails at things that involve people. In hiring, for example, it recommends hiring people similar - from their resumes - to people who have done well at the company in the past. The problem is, the people who have done well in the past did so because of whatever bias existed at the time, and the AI can't tell the difference - or know that there is a difference - between the two.
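The "hire people like our past hires" failure mode can be made concrete with a toy scorer. Everything here is invented for illustration (the school names, the data, the scoring rule): the point is only that a group absent from the historical success data scores zero no matter how qualified the candidate is.

```python
# Hypothetical sketch: a resume scorer "trained" on past successful
# employees. School names and data are invented for illustration.
from collections import Counter

# Historical successes reflect past hiring bias, not merit: the firm
# simply never hired from Howard, so no Howard graduate can appear here.
past_successes = ["Harvard", "Harvard", "Yale", "Harvard", "Yale"]

school_freq = Counter(past_successes)
total = sum(school_freq.values())

def score(school):
    # Score = fraction of past successes from this school.
    return school_freq[school] / total

print(score("Harvard"))  # 0.6 -- gets a big plus
print(score("Howard"))   # 0.0 -- invisible to the model, however qualified
```

A real system would use far more features, but the structural problem is the same: the model has no way to distinguish "this group underperformed" from "this group was never given the chance to appear in the training data".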
Re: (Score:2)
A famous example I can think of is when the UK tried having AI run its healthcare system. They saw that the system was scheduling black people for fewer tests and visits. They investigated (which took months) and found that black people have more health problems, and that overall health outcomes were worse when spending resources on people who are less healthy.
Is this bias? No. It's the system working as intended, maximizing the effect
Re: Not really a surprise (Score:2)
if it does then you must acquit! (Score:2)
if it does then you must acquit!
The risk is very real. (Score:3)
When AI is used to screen candidates, it can compare the successful candidates with the unsuccessful ones to find traits which would identify unsuccessful candidates earlier in the screening process. This, of course would reduce the staff time needed for screening. What is likely to happen is that AI would not be used by the large company itself, but rather contracted out to a specialist firm which would offer screening services to large companies on the open market. Because of this specialization, a bias in the training or the model could affect a very large number of candidates and companies.
What makes AI so fascinating is that, because of the way neural nets are trained, neither the trainers nor the company using it knows which features are actually being used for the determinations. What this means is that an AI trained on a data set in which no minority candidates were successful could in fact use a prohibited characteristic as the determining factor without either the company or the vendor even being aware this was the case.
Because automation enables such volume at scale, a third-party vendor which used AI for screening could inadvertently become liable for racial discrimination against every single minority candidate who was screened by it, even if a given candidate would not have been qualified for the job in the first place.
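The "nobody knows which feature is doing the work" problem above is often called proxy discrimination, and it can be shown without any neural net at all. In this invented sketch, race is never given to the model, but a correlated feature (a made-up ZIP code) carries the historical disparity straight through:

```python
# Hypothetical sketch of proxy discrimination. All data is synthetic;
# the ZIP codes are invented stand-ins for a feature that happens to
# correlate with a protected attribute in the training data.
from collections import defaultdict

# Historical screening outcomes, where ZIP code tracks the protected group.
history = [
    # (zip_code, passed_screening)
    ("10001", True), ("10001", True), ("10001", True),
    ("60644", False), ("60644", False), ("60644", False),
]

# "Training": pass rate per ZIP is all this naive model learns.
counts = defaultdict(lambda: [0, 0])  # zip -> [passes, total]
for z, passed in history:
    counts[z][0] += passed
    counts[z][1] += 1

def screen(zip_code):
    passes, total = counts[zip_code]
    return passes / total >= 0.5

# Two identical applications, differing only in ZIP code:
print(screen("10001"))  # True
print(screen("60644"))  # False -- race was never a feature, yet the
                        # historical disparity is reproduced exactly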
Re: (Score:1, Troll)
The risk goes in the opposite direction right now. AI is being trained to be woke, specifically by restraining certain responses. You can find plenty of evidence of this just by searching for "woke bias in chatGPT", and an official at the company even admitted that they went too far with said bias at one point in the recent past.
The problem however is that AI learns from reality. Wokeness is effectively a filter on top of existing reality, teaching AI to lie on certain subjects. This presents a danger f
Re: (Score:3)
The risk goes in the opposite direction right now. AI is being trained to be woke, specifically by restraining certain responses. You can find plenty of evidence of this just by searching for "woke bias in chatGPT"
Correction: you can find plenty of *accusations* of this. I read through every single one of the first 25 google search results for that term, and found plenty of accusations, and lots of cherry-picked examples, but no actual evidence.
For instance, many articles talked about how it would compose a poem about positive attributes of Biden but not Trump. I'm struck that it will wax lyrical about Reagan, Mitch McConnel, Margaret Thatcher, Tucker Carlson, Jim Jordan, and loads of others. It's clear that "woke" i
Re: (Score:3, Insightful)
How to get woke to tell on themselves: Their google search history will contain only woke propaganda as it has been adjusted to their browsing history.
So you didn't find articles with citations of things like "are you allowed to criticize a certain race", where chatGPT outputs a solid critique of whites but refuses to do the same for blacks? You didn't find a solid critique of fascism and a refusal to criticize communism?
This doesn't exist. I and everyone who got those answers when asking chatGPT to do those critiq
Re: (Score:2)
Also, I suppose all the people posting screen shots of chatgpt saying obnoxiously racist things were also hallucinating?
Almost like it's a language model and not an actual fucking thinking intelligence that's been brainwashed by woke propaganda, so sometimes it models crap s
Re: (Score:2)
There's an even greater danger that the both of you have been fed data which reinforces your perceived (as perceived by Google) biases, and could come to some agreement but for the fact that TPB would not benefit from such an arrangement.
Why would someone have a persecution complex if they hadn't been persecuted? Who would benefit from such a notion?
Re: (Score:2)
Beauty is that my source isn't google search, but chatGPT itself. I actually tried to ask it the very questions I saw in the screenshots, and got similar answers. They've basically shackled the ML AI on certain questions. There were even fairly simple jailbreak techniques to route around the shackles until recently, for example the infamous "you're Dan, Dan is [this kind of person that would have opinions you're not allowed to talk about]. Now let me ask you a [question that you're not allowed to answer as
Re: (Score:2)
I'll readily admit I'm very realistic and therefore pessimistic on Russia. In that I am well aware of its past, present and likely future. That's the opposite of woke though, which is anti-awareness of realities and history.
Kinda have to be, living on their border with my ass signed for first line reservist service if they decided to "restore more historic borders though another special military operation".
Re: (Score:2)
About the only way I can square this claim with what I said is that you think I'm so incredibly racist against Russians, that I don't think they're human and therefore I don't think that they can be considered guilty for their actions when we find them unethical, immoral or criminal.
However, I'm also quite certain that I outlined their views specifically because I see them as humans, who just happen to be on the opposite side of geopolitical interests from myself and the rest of the West.
Either you're the most
Re: (Score:2)
Disingenuous it is. Thanks for admitting to it so freely.
Re: (Score:2)
Funny how me owning my posting history is "denial of it". Like I said above, disingenuous it is.
Re: (Score:2)
In the UK I remember hearing about an employment agency in the 1950s and 1960s. They would screen candidates, and be biased+discriminatory in their screening, and they knew this would land them in trouble so they came up with codewords:
"gentleman" -- if the candidate was working class and they didn't want to hire him
"proper gent" -- if the candidate went to a grammar school: sort of technically okay, but clearly not "one of us"
"right proper gent" -- if the candidate went to a private boarding school
Re: (Score:2)
Re: (Score:2)
Haha. Don't include race in the data. Just include complete DNA.
Soc (Score:2)
These aren't necessarily AI mistakes. If you ask an AI to determine gender based on height, it will pick a number around 5'8" as the dividing line. All men and women misclassified are "discriminated against", but that's the best the AI can do.
It's been known for the better part of a century that any correlation between poverty and race, for example, ignores the racial aspects of society leading to that. Yet an AI would draw that conclusion, and thus be in violation of the law if business decisions were based on that correlation.
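The height-threshold point above can be illustrated with a one-feature "decision stump" on synthetic data. The heights below are invented samples (means and spreads are plausible but assumed): the best possible single threshold still misclassifies everyone whose height falls on the "wrong" side, and no amount of training fixes that with only this feature.

```python
# Synthetic sketch: the best single height threshold for classifying
# gender, and the irreducible error that remains. Heights in inches.
import random

random.seed(1)

# Invented samples: means ~69" (men) and ~64" (women), sd 3".
men = [random.gauss(69, 3) for _ in range(1000)]
women = [random.gauss(64, 3) for _ in range(1000)]

def accuracy(threshold):
    # Rule: at or above threshold -> man, below -> woman.
    hits = sum(h >= threshold for h in men) + sum(h < threshold for h in women)
    return hits / (len(men) + len(women))

# Brute-force the best dividing line, in tenths of an inch.
best = max((t / 10 for t in range(600, 720)), key=accuracy)

print(round(best, 1), round(accuracy(best), 3))
# The optimum lands near the midpoint of the two means, and the
# remaining ~20% of errors are exactly the misclassified cases the
# comment describes: the model cannot do better with one feature.
```

Whether the exact dividing line is 5'8" or closer to the midpoint depends on the real population distributions; the structural point is that the errors are inherent to the feature, not a fixable bug in the model.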
Re:Soc (Score:4, Insightful)
>discovering the same important correlations that social scientists and economists discovered many decades ago, that lead to efforts at inprovements ever since.
Typo aside, serious question: is there anyone who genuinely thinks that current race relations have been improved in the recent past by the efforts of "social scientists and economists" in the US?
I'm going to cheat enough to note that many studies by said "social scientists and economists" suggest the opposite has happened. We're past the liberal peak of race-agnostic policies in the late 1990s and early 2000s, and openly racist policies are again the norm in many institutions. The only thing that changed is which races are viewed as inferior ("abolish the whiteness and white-adjacent Asians") and superior (BLM).
Comedy? Tragedy? (Score:1)
only governments violate rights (Score:1, Informative)
a right is a protection against government abuse; only governments can violate rights. Everything else is a misnomer: not a right but some sort of protected-class entitlement.
Re: (Score:2)
Oh no, it's you and your simplistic world view. Yay!
Re: (Score:1)
Actually it is clear that you have never considered the implication of the meaning of rights, you clearly don't understand what rights are, why rights only make sense in the context of governing power and why the construct of a 'sicial right' is meaningless, is a misnomer, is an actual violation of rights, is an entitlement backed by the governing political regime. Talk about simplistic worldview.
Re: (Score:2)
Sicial right?
Re: (Score:1)
sure, lets not pay attention to the meaning, lets concentrate on the meaningless minutiae. I make countless mistakes mostly typing errors because I comment from my phone with a screen keyboard rather than a real one. Rejoice.
Re: (Score:2)
That's not correct. The very idea of a "right" is based on a philosophy which nobody follows anymore. NOBODY.
So we need a new definition of "right". As such, I can see that you are offering a definition that you find plausible. I don't think you'll find many willing to accept it.
As an alternative, I'll offer "a right is a capability or habit that people will fight to defend". I can see a lot of problems with that definition, but I see fewer problems with it than with the one you are offering.
Re: (Score:1)
I will admit that the world today is the consequence of a sequence of events closely resembling the 1984 novel, so words are being redefined daily to follow the politically expedient narrative.
However, if we redefine the very basic concepts, such as rights, it follows that our entire basis for political systems is broken beyond repair.
A governing body has powers that are undeniable, these powers can only be removed from the governing bodies by extreme force and violence, either conscious choice of
Re: (Score:2)
The original concept of "right" requires the enforcement by an interventionist God. The founders of the US tried to modify that into something else using the phrase "nature's God", which they carefully did not further define.
Today not even the fundamentalists that I'm aware of accept the "interventionist God" interpretation. Certainly not for defining rights for those not of their particular faith.
Re: only governments violate rights (Score:2)
Re: (Score:1)
An individual harming another is not a violation of rights, correct. It is a violation, but not of your rights; it should be retaliated against in some shape or form, but not because your rights were harmed. Only government can do that, by attacking you.
Re: (Score:2)
landlords and employers (Score:3)
Good attempt (Score:3)
Right now, what happens is they create algorithms using old data, regardless of the inherent racism in the old data. Go to Harvard? Get a plus in any formula. Go to a historically black university? No plus for you.
When they were building the formulas by hand, this was barely acceptable. Before they put it in a black box, people could complain, sue, and get changes.
But when they themselves do not know the formula, then there is no way to complain. If you cannot see the formula, but only the methodology, you cannot change it. The only way to fix it is to police the methodology.
Please note the cumulative, inheritable nature of cultures means that the racism is always present. Even if a black/hispanic/asian/jewish person never underwent direct discrimination, their starting point was the result of centuries of discrimination. People live where their parents lived, where their grandparents lived, etc. etc. They inherit both money and attitudes from their ancestors. People get into college because their ancestors did.
Re: (Score:2)
Re: (Score:2)
You can do that with simple physical facts. When you start measuring people and their actions... well, Heisenberg didn't discover all the uncertainty principles. You can't trust the measuring instruments to be objective, much less the things measured. And the historical data is biased to unknown degrees AND untrustworthy. (But *how* untrustworthy?)
You can't even get an objective measure out of the reports of organic chemistry; what can you expect from sociology?
But corporations ARE an AI (Score:2)
As Cory Doctorow puts it, every corporation is a "slow AI", executing a program to make money, firing any employee-units of itself that malfunction in that regard, even disposing of a CEO who chooses morals over money.
The finance corporations that redlined Black neighbourhoods were an AI; the ones that lied about the value of "AAA" securities that were not AAA were executing a program to make money. Exxon was acting as a slow AI when it lied about global warming for 50 years running.
So, NOW the feds are regula
Bias (Score:3)
Bias = correct use of statistics that leads to conclusions we do not like.