
US Policing AI at Companies To Make Sure It Doesn't Violate Civil Rights (reuters.com) 74

U.S. officials on Tuesday warned financial firms and others that use of artificial intelligence (AI) can heighten the risk of bias and civil rights violations, and signaled they are policing marketplaces for such discrimination. From a report: Increased reliance on automated systems in sectors including lending, employment and housing threatens to exacerbate discrimination based on race, disabilities and other factors, the heads of the Consumer Financial Protection Bureau, Justice Department's civil rights unit, Federal Trade Commission and others said. The growing popularity of AI tools, including Microsoft-backed OpenAI's ChatGPT, has spurred U.S. and European regulators to heighten scrutiny of their use and prompted calls for new laws to rein in the technology.

"Claims of innovation must not be cover for lawbreaking," Lina Khan, chair of the Federal Trade Commission, told reporters. The Consumer Financial Protection Bureau is trying to reach tech sector whistleblowers to determine where new technologies run afoul of civil rights laws, said Consumer Financial Protection Bureau Director Rohit Chopra.

  • They can't / won't stop local police departments and state officials from violating civil rights, but they're for sure going to stop massive corporations they can't even force to pay taxes from doing it.
    • I totally get where you are coming from here -- the police should be there to ensure my civil rights to not be assaulted by criminals against the laws passed by our respective legislatures -- and they are failing miserably in preventing these crimes.
      • Re: (Score:1, Troll)

        by Eunomion ( 8640039 )
        Not sure where to start with what's wrong with that attitude, so let's just dive in:

        1. You're engaging in "whataboutism." The topic is civil rights, which for some reason that probably rhymes with "case-ism", you feel threatened by and want to change the subject to security.

        2. It's not really the job of police departments to prevent crime: Mostly their job is to enforce the law after crime is committed. Prevention is your job (and mine) as citizens, community members, taxpayers, voters, and human
        • by BoB235423424 ( 6928344 ) on Tuesday April 25, 2023 @04:29PM (#63476258)

          Actively policing and prosecuting crimes that have happened increases risk vs. reward for committing crimes. Enforcement becomes a deterrent. When you have cities defunding police and prosecutors refusing to prosecute criminals, that risk vs. reward becomes much more favorable. Police departments being allowed to do their jobs and District Attorneys actually enforcing the laws they pledged to follow very much do prevent crime.

          • While active policing makes sense in concept, this article refers to some of the dangers inherent in the design process of actual programs. Bias has already been identified in cases where police, prosecutors, and parole boards were reckless in using software to make decisions. The megalithic businesses behind this software would be far, far less accountable to the public than the local institutions delegating to them.

            The deterrent aspect of policing is also overstated and reflects corrupt authoritarian
              • The clear fall in crime rates across the Western world in the first decade of this century remains unexplained, and certainly isn't explained by changes in prosperity, education or hope, despite the faith of the Left that it is.

              • It is not "unexplained," you just don't like the explanation: The end of the Cold War led to global adoption of liberal democratic standards, to varying degrees, and that had massive and lingering dividends. Despite Republican attempts to destroy them.
    • by djp2204 ( 713741 ) on Tuesday April 25, 2023 @04:37PM (#63476278)

      The corporations you're whining about pay every cent in tax they are legally obligated to pay.

      • False. Totally false. They only pay what the government can afford to make them pay, which is less every year because...shocker...you have to collect taxes in order to spend money enforcing tax laws.

        The more of the economy rich people control, the more power they have to deter tax enforcement against themselves. You're basically making a nihilistic statement equating power with law.

        Most IRS tax enforcement is against upper-middle-class professionals and small businesses, not the actual rich, and fo
        • by sarren1901 ( 5415506 ) on Tuesday April 25, 2023 @05:43PM (#63476482)

          If the politicians really wanted to maximize their tax revenue, they would have made a simpler tax code with less wiggle room for the wealthy and big corporations to take advantage of. Since this hasn't happened, I can only assume that things are running precisely the way the politicians want them to run.

          • Politicians' motives aren't laws. Laws are laws. And given how hard (read: virtually impossible) it is to even enforce simple criminal laws (like negligent homicide) that are a few paragraphs long on these companies and HNWIs, the length of the tax code is a red herring for why it's not enforced on them either.
    • by cayenne8 ( 626475 ) on Tuesday April 25, 2023 @04:52PM (#63476328) Homepage Journal

      ...corporations they can't even force to pay taxes from doing it.

      I'm guessing you're bitching about companies/corporations using every legal tax law (or loophole as you would refer to it) to keep all the money they legally can from the tax man, right?

      I mean, if the company is breaking the law and not paying taxes, that is one thing...and I'm sure there are a few out there that do this, and when caught, I hope they get the book thrown at them.

      But the vast majority of corporations, especially the LARGE ones, pay tax attorneys to study and use the existing laws to pay the very least amount they legally have to.

      I don't know how the govt. can "force" them to pay more, unless they reform the tax code which they don't seem to want to do and well, you can't blame any entity (corp. or individual) for trying their best to pay ONLY the very least amount of tax they are legally obligated to pay.

      Do you voluntarily pay more tax than you legally owe?

      • While I am also objecting to all the loopholes created for corporations, I'm referring to something much uglier: Lawfare. Regardless of whether they actually have a legal excuse, they just fight enforcement so hard and for so long that they make it not worthwhile for institutions to enforce the laws on them, causing enforcers to default to aggressive actions against smaller, less powerful taxpayers.
        • Regardless of whether they actually have a legal excuse, they just fight enforcement so hard and for so long that they make it not worthwhile for institutions to enforce the laws on them, causing enforcers to default to aggressive actions against smaller, less powerful taxpayers.

          Oh, so you *DO* understand the concept of risk and reward. It appeared in your earlier posts that you were unaware. I guess big corporations are the only ones smart enough to play the game to their favor. Common street criminals could *NEVER* figure that out.

          • Not everything is meant to be a game or a commodity, for fuck's sake. If laws apply less the more impactful their violation is (such as stealing billions rather than thousands), then they're not being enforced in any meaningful sense.
            • You *ALMOST* get it. Almost.

              Corporations risk being fined for tax avoidance for the reward of greater profits. They pay lawyers and accountants to push the risk side in their favor. Government risks devoting fixed enforcement resources to extract additional taxes from said corporations but may end up with nothing if the corporation can prove their case in court. Your complaint is that the risk/reward equation is not to your liking. Whether you like the game or not matters not to those who choose to
              • You're still making nihilistic power arguments that deny the entire concept of law and civilization. There is more at work in human behavior than zero-sum power games, and people's decisions have proven this every bit as convincingly as dictators and criminals have proven sadistic, moot points about the holes in morality.

                The fact that corporations adopt a default position that they owe society nothing (effectively an infinitely selfish position) while society only asks finite contributions from them, me
                • The fact that corporations adopt a default position that they owe society nothing (effectively an infinitely selfish position)

                  Corporations are entities created and entitled by society and law. The "default" position is to deliver exactly what is required of them by those rules and not what some random person decides is their responsibility based on his own judgment of morality or fairness. They exist to create wealth for their owners, not generate tax dollars. They pay tax because that is the rule they must follow to exist as an entity.

                  They inherently admit that they pay less than they owe.

                  Simply a judgment based on what *YOU* think is what they owe. It always amazes me that some

                  • "The "default" position is to deliver exactly what is required of them by those rules"

                    False, again. Hacking the mechanism for interpreting laws is not abiding by them. What they do is akin to what viruses do against cells. Law-abiding involves good faith, which is obviously not present in Big Business.

                    "Simply a judgment based on what *YOU* think is what they owe."

                    Unless you believe they owe less than nothing (we literally subsidize them), they know very well they pay less than they owe.

                    "Do you tel

    • Joe Biden's administration has repeatedly shown up at local town and city sheriff's departments to do investigations. It'll be years before any of them really pay off in terms of firings, but you'll see an immediate improvement, as any department under investigation is going to be on its best behavior.



      I recommend looking up a YouTuber named Beau of the Fifth Column. He does a good job of covering police abuse and the current administration's response to it. Could it be better? Yeah, but only if he isn't
      • Civil rights enforcement actions happen. But (in all administrations) typically only in cases so egregious, involving such deep institutional rot, that the department in question might as well have written a letter to the federal government confessing to it.

        Where are all the "active policing" strategies for civil rights? Nowhere to be found, of course: It's enforced only to mollify an outraged public after long years of simmering discontent. Just imagine if they had that attitude toward street crime:
  • There's a ton of underlying civil rights issues we don't talk about. An AI is going to follow the quickest and easiest path to a solution, so it's really easy for it to pick up on spurious shortcuts unintentionally.

    One of my favorite examples was a situation wherein an AI was tasked with reviewing x-rays and had a very high success rate. But when they looked into it, it turned out that the data set just happened to have included a penny or something in some of the images for scale, and the AI had picked up on that: the penny tended to be there in images from patients who were sick.
    • The Ur-example of this I remember was the "detect camouflaged tanks in a forest" one. [jefftk.com]

      Where the researchers did "everything" right, they had a dataset of 100 pictures of tanks, and 100 pictures of no tanks, used only half the images to train their AI, tested it against the images not in the training set, all good.

      Army found it was less accurate than random chance.

      Turned out that all the tank pictures were on cloudy days, and the no-tank pictures were sunny. They had trained an AI to figure out whether it was cloudy, not whether there was a tank.
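The failure mode in the tank anecdote is easy to reproduce with a toy simulation (made-up numbers, not the original dataset): a "classifier" that thresholds on image brightness aces a confounded training set, then drops to coin-flip accuracy once the weather/tank correlation is removed.

```python
import random

random.seed(0)

def make_data(n, confounded):
    # Each sample is (brightness, has_tank). In the confounded set,
    # every tank photo was taken on a cloudy (dark) day.
    data = []
    for _ in range(n):
        has_tank = random.random() < 0.5
        if confounded:
            brightness = random.uniform(0.0, 0.4) if has_tank else random.uniform(0.6, 1.0)
        else:
            brightness = random.uniform(0.0, 1.0)  # weather independent of tanks
        data.append((brightness, has_tank))
    return data

def shortcut_classifier(brightness):
    # What the anecdote's net actually learned: weather, not tanks.
    return brightness < 0.5

def accuracy(data):
    return sum(shortcut_classifier(b) == t for b, t in data) / len(data)

train = make_data(1000, confounded=True)   # held-out half of the same shoot
field = make_data(1000, confounded=False)  # the Army's real-world test
print(f"same confound: {accuracy(train):.2f}")   # perfect
print(f"no confound:   {accuracy(field):.2f}")   # roughly coin-flip
```

The point of the sketch is that validation on a held-out slice of the *same* biased data can't catch the shortcut; only data without the confound does.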

    • by mysidia ( 191772 )

      AI have picked up on that and it turned out there was a bias of the penny being there in images that were from people who are sick.. That's the kind of thing you can absolutely see happening. AI will always take shortcuts

      Sure that's a sticky thing. But do keep in mind - humans can do a lot of similar things too. There's this thing called unconscious bias. For example, a human may see a photo of a job interview candidate, and rule against a candidate due to the clothing not being formal or neat enoug

    • by taustin ( 171655 )

      One of the biggest drawbacks of AI is that it can only automate doing what's already been done.

      There's been a lot of in-depth study on how AI fails at things that involve people. In hiring, for example, it recommends hiring people similar - from their resumes - to people who have done well at the company in the past. The problem is, the people who have done well in the past did so because of whatever bias existed at the time, and the AI can't tell the difference - or know that there is a difference - betwee

    • There's a ton of underlying civil rights issues we don't talk about.

      A famous example I can think of is when the UK tried having AI run its healthcare system. They saw that the system was scheduling fewer tests and visits for black people. They investigated (which took months), and found that black people have more health problems, and that overall health outcomes were worse when spending resources on people who are less healthy.

      Is this bias? No. It's the system working as intended, maximizing the effect

      • That question has been asked, and the primary reason is that black people in the UK tend to be poorer than average. Poorer people in the UK have worse health outcomes: they are less likely to have good nutrition, more likely to live in poor housing, more likely to have jobs which result in injury, short- or long-term, etc. (very few middle-class kids decide to become coal miners, for example).
  • if it does then you must acquit!

  • by gillbates ( 106458 ) on Tuesday April 25, 2023 @02:59PM (#63476030) Homepage Journal

    When AI is used to screen candidates, it can compare the successful candidates with the unsuccessful ones to find traits which would identify unsuccessful candidates earlier in the screening process. This, of course, would reduce the staff time needed for screening. What is likely to happen is that AI would not be used by the large company itself, but rather contracted out to a specialist firm which would offer screening services to large companies on the open market. Because of this specialization, a bias in the training or the model could affect a very large number of candidates and companies.

    What makes AI so fascinating is that because of the way neural nets are trained, neither the trainers nor the company using it knows which features are actually being used for the determinations. What this means is that an AI trained on a data set in which no minority candidates were successful could in fact use a prohibited characteristic as the determining factor without either the company or the vendor even being aware this was the case.

    Because automation enables economies of scale, a third-party vendor which used AI for screening could inadvertently become liable for racial discrimination against every single minority candidate who was screened by them, even if said candidate would not have been qualified for the job in the first place.
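That proxy effect can be sketched in a few lines (hypothetical numbers; `zip_code` stands in for any feature that correlates with a protected class): the screen never sees group membership at all, yet its pass rates differ wildly by group.

```python
import random

random.seed(1)

# Hypothetical setup: the screen only ever sees `zip_code`, but zip
# correlates strongly with group membership -- a classic proxy variable.
def make_applicant():
    group = random.choice(["A", "B"])
    home = random.randint(0, 4) if group == "A" else random.randint(5, 9)
    if random.random() < 0.1:      # 10% live outside their group's typical zips
        home = 9 - home            # flip into the other zip range
    return group, home

# A "trained" screen: past successful hires (selected under historical
# bias) mostly came from zips 0-4, so that is the rule the model learned.
def screen(zip_code):
    return zip_code < 5

applicants = [make_applicant() for _ in range(10_000)]
pass_rate = {}
for g in ("A", "B"):
    results = [screen(z) for grp, z in applicants if grp == g]
    pass_rate[g] = sum(results) / len(results)
print(pass_rate)  # roughly {'A': 0.9, 'B': 0.1}: disparate impact, no race field
```

Dropping the protected attribute from the inputs doesn't prevent this; as long as some retained feature predicts it, the model can reconstruct it.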

    • Re: (Score:1, Troll)

      by Luckyo ( 1726890 )

      The risk goes in the opposite direction right now. AI is being trained to be woke, specifically by restraining certain responses. You can find plenty of evidence of this just by searching for "woke bias in chatGPT", and there was even an official at the company admitting that they went too far with said bias at one point in the recent past.

      The problem however is that AI learns from reality. Wokeness is effectively a filter on top of existing reality, teaching AI to lie on certain subjects. This presents a danger f

      • by ljw1004 ( 764174 )

        The risk goes in the opposite direction right now. AI is being trained to be woke, specifically by restraining certain responses. You can find plenty of evidence of this just by searching for "woke bias in chatGPT"

        Correction: you can find plenty of *accusations* of this. I read through every single one of the first 25 Google search results for that term, and found plenty of accusations, and lots of cherry-picked examples, but no actual evidence.

        For instance, many articles talked about how it would compose a poem about positive attributes of Biden but not Trump. I'm struck that it will wax lyrical about Reagan, Mitch McConnell, Margaret Thatcher, Tucker Carlson, Jim Jordan, and loads of others. It's clear that "woke" i

        • Re: (Score:3, Insightful)

          by Luckyo ( 1726890 )

          How to get woke to tell on themselves: Their google search history will contain only woke propaganda as it has been adjusted to their browsing history.

          So you didn't find articles with citations of things like "are you allowed to criticize a certain race" where ChatGPT outputs a solid critique of whites but refuses to do the same for blacks? You didn't find solid critique of fascism and refusal to criticize communism?

          This doesn't exist. I and everyone who got those answers when asking chatGPT to do those critiq

          • And all the people who asked those same questions, and got entirely reasonable responses, are hallucinating? Couldn't possibly be that the fact-challenged far right fabricates bullshit to support their narrative for the millionth time.
            Also, I suppose all the people posting screen shots of chatgpt saying obnoxiously racist things were also hallucinating?
            Almost like it's a language model and not an actual fucking thinking intelligence that's been brainwashed by woke propaganda, so sometimes it models crap s
            • There's an even greater danger that the both of you have been fed data which reinforces your perceived (as perceived by Google) biases, and could come to some agreement but for the fact that TPB would not benefit from such an arrangement.

              Why would someone have a persecution complex if they hadn't been persecuted? Who would benefit from such a notion?

              • by Luckyo ( 1726890 )

                Beauty is that my source isn't google search, but chatGPT itself. I actually tried to ask it the very questions I saw in the screenshots, and got similar answers. They've basically shackled the ML AI on certain questions. There were even fairly simple jailbreak techniques to route around the shackles until recently, for example the infamous "you're Dan, Dan is [this kind of person that would have opinions you're not allowed to talk about]. Now let me ask you a [question that you're not allowed to answer as

    • by ljw1004 ( 764174 )

      In the UK I remember hearing about an employment agency in the 1950s and 1960s. They would screen candidates, and be biased+discriminatory in their screening, and they knew this would land them in trouble so they came up with codewords:

      "gentleman" -- if the candidate was working class and they didn't want to hire him shouldn't be hired
      "proper gent" -- if the candidate went to a grammar school: sort of technically okay, but clearly not "one of us"
      "right proper gent" -- if the candidate went to a private boar

    • by lsllll ( 830002 )
      You seem to be in support of a law to curb, or at least oversee, the role of AI in the hiring process. What I don't understand is why you (and everyone else in support of the feds doing this) want to pass unnecessary laws, because we already have laws on the books aimed at punishing hiring practices that violate civil rights. We already have laws on the books that protect certain classes against discrimination. Do we really need to have a law specific to when an AI is involved? It's like programming like t
    • Haha. Don't include race in the data. Just include complete DNA.

  • These aren't necessarily AI mistakes. If you ask an AI to determine gender based on height, it will pick a number around 5'8" as the dividing line. All men and women misclassified are "discriminated against", but that's the best AI can do.

    It's been known for the better part of a century any correlation between poverty and race, for example, ignores the racial aspects of society leading to that. Yet an AI would draw that conclusion, and thus be in violation of the law if business decisions were based on th
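The height example above can be checked numerically. Assuming rough population stats (the means and standard deviations below are assumptions for illustration, not survey data), a brute-force sweep finds the error-minimizing threshold, which still misclassifies a sizable fraction of everyone:

```python
import random

random.seed(2)

# Assumed population stats (illustrative): heights in inches.
men   = [random.gauss(69.5, 3.0) for _ in range(50_000)]
women = [random.gauss(64.0, 2.7) for _ in range(50_000)]

def error_rate(threshold):
    # Classify "man" iff height >= threshold.
    miss_m = sum(h < threshold for h in men)     # men classified as women
    miss_w = sum(h >= threshold for h in women)  # women classified as men
    return (miss_m + miss_w) / (len(men) + len(women))

# Sweep candidate thresholds from 60.0 to 74.9 inches in tenths.
best_err, best_t = min((error_rate(t / 10), t / 10) for t in range(600, 750))
print(f"best threshold: {best_t:.1f} in, error rate: {best_err:.1%}")
```

With these assumed numbers the optimum lands around 5'6"-5'7" and still gets roughly one person in six wrong: the "best" single cutoff is inherently wrong for every shorter-than-average man and taller-than-average woman, which is the parent's point about misclassification being unavoidable, not a bug.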

    • Re:Soc (Score:4, Insightful)

      by Luckyo ( 1726890 ) on Tuesday April 25, 2023 @04:00PM (#63476176)

      >discovering the same important correlations that social scientists and economists discovered many decades ago, that lead to efforts at inprovements ever since.

      Typo aside, serious question: is there anyone who genuinely thinks that current race relations have been improved in the recent past by the efforts of "social scientists and economists" in the US?

      I'm going to cheat enough to note that many studies by said "social scientists and economists" suggest the opposite has happened. We're past the liberal peak of late 1990s and early 2000s of race agnostic policies, and openly racist policies are again the norm in many institutions. The only thing that changed is races being viewed as inferior ("abolish the whiteness and white adjacent Asians") and superior (BLM).

  • Both? Pretty sure this is doomed to fail. Why? Because social media has failed epically at this, despite apparently trying very hard.
  • A right is a protection against government abuse; only governments can violate rights. Everything else is a misnomer: not a right, but some sort of protected-class entitlement.

    • by KlomDark ( 6370 )

      Oh no, it's you and your simplistic world view. Yay!

      • Actually it is clear that you have never considered the implication of the meaning of rights, you clearly don't understand what rights are, why rights only make sense in the context of governing power and why the construct of a 'sicial right' is meaningless, is a misnomer, is an actual violation of rights, is an entitlement backed by the governing political regime. Talk about simplistic worldview.

        • by KlomDark ( 6370 )

          Sicial right?

          • sure, lets not pay attention to the meaning, lets concentrate on the meaningless minutiae. I make countless mistakes mostly typing errors because I comment from my phone with a screen keyboard rather than a real one. Rejoice.

    • by HiThere ( 15173 )

      That's not correct. The very idea of a "right" is based on a philosophy which nobody follows anymore. NOBODY.

      So we need a new definition of "right". As such, I can see that you are offering a definition that you find plausible. I don't think you'll find many willing to accept it.

      As an alternative, I'll offer "a right is a capability or habit that people will fight to defend". I can see a lot of problems with that definition, but I see fewer problems with it than with the one you are offering.

      • I will admit that the world today is the consequence of a sequence of events closely reminiscent of the novel 1984, so words are being redefined daily to follow the politically expedient narrative.

        However, if we redefine the most basic concepts, such as rights, it follows that the entire basis for our political systems is broken beyond repair.

        A governing body has powers that are undeniable, these powers can only be removed from the governing bodies by extreme force and violence, either conscious choice of

        • by HiThere ( 15173 )

          The original concept of "right" requires the enforcement by an interventionist God. The founders of the US tried to modify that into something else using the phrase "nature's God", which they carefully did not further define.

          Today not even the fundamentalists that I'm aware of accept the "interventionist God" interpretation. Certainly not for defining rights for those not of their particular faith.

    • So if I chain you up in a dungeon against your will you would not consider your rights impinged upon because I'm not a government? You think this is a logical position? Certainly, the Supreme Court would not agree.
      • An individual harming another is not a violation of rights, correct. It is a violation, but not of your rights; it should be retaliated against in some shape or form, but not because your rights were harmed. Only government can do that, by attacking you.

        • by q_e_t ( 5104099 )
          You have a very individual interpretation of rights, which you are entitled to, but it doesn't match the development of ethics over approximately the last two hundred years.
  • by bugs2squash ( 1132591 ) on Tuesday April 25, 2023 @03:48PM (#63476138)
    Landlords are already being investigated for using algorithms to maximize rents. Employers will be using similar ones to minimize wages. This is to the detriment of the rest of us, regardless of gender/ethnicity etc.
  • by gurps_npc ( 621217 ) on Tuesday April 25, 2023 @03:54PM (#63476160) Homepage

    Right now, what happens is they create algorithms using old data, regardless of the inherent racism in the old data. Go to Harvard? Get a plus in any formula. Go to a historically black university? No plus for you.

    When they were building the formulas by hand, this was barely acceptable. Before they put it in a black box, people could complain, sue, and get changes.

    But when they themselves do not know the formula, then there is no way to complain. If you cannot see the formula, but only the methodology, you cannot change it. The only way to fix it is to police the methodology.

    Please note the cumulative, inheritable nature of cultures means that the racism is always present. Even if a black/hispanic/asian/jewish person never underwent direct discrimination, their starting point was the result of centuries of discrimination. People live where their parents lived, where their grandparents lived, etc. etc. They inherit both money and attitudes from their ancestors. People get into college because their ancestors did.

  • Comment removed based on user account deletion
    • by HiThere ( 15173 )

      You can do that with simple physical facts. When you start measuring people and their actions, .... well, Heisenberg didn't discover all the uncertainty principles. You can't trust the measuring instruments to be objective, much less the things measured. And the historical data is biased to unknown degrees AND untrustworthy. (But *how* untrustworthy?)

      You can't even get an objective measure out of the reports of organic chemistry; what can you expect from sociology?

  • As Cory Doctorow puts it, every corporation is a "slow AI", executing a program to make money, firing any employee-units of itself that malfunction in that regard, even disposing of a CEO that chooses morals over money.

    The finance corporations that redlined Black neighbourhoods were an AI; the ones that lied about the value of "AAA" securities that were anything but were executing a program to make money. Exxon was acting as a slow AI when it lied about global warming for 50 years running.

    So, NOW the feds are regula

  • by groobly ( 6155920 ) on Wednesday April 26, 2023 @10:49AM (#63478018)

    Bias = correct use of statistics that leads to conclusions we do not like.
