NYC's Anti-Bias Law For Hiring Algorithms Goes Into Effect (techcrunch.com) 84

After months of delays, New York City today began enforcing a law that requires employers using algorithms to recruit, hire or promote employees to submit those algorithms for an independent audit -- and make the results public. From a report: The first of its kind in the country, the legislation -- New York City Local Law 144 -- also mandates that companies using these types of algorithms make disclosures to employees or job candidates. At a minimum, the reports companies must make public have to list the algorithms they're using as well as an "average score" candidates of different races, ethnicities and genders are likely to receive from said algorithms -- in the form of a score, classification or recommendation. It must also list the algorithms' "impact ratios," which the law defines as the average algorithm-given score of all people in a specific category (e.g., Black male candidates) divided by the average score of people in the highest-scoring category.
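The impact-ratio arithmetic the law describes reduces to a few lines. A minimal sketch, assuming hypothetical categories and scores (none of these figures come from the law or the article):

```python
# Sketch of the "impact ratio" computation Local Law 144 describes:
# the average algorithm-given score of each demographic category divided
# by the average score of the highest-scoring category. All categories
# and scores below are hypothetical.
from statistics import mean

scores_by_category = {
    "category_a": [82, 75, 90],   # invented algorithm-given scores
    "category_b": [70, 65, 72],
    "category_c": [88, 91, 85],
}

averages = {cat: mean(s) for cat, s in scores_by_category.items()}
top = max(averages.values())
impact_ratios = {cat: avg / top for cat, avg in averages.items()}

for cat, ratio in sorted(impact_ratios.items()):
    print(f"{cat}: {ratio:.2f}")
```

In the law's terms, a ratio well below 1.0 for a category is the kind of gap the published report would surface.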

Companies found not to be in compliance will face penalties of $375 for a first violation, $1,350 for a second violation and $1,500 for a third and any subsequent violations. Each day a company uses an algorithm in noncompliance with the law constitutes a separate violation -- as does failure to provide sufficient disclosure. Importantly, the scope of Local Law 144, which was approved by the City Council and will be enforced by the NYC Department of Consumer and Worker Protection, extends beyond NYC-based workers. As long as a person is performing or applying for a job in the city, they're eligible for protections under the new law.
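Under a literal reading of that schedule, daily violations add up quickly. A sketch of the arithmetic (my reading of the summary, not legal guidance; the ten-day figure is just an example):

```python
# Sketch of the summary's penalty schedule: $375 for the first violation,
# $1,350 for the second, $1,500 for the third and each subsequent one.
# Each day of noncompliant use counts as a separate violation.
def total_penalty(violations: int) -> int:
    """Total fine in dollars for a given number of violations."""
    if violations <= 0:
        return 0
    schedule = [375, 1350]
    total = sum(schedule[:violations])
    if violations > 2:
        total += (violations - 2) * 1500
    return total

# E.g. ten days of noncompliant use would be ten violations:
print(total_penalty(10))  # 375 + 1350 + 8 * 1500 = 13725
```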


Comments Filter:
  • by caseih ( 160668 ) on Wednesday July 05, 2023 @10:30AM (#63658686)

    This is the first I've heard of this law. Sounds great in theory, unless it's just about maintaining diversity.

    I've always felt that the real danger from AI isn't the skynet sort of thing the media goes on about, nor what the AI moguls talk about, but rather the simple thing of putting AIs in charge of making all sorts of decisions that involve human beings' lives, wants, and needs. Everything from loans to health care coverage to hiring can and will be done by AI in the not-so-distant future. Need access to a government program or assistance? Must convince the AI. No appeals; decisions are final because the AI said so.

    I've long felt that in all of these sorts of applications, if the company, organization, or government cannot tell precisely the grounds on which the AI or any algorithm made its decision, AI and algorithms should not be used. Full stop. And there should always be a means for appealing to real humans.

    But alas, I fear as much as possible will be turned over to AI models, where patterns will be discerned and decisions made without us, or their creators, having any clue how they came to their conclusions.

    On the other hand, with this law, who does the examination and analysis of the algorithms? What would be the unintended consequences?

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Will it have the unintended consequence of helping to maintain diversity? I believe that's the intent. Not that a company has to maintain a certain percentage of various groups, but that the various groups aren't being discriminated against through some sort of artificial AI bias.

      As a white male in IT, having worked with just about every protected class, and a lot of individuals not in a protected class, I have found that diversity isn't woke, but essential for solving today's complex business challenges.

      • by caseih ( 160668 )

        Thank you for sharing such an insightful comment. It's very good to hear. Encouraging, even.

    • I wonder what data they even train the hiring AIs on. Whatever it is, it's probably going to reinforce, codify, and automate biases and shortcomings already present in the hiring process. People who worked for bad, unresponsive, or smaller employers previously will be penalized. People who held titles that didn't accurately match their job. People trying to switch careers or branch out. People who have skills without necessarily having credentials.

      What the companies pushing the AI will do is add a racial quota

      • by djinn6 ( 1868030 )

        My concern is not that companies will use AI to do whatever they think is the ideal process. It's that they'll all use the exact same AI from a single company, which means anyone blacklisted can never get a job. This is a form of collusion that is not recognized by the law and can be easily abused for all sorts of nefarious purposes.

        If it were one AI per company, developed in-house, then someone being blacklisted (or just incorrectly rated low) by one AI could just go apply to a competitor.

      • by AmiMoJo ( 196126 )

        I read that they train the AI on CVs from employees they deem to be good ones. In other words, the system is designed to build a monoculture of people who all look and act the same, went to the same universities, used the same keywords in their bio, etc.

        You could be right about adding a racial quota, because their shitty "AI" can't recognize a good non-white candidate thanks to the bad training data. Rather than fixing it to address actual, quantifiable, and provable racism, they just try to fudge the output enough

    • let alone an LLM. This is just algorithms. i.e. code. I mentioned this elsewhere, but all this law does is say that the existing anti-discrimination laws about record keeping apply to computer systems.

      The way it works is simple: companies are required to keep records of their hiring processes & decisions. They're also required to keep track of the makeup of their workforce. If the makeup varies substantially from the surrounding community, then the gov't makes them produce the records to prove they
    • This isn't a problem that comes about because of 'AI' per se, at least not large language models etc. It's just any old algorithmic filtering. A regex passed to `grep` counts as an algorithm in this case. They won't have a clue whether or not your model is biased if you're using enough deep learning to obscure it in the weight matrix. And if they DO have some miracle way of deconstructing those weight matrices semantically, well, they'll be billionaires in their own right because this is
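The comment's point that a mere regex already counts as algorithmic screening can be made concrete. A toy sketch with invented resumes and an invented keyword:

```python
import re

# Toy illustration: even a one-line regex filter over resumes is
# "algorithmic" candidate screening. The resumes and the keyword
# are invented for illustration.
resumes = [
    "alice: 6 years kubernetes",
    "bob: 4 years cobol",
]

# "Screen" candidates by keyword; a regex is already an algorithm.
shortlist = [r for r in resumes if re.search(r"kubernetes", r, re.IGNORECASE)]
print(shortlist)  # ['alice: 6 years kubernetes']
```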
  • Their "legal reasoning" for doing so will be something between a poop-stained wad of toilet paper and a recording of them masturbating to snuff films.
    • by twocows ( 1216842 ) on Wednesday July 05, 2023 @11:23AM (#63658938)
      I don't agree. Supreme Court Justices usually do a very good job of post-hoc justification for whatever position they already had. They pull their justifications from written law, "case law" aka other court judgments, but also sometimes from historical laws and judgments or from outside the US. They're not just partisan hacks, they're very highly educated partisan hacks. It's completely unfair to say their legal reasoning is garbage; it's usually a master class in how to justify a conclusion that you already came to.
      • It's completely unfair to say their legal reasoning is garbage; it's usually a master class in how to justify a conclusion that you already came to.

        This sounds like we can't call it garbage unless it came from the Litter parish of County of Cork, Ireland. Instead, we must call it "highly educated rationalization".

  • Job application personality tests need to go, as they're more or less the same thing.

  • by WoodstockJeff ( 568111 ) on Wednesday July 05, 2023 @10:41AM (#63658746) Homepage

    You must prove that you discriminate CORRECTLY.

    What if you do not ask a candidate to indicate race or sex? How will you prove that you didn't discriminate? Or must you make them indicate their race and sex to show that the algorithm you use doesn't make choices based on their declarations?

    • Not asking about race or sex in the algorithm is proof they didn't discriminate. Maybe the whole hiring process should be blind to race and sex to focus on merit. Cue commies warping logic to make it all about race and sex.
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday July 05, 2023 @10:55AM (#63658802) Homepage Journal

      What if you do not ask a candidate to indicate race or sex?

      They can't not ask in many cases. They're often obligated to ask by law [adp.com]. You're not obligated to answer (see same cite) but if you do, they have the opportunity to use it against you. It's illegal to do so, but proving it is hard.

    • by ET3D ( 1169851 )

      The law requires an audit of the algorithm, not the hiring results. It can use test data or historical data from other companies. When it's impossible to judge the candidate's race or gender, that data isn't included in the audit.

      • Then why is the law asking for reports with average scores and such for gender/race/ethnicity categories?
    • The way it works is extremely well documented. For the record, it's that you have to keep records of your hiring practices, and if the makeup of your staff deviates substantially from the makeup of the surrounding community, you need paperwork to show why.

      This was necessary because you'd have factories in a 20% black city with 2% black employees, if that.

      All this does is extend those rules to the current algorithmic hiring practices most companies use. It means a company can't hide behind the "it's
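The disparity trigger described in this thread (workforce makeup deviating substantially from the surrounding community) can be sketched roughly. The 50% threshold here is invented for illustration and is not from any statute:

```python
# Rough sketch of the disparity check the comment describes: compare the
# share of a group in the workforce with its share in the surrounding
# community and flag large gaps. The 50% threshold and all figures are
# invented for illustration.
def flag_disparity(workforce_share: float, community_share: float,
                   threshold: float = 0.5) -> bool:
    """Flag if a group's workforce share is under `threshold` times
    its share of the surrounding community."""
    return workforce_share < threshold * community_share

# The comment's example: a 20% black city with 2% black employees.
print(flag_disparity(workforce_share=0.02, community_share=0.20))  # True
```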
    • by radaos ( 540979 )
      Amazon tried AI-based recruiting and it was a disaster.
      https://www.bbc.com/news/techn... [bbc.com]
      The tool may pick up on certain words that are a proxy for race, sex or other characteristic.
      It doesn't understand that basic principle of statistics: Correlation is not causation.
    • by migos ( 10321981 )
      That's exactly why this law was created - to audit the algorithm to make sure it doesn't hire based on last name, or photo, age, gender, or average income level of the zip code of the mailing address, etc. The most challenging part though, is that once the algorithms are created using machine learning, no one knows what's going on. AI might become discriminatory without anyone's help.
  • by Tyr07 ( 8900565 ) on Wednesday July 05, 2023 @10:42AM (#63658748)

    This is just a continuation of diversity politics

    I can understand if they want to make sure that the method they use doesn't parse any potential racial data and use that to score candidates, but they're looking for the great equal outcome. So if people of (insert race here) who are applying mostly live in a poor neighborhood near that company, which may have resulted in schools that do not educate as well, or in education not being valued as highly in their community, and that results in applications that do not have the desired qualifications, they'll take the basic information, go and discover the person's race, and if there is a pattern, scream racism and discrimination.

    Even if previously there were no way to know the applicant's race or other attributes you could use for discrimination, they're going to turn around, find these people, find their race, and then claim racism.

    • by ET3D ( 1169851 )

      Agreed. The law should have required the audit to be done on specific test data that has only slight differences in the CVs, making it clear that the only difference between candidates is gender or race. Using real data in retrospect isn't statistically valid because, as you say, candidate qualifications will have an effect on the result, and it's possible that candidates of different groups will have different qualifications on average.

      I don't think it's trivial to create such data, but it's worth it in
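The audit design this comment proposes, test CVs identical except for one demographic field, is essentially counterfactual testing. A sketch where `score_cv` is a hypothetical stand-in for the model under audit:

```python
# Sketch of the audit the comment proposes: score pairs of CVs that are
# identical except for a single demographic field and compare outputs.
# `score_cv` is a hypothetical stand-in for the model under audit.
def score_cv(cv: dict) -> float:
    # Stand-in scorer: years of experience drive the score; a fair model
    # should ignore the demographic field entirely.
    return 10.0 * cv["years_experience"]

def counterfactual_gap(cv: dict, field: str, alt_value: str) -> float:
    """Score difference when only `field` is changed to `alt_value`."""
    variant = {**cv, field: alt_value}
    return score_cv(variant) - score_cv(cv)

cv = {"years_experience": 5, "gender": "female"}
print(counterfactual_gap(cv, "gender", "male"))  # 0.0 for this fair stand-in
```

A nonzero gap on any pair would be direct evidence that the demographic field influences the score.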

  • ... I call them up for a screening interview. If I like what I hear, I call them up for an interview. I use a nebulous undefined set of criteria that includes "gut feeling" to decide who to make an offer to.

    Or how about this one: ... I randomly assign each candidate to a gerbil then I hold a gerbil race. The winner gets an offer.

    Or this one:

    I have an AI simulate said gerbil race, then make an offer to the simulated winner.

    --

    OK, the first one may not technically be an algorithm, but it's probably what man

  • It must also list the algorithms' "impact ratios," which the law defines as the average algorithm-given score of all people in a specific category (e.g., Black male candidates) divided by the average score of people in the highest-scoring category.

    What if the algorithm is not "impacting anyone" but rather ruthlessly enforcing meritocracy, and the Goodthinkers don't want to concede that based purely on personal qualifications most of the Black men shouldn't have made the cut.

    When you dig into most of the ass

    • by Tyr07 ( 8900565 )

      Yep.

      Like most things, as soon as it quiets down and no one cares, because it's not actually a problem, the risk of accountability for one's actions and choices appears and the drum beats: a potential scandal is investigated, quickly, look over here, not at what laws and investments we have going on.

    • When you dig into most of the assaults on standardized testing and such things in the name of "fighting racism," you invariably find it was "racist" because[...]

      Blah de blah de blah.

      No, standardized tests work great as a predictor of future success in America, where you are all obsessed with standardized tests and so good performance on the tests is necessary to advance. In other countries the correlation is substantially less strong. Standardized tests test a rather narrow and peculiar set of skills that has

  • How would the algorithm know the different races, ethnicities, genders, and "Black male" categories?

    ‘At a minimum, the reports companies must make public have to list the algorithms they're using as well as an "average score" candidates of different races, ethnicities and genders are likely to receive from said algorithms -- in the form of a score, classification or recommendation. It must also list the algorithms' "impact ratios," which the law defines as the average algorithm-given score of all
  • by PPH ( 736903 ) on Wednesday July 05, 2023 @11:15AM (#63658886)

    It must also list the algorithms' "impact ratios," which the law defines as the average algorithm-given score of all people in a specific category (e.g., Black male candidates) divided by the average score of people in the highest-scoring category.

    We don't inquire about a person's race when we select applications for consideration.

    • We don't inquire about a person's race when we select applications for consideration.

      That's all well and good, but what happens when the applicant comes in for an in-person interview? Depending on the interviewer's prejudices, they may reject all applicants from a specific race, or ethnic group, or prefer to hire from that group.
      • by Tyr07 ( 8900565 )

        It's the equal outcome fight again.

        Basically, it's like how spoiled children get jealous and upset when a sibling does their chores and receives a treat while they did no chores and received nothing: it's "not fair" that their brother or sister got one and they didn't.

      • by PPH ( 736903 )

        That's all well and good, but what happens when the applicant comes in for an in-person interview?

        That's beyond the scope of a hiring algorithm. The AI is an initial filter on applications/resumes. And that isn't an appropriate place to be asking race questions. Once a batch of selected applicants show up for in-person interviews, race becomes evident. And problems with this stage of the hiring process can be analyzed. But we will never know what the rejection statistics are at the first (automated) step.

  • by Virtucon ( 127420 ) on Wednesday July 05, 2023 @11:17AM (#63658896)

    Interview Questions from the Future:

    Choose your favorite sauce:

    • BBQ
    • Soy
    • Ranch
    • Add "mayo" to the list. After all, we want to weed out those pesky Europeans spreading their socialism too.

  • What constitutes a "candidate"? Is it everyone who actually applied for the job with your firm? Or is it a set of fictitious candidates for similar roles, or the full set of actual candidates who applied for similar jobs across the nation? If the latter, your company could receive a rather low "impact ratio" score if you happen to be located in an area that has relatively few qualified candidates from minority groups, even if you hire solely on merit. The relatively larger number of good candidates fro
  • and I must hire that asshole?

  • No more bias when you hire those algorithms.

  • The article talks about AI systems which screen resumes and cover letters. It mentions that people are categorized unfairly, or discriminated against based on attributes, but it never really talks about the single most important aspect: hiring is about getting skilled workers, not about diversity.
  • This may be good or it may be bad. However, I would wager that either way, one unintended side effect will be to strengthen the Republican position in Congress. The actions of CA and NY, and in this case NYC, are pushing more people from the center to the right.
  • It's not a hyperbolic question. Why are hiring practices any different from any other freedom? If I don't like someone, FOR WHATEVER REASON, why should I be forced to hire them? It's so counterintuitive. And if you do hire someone you don't like, isn't that simply forced charity, by the government?
    • It's not a hyperbolic question. Why are hiring practices any different from any other freedom? If I don't like someone, FOR WHATEVER REASON, why should I be forced to hire them? It's so counterintuitive. And if you do hire someone you don't like, isn't that simply forced charity, by the government?

      I think it's to do with the balance that society tries to strike between individual rights and the rights of society as a whole.

      So as you say, if we focus on individual rights, it would seem fine for an employer to not like someone for any reason (e.g. because of the colour of their skin). Furthermore, I should therefore be within my rights to advertise for a position making my condition clear - "Non-aryans need not apply".
      However, if society decides that that sort of behaviour is not acceptable because of the

  • I understand the concern re: black box algorithms / AI producing systematically biased recommendations for hiring decisions, but it doesn't follow that a good and reasonable algorithm must produce recommendations that are "fair" with respect to equal outcomes. Even the simplest and most reasonable algorithm, "if X has 5+ years experience then hire," will probably cut differently across different socioeconomic and demographic segments. It's not appropriate for the law to fine c
