NYC Announces Plans To Test Algorithms For Bias (betanews.com)

The mayor of New York City, Bill de Blasio, has announced the formation of a new task force to examine the fairness of the algorithms used in the city's automated systems. From a report: The Automated Decision Systems Task Force will review algorithms currently in use to determine whether they are free from bias. Representatives from the Department of Social Services, the NYC Police Department, the Department of Transportation, the Mayor's Office of Criminal Justice, the Administration for Children's Services, and the Department of Education will be involved, and the aim is to produce a report by December 2019. However, it may be some time before the task force has any effect. While a report is planned for the end of next year, it will merely recommend "procedures for reviewing and assessing City algorithmic tools to ensure equity and opportunity" -- it will be a while longer before any recommendation is assessed and implemented.

Comments Filter:
  • by Anonymous Coward on Thursday May 17, 2018 @01:55PM (#56628514)

    ...that happens to be correct, do they throw the whole thing out to spare everyone's feelings?

    • by cayenne8 ( 626475 ) on Thursday May 17, 2018 @02:02PM (#56628552) Homepage Journal

      ...that happens to be correct, do they throw the whole thing out to spare everyone's feelings?

      I'm guessing it won't come to that; they won't allow any white people (especially males) on that task force, just to make sure it keeps things "fair".

      • by arglebargle_xiv ( 2212710 ) on Friday May 18, 2018 @12:51AM (#56631128)
        The headline is actually reversed. Algorithms aren't biased, or racist, or whatever; they take all the input data they can get, crunch the numbers, and produce a result based on that data. The goal in this case is to take unbiased algorithms and, if they produce a result that SJWs object to, bias them to produce a result more in line with what the SJWs want to see. So the idea is to make the algorithms biased, not unbiased.
    • by butchersong ( 1222796 ) on Thursday May 17, 2018 @02:06PM (#56628576)
      They will just keep updating it to not be aware of x and then getting upset when they still see the correlation. So rather than using the AI to determine where cops should be deployed heaviest and having it return "African American neighborhoods" they'll instead get a result like "where menthol cigarette sales are highest" or some-such.
    • by Anonymous Coward on Thursday May 17, 2018 @02:11PM (#56628604)

      An unbiased algorithm means removing race and sex from consideration. But this is not what is being sought. Removing these two factors from school admission or job qualification determinations will produce results that will be automatically labeled racist or sexist. Competency and quality will always rise to the top regardless of race or sex, but that is not an acceptable result in today's society.

      • by Ichijo ( 607641 )

        Is it legal to advertise more in minority communities in order to expand the pool of minority applicants?

      • by Altus ( 1034 )

        I guess that depends on how you are measuring competency among people who don't yet work for you (in your example, anyway). If you are ignoring race and sex but are using other indicators that are simply stand-ins for race and sex (went to a women's university or a predominantly black university, home address in a minority part of town), then you end up with the same problem; you've just changed the way the bias is introduced. (A rough way to test for such stand-ins is sketched below.)
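
        A rough, hypothetical sketch of that stand-in check (the column names, the data, and the use of scikit-learn are all assumptions for illustration, not anything any city has described): train a trivial classifier to predict the protected attribute from each nominally neutral feature; features that predict it well are effectively proxies.

        ```python
        # Flag features that act as stand-ins for a protected attribute.
        # Column names and data are hypothetical; the alert threshold is a choice.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def proxy_score(df: pd.DataFrame, feature: str, protected: str) -> float:
            """Mean cross-validated accuracy of predicting `protected` from `feature`.

            Scores well above the base rate suggest the feature is acting
            as a stand-in for the protected attribute.
            """
            X = pd.get_dummies(df[[feature]])  # one-hot encode categorical values
            y = df[protected]
            return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

        # Hypothetical usage:
        # for col in ["university", "home_zip"]:
        #     print(col, proxy_score(applicants, col, "race"))
        ```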

      • by AmiMoJo ( 196126 )

        Explain why you believe that's not what they are seeking?

        Do you have some evidence or is this just an automatic Pavlovian response? Or just some cynical "must be oppressing white men" rubbish?

    • ...that happens to be correct, do they throw the whole thing out to spare everyone's feelings?

      They either throw it out or change it to get the result they like. If those are the choices, I'd prefer the former.

      Of course they can also go after whoever wrote the racist algorithm.

  • fair (Score:4, Insightful)

    by cascadingstylesheet ( 140919 ) on Thursday May 17, 2018 @02:04PM (#56628568) Journal
    So by "fair" they mean "unfair" ... since we can't have a mere algorithm making decisions based on silly facts and stuff. It might not produce the "right" outcomes.
    • To be "fair", a model might operate on correlations - just a naive fit of the data. Some of these correlations might have a reasonable chance of reflecting cause and effect, and others might be a coincidence. In a perfect world, you'd suss out all of the bogus correlations - but if you are resource constrained, then going after the ones that are reinforcing systemic bias is a reasonable place to direct your efforts.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        What if the bias happens to NOT be a spurious correlation?

        There's no natural law that says that unwanted correlations aren't real. For example, white men really can't jump as well as black ones. Women tend to be better educated. What we need to come to terms with socially is how to accept uncomfortable findings like that without assigning collective blame, to realize that individuals can be exceptions, especially with effort, etc.

        The hard part is dealing with things like crime. We want to stop crime so

        • What if the bias happens to NOT be a spurious correlation?

          Then cascadingstylesheet's critique is valid.

          white men really can't jump as well as black ones

          Well, here's a good example... if you have a scientific reason to say this, then fine - it's a cold hard fact. But if you just happen to notice that blacks are overrepresented in athletics, that does not tell you whether blacks jump higher or whether blacks are raised in conditions which result in an emphasis on athletics. If you write an athlete-recruiting algorithm and it weights blacks for no particular reason other than this correlation... that is exactly the

      • by rtb61 ( 674572 )

        What seems to be confusing the geeks and nerds is the whole idea of bias in algorithms; it makes no sense until you stop and think. What if a couple of racist idiots got the programmers to add racist elements to the data manipulation functions, and they simply don't want to fess up to it and instead claim biased algorithms? You know, the political appointees who got into office purely based on their electioneering and nothing whatsoever to do with their honesty, reliability, skill set or fitness for the job. T

    • Reality is racist (Score:2, Insightful)

      by Anonymous Coward

      It's the typical response you get from liberals when you point out that their reasoning doesn't match the world: reality is racist.

      Try to bring crime statistics or other facts into a discussion - about how weird it is that the areas with the most liberal voters are also, strangely, the ones with the most crime, or how statistics show it's not racism, that immigrants really do bring crime with them - and they'll just reply that "reality is racist."

      This is the same thing. Of course the computer

    • by rsilvergun ( 571051 ) on Thursday May 17, 2018 @07:14PM (#56629946)
      There's this [nytimes.com]. That doesn't seem very fair to me. Every bit of data fed into that algorithm was a fact, though. Now, if some of those facts were chosen... shall we say, "selectively", well, that's fair. It's a free country, right? Right?
  • If they want precrime algorithms and AI to ensure no one is smoking or drinking sugary beverages, then more power to them; as long as they keep their technocratic dystopia confined to NYC, they can do whatever they want.

  • Cat tongue (Score:4, Insightful)

    by Impy the Impiuos Imp ( 442658 ) on Thursday May 17, 2018 @02:08PM (#56628590) Journal

    "Well, the legislature, governor, and city council have to go..."

    "Yes. Wait, we haven't even started yet!"

  • by TheZeitgeist ( 5083373 ) on Thursday May 17, 2018 @02:24PM (#56628654)
    ...Affirmative Algorithms.
  • by darkain ( 749283 ) on Thursday May 17, 2018 @02:33PM (#56628694) Homepage

    Put in a mandate that all government algorithms must be open sourced in an easily accessible fashion, and that all data passed through them must also be easily accessed. This will enable 3rd parties, ANY 3rd party, not just contracted "companies" (usually in the pockets of the people making decisions), to audit the code and data for flaws.

    One of the largest issues I've seen in the past with these systems is that they falsely assume correlation = causation. And quite often, the cause and effect are backwards, too. One example I always liked was that "overhead high voltage power lines caused health issues for those that live near them" - when the data was updated with more inputs, it was discovered that the cause was entirely different. High voltage power lines are unsightly, pushing housing values around them below the average for the community. Poorer families were buying/renting those homes, and poorer families are more likely to have health issues due to financial constraints. In the end, the correlation wasn't causation; both items shared a common root cause.
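
    To make the power-line example concrete, here is a toy simulation with entirely invented numbers: health depends only on income, proximity to the lines also depends on income, and the raw power-line/health gap looks real until you compare households within a narrow income band.

    ```python
    # Toy model: income is the shared root cause, so the raw
    # power-line/health correlation is spurious. Numbers are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    income = rng.normal(50, 15, n)  # household income, $k/year
    # Poorer households are more likely to live near unsightly lines.
    near_lines = rng.random(n) < 1 / (1 + np.exp((income - 40) / 5))
    # Health depends on income only -- the lines do nothing here.
    health = 60 + 0.5 * income + rng.normal(0, 10, n)

    # Raw comparison: living near lines *looks* bad for health...
    print(health[near_lines].mean() - health[~near_lines].mean())  # clearly negative

    # ...but holding income roughly constant, the gap all but vanishes.
    band = (income > 49) & (income < 51)
    print(health[band & near_lines].mean() - health[band & ~near_lines].mean())  # near zero
    ```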

    • Re: (Score:1, Insightful)

      by CajunArson ( 465943 )

      "Put in a mandate that all government algorithms most be open sourced in an easily accessible fashion, and all data passed through them must also be easily accessed. "

      Congratulations, you just earned yourself a lifetime ban in all conversations discussing climate change models and the input data used by the climate change models.

    • "and all data passed through them must also be easily accessed"

      May not be possible, as it would release personally identifiable information that should otherwise be kept confidential.
    • Put in a mandate that all government algorithms must be open sourced in an easily accessible fashion, and that all data passed through them must also be easily accessed. This will enable 3rd parties, ANY 3rd party, not just contracted "companies" (usually in the pockets of the people making decisions), to audit the code and data for flaws.

      This will prevent the use of modern "AI" technology, as the decision logic learned by layered neural networks is not reasonably comprehensible to humans. Although this isn't a bad price to pay for rooting out biased black boxes.

      Also, making all the data passed through the algorithms public could conflict with legitimate privacy needs, such as health records.
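
      That said, a black box can be audited from the outside without publishing weights or raw records: compare outcome rates across groups using only the model's decisions. A minimal sketch, with the group labels and the disparity threshold assumed purely for illustration:

      ```python
      # Outside-the-box audit: compare approval rates across groups from
      # (group, decision) pairs alone -- no model internals, no raw records.
      from collections import Counter

      def approval_rates(decisions):
          """decisions: iterable of (group_label, approved: bool) pairs."""
          approved, total = Counter(), Counter()
          for group, ok in decisions:
              total[group] += 1
              approved[group] += int(ok)
          return {g: approved[g] / total[g] for g in total}

      rates = approval_rates([("A", True), ("A", False), ("B", True), ("B", True)])
      print(rates)  # {'A': 0.5, 'B': 1.0}
      # Flag for review if the max/min ratio exceeds an agreed threshold.
      print(max(rates.values()) / min(rates.values()))  # 2.0
      ```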

  • A plan to have police respond to an area that has a high crime rate around 3 pm seems reasonable, and in a way race is not involved. But the effect of such a plan is very racist in its consequences. Poor people do drug deals and prostitution on the sidewalks, and they are easily caught, so the area gets a high crime rate designation. Rich people frequently commit far greater crimes, but those take place behind mansion walls and usually go unnoticed. On top of that, the cops know that a bust on a rich p
    • by Anonymous Coward

      So what is the solution, then, to fix the legal system?
      Just let the burglar go and drop the charges?

  • The algorithms deal only in facts and data. Even if they WERE presenting biased results, there is not one human alive, not me, not you, not the almighty assholes in government, nobody, that can assess that bias fairly without throwing their own biases on top of the algorithm for comparison.

    You need an algorithm to check the algorithms, but then you're right back to wondering where the bias comes from, the algorithm, or the creator of the algorithm.

    And if this is literally just another chance to shove polit

  • "unbiased" (Score:5, Insightful)

    by supernova87a ( 532540 ) <kepler1@NoSpaM.hotmail.com> on Thursday May 17, 2018 @03:22PM (#56628940)
    One problem I have with the political use of the term "bias" versus the scientific use is that when policymakers (under public pressure) find that otherwise unbiased selectors or factors produce groups or divisions of the population that are "biased", they feel that the algorithms are "wrong" or need to be fixed.

    I happen to also be quite skeptical of the legitimacy of disparate impact [wikipedia.org] policy, which holds that even if a policy is facially neutral (not imposing rules or criteria tied to protected-class attributes), it may be considered discriminatory or "biased" if it affects one group more than another.

    While good in theory, I have real trouble with how "unbiased" principles are applied in practice.
  • by radarskiy ( 2874255 ) on Thursday May 17, 2018 @03:58PM (#56629136)

    People decide what variables to put in or leave out, and what data to test on. Using an "algorithm" doesn't eliminate subjectivity; it just sets it at a distance where it's out of people's minds, even though it's still there.

  • is a complete joke as a public servant. Is there any limit to the ways he and his administration can come up with to waste tax dollars?

    Just my 2 cents ;)
  • Based on what I've seen in recent years, a "racist" or "biased" output will be deemed to be any that results in a protected group being disproportionately represented, either too much or too little depending on which outcome hurts it. No matter that, for example, a higher-than-normal rejection rate for loans for a group doesn't result in a lower-than-normal default rate on those loans (meaning the rejections were reasonable, not biased), it will be, definitionally, evidence of discrimination.
    • "a higher-than-normal rejection rate for loans for a group doesn't result in a lower-than-normal default rate on those loans"

      If the higher-than-normal rejection rate is correct, it SHOULD lead to a "normal" default rate on loans for that group -- neither significantly higher (too lenient) nor significantly lower (too stringent). Managing risk is the whole point, and the risk should sit at the same base level for whatever group you look at. (A sketch of this check follows below.)

      For those that still don't get it, and say "oh well, the lender should just accept the higher losses for those groups" as the price for Looking Decent in the Eyes of the Public, think again. They're n
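
      A sketch of the calibration check described above, with hypothetical field names: among approved loans only, compute each group's default rate; if rejections were driven by risk alone, those rates should come out roughly level.

      ```python
      # Calibration check: among *approved* loans, each group's default
      # rate should sit near the overall rate if screening tracked risk.
      # Field names are hypothetical.
      def default_rates_by_group(loans):
          """loans: iterable of dicts with 'group', 'approved', 'defaulted'."""
          stats = {}  # group -> (approved_count, default_count)
          for loan in loans:
              if not loan["approved"]:
                  continue  # only funded loans can default
              n, d = stats.get(loan["group"], (0, 0))
              stats[loan["group"]] = (n + 1, d + int(loan["defaulted"]))
          return {g: d / n for g, (n, d) in stats.items()}

      sample = [
          {"group": "A", "approved": True, "defaulted": True},
          {"group": "A", "approved": True, "defaulted": False},
          {"group": "B", "approved": True, "defaulted": False},
          {"group": "B", "approved": False, "defaulted": False},
      ]
      print(default_rates_by_group(sample))  # {'A': 0.5, 'B': 0.0}
      # A markedly lower default rate means that group was screened too
      # stringently; a markedly higher one, too leniently.
      ```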

  • by AHuxley ( 892839 ) on Thursday May 17, 2018 @05:53PM (#56629554) Journal
    Is the person poor? Generational poverty? No job? Living in an area with an above average crime rate?
    Who is committing the crime? What crimes? What policing method has best reduced that crime rate around the USA?
    Make some maps of the city and fund some new police in the areas with crime.
    Add some CCTV. Bring in the next gen IMSI-catchers to see who is talking with criminals. A better voice print system?
    Start tracking crime down to a street and building level in real time.
    Ask the FBI to provide some statistics on who is doing what crime, when and where. Find the predictable patterns for the police to work with.
    Today's tech can map most of the problem areas and detect most of the repeat criminals.
    The only bias is not funding the police to do their jobs. Stop looking for "bias" and start funding police work.
    • by AmiMoJo ( 196126 )

      You missed it.

      The chief of police has limited resources. He looks at crime stats and directs more officers to areas with more crime. Since there are more officers there, they detect more crime and make more arrests, pushing the stats for that area up even further.

      In other words, without understanding what is happening, these simplistic assumptions are misleading. (A toy model of this loop is sketched below.)
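
      A toy model of that loop, with invented numbers: two areas have identical true crime, but patrols are reallocated by detected crime each period, so the recorded stats simply mirror wherever the officers happen to be and the initial imbalance sustains itself. (Add thresholds or cumulative records and the gap can actively grow.)

      ```python
      # Deployment feedback loop: two areas with the SAME true crime rate;
      # patrols are reallocated by *detected* crime. Numbers are invented.
      true_crime = [100.0, 100.0]  # actual incidents per period, identical
      officers = [60.0, 40.0]      # arbitrary uneven starting allocation

      for period in range(20):
          # Detection in each area scales with police presence there.
          detected = [c * o / 100 for c, o in zip(true_crime, officers)]
          # Next period's deployment follows this period's recorded stats.
          officers = [100 * d / sum(detected) for d in detected]

      print(detected)  # [60.0, 40.0] -- the stats mirror the deployment...
      print(officers)  # [60.0, 40.0] -- ...not the equal underlying crime
      ```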

      • by AHuxley ( 892839 )
        Crime is not a simplistic assumption.
        Some areas have low crime. A car is safe during the day and at night. A home is not getting its door or window opened and property is not getting removed.
        A person can wait for a bus, drive their car without having to worry about robbery.
        Shop for food without having to consider the risks.
        Police can map that crime rate. Move in and track the bad people. Entire areas can be policed with better methods and the crime rate can be reduced.
        Get the voice prints so
