Microsoft's Kate Crawford: 'AI Is Neither Artificial Nor Intelligent' (theguardian.com) 173

An anonymous reader shares an excerpt from an interview The Guardian conducted with Microsoft's Kate Crawford. "Kate Crawford studies the social and political implications of artificial intelligence," writes Zoe Corbyn via The Guardian. "She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, looks at what it takes to make AI and what's at stake as it reshapes our world." Here's an excerpt from the interview: What should people know about how AI products are made?
We aren't used to thinking about these systems in terms of the environmental costs. But saying, "Hey, Alexa, order me some toilet rolls," invokes into being this chain of extraction, which goes all around the planet... We've got a long way to go before this is green technology. Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.

Problems of bias have been well documented in AI technology. Can more data solve that?
Bias is too narrow a term for the sorts of problems we're talking about. Time and again, we see these systems producing errors -- women offered less credit by credit-worthiness algorithms, black faces mislabelled -- and the response has been: "We just need more data." But I've tried to look at these deeper logics of classification and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world. Training datasets used for machine learning software that casually categorize people into just one of two genders; that label people according to their skin color into one of five racial categories, and which attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past and unfortunately the politics of classification has become baked into the substrates of AI.

What do you mean when you say we need to focus less on the ethics of AI and more on power?
Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? What we see time and again, from facial recognition to tracking and surveillance in workplaces, is these systems are empowering already powerful institutions -- corporations, militaries and police.

What's needed to make things better?
Much stronger regulatory regimes and greater rigour and responsibility around how training datasets are constructed. We also need different voices in these debates -- including people who are seeing and living with the downsides of these systems. And we need a renewed politics of refusal that challenges the narrative that just because a technology can be built it should be deployed.

  • Exactly (Score:4, Informative)

    by paravis ( 4999401 ) on Monday June 07, 2021 @07:26PM (#61464198)
    I'm getting real sick of hearing about "AI". That is the truth: it is not artificial, nor is it intelligent. It has to be guided and trained by an actual human being to be worth anything but trash.
    • Re:Exactly (Score:5, Insightful)

      by OrangeTide ( 124937 ) on Monday June 07, 2021 @07:33PM (#61464220) Homepage Journal

      Whereas humans don't need any training at all. 100% natural intelligence.

      • Re:Exactly (Score:5, Interesting)

        by ShanghaiBill ( 739463 ) on Monday June 07, 2021 @07:43PM (#61464258)

        Whereas humans don't need any training at all. 100% natural intelligence.

        I am an Aspie and sometimes have difficulty recognizing sarcasm.

        But just in case you are serious: Most humans require about two decades of education and training before they can do useful work.

        A human with no training is not going to beat Alpha Zero.

        • Re: (Score:3, Insightful)

          by gweihir ( 88907 )

          Probably was sarcastic. However, humans do not really need training to have intelligence at their disposal that far surpasses anything machines can do. Sure, that is not enough to do useful work in a modern society, but that "training" does not create intelligence. It merely provides additional data for the intelligence to work on and it also allows automation of a lot of things. The last step is critically needed, because thought processes are too slow for many things.

          • Re: (Score:3, Interesting)

            by ceoyoyo ( 59147 )

            Really? In my experience they're capable of some random movement for the first few months, then they start figuring out a bit of crude coordination. Along the way they develop some image recognition. After a year or two most of them start using basic language.

            Where do you find the ones that are intelligent straight away?

            • He is just defining "intelligence" to be the innate ability to learn all that, rather than the ability to do that. E.g. if after 2 years of age, a child cannot walk a bit, in his definition that child's past self would be retrospectively called unintelligent.

              And the term "intelligence" is vague enough for either of your definitions to make sense. The fields of science that use this term heavily modify its meaning from the general understanding, for the purposes of their own field of study.

              • by ceoyoyo ( 59147 )

                If you define intelligence as the ability to learn, which is a pretty good definition, then AI systems are pretty straightforwardly intelligent. If you trained one to walk for two years and it still couldn't do it, you would have screwed up badly.

                This discussion is just another in a centuries-long line of special pleading for human intelligence as a unique phenomenon. Every time someone demonstrates that an animal, an equation, or a computer can do something previously held up as being a defining character

                • Yes, almost agreed, I was pointing out the different definitions used by post vs counter-post.

                  But you say "it is not useful". What is "it" ? You mean defining intelligence with precision is not useful ? Or the moving goalpost of intelligence for AI is not useful ?

            • Newborns have pretty weak brains; the brain continues to grow and become more capable throughout childhood. That would happen to a good degree even without "training".

              That's different from AI which is as "intelligent" as ever (in terms of memory size, processing speed, etc) from the beginning but needs training to do useful tasks.

            • Re:Exactly (Score:4, Funny)

              by dotancohen ( 1015143 ) on Tuesday June 08, 2021 @09:18AM (#61465666) Homepage

              Where do you find the ones that are intelligent straight away?

              Alia Atreides, maybe?

        • "Most humans require about two decades of education and training before they can do useful work" --> this entirely depends on what you classify as useful work. Take the Amish. Their kids can be out of school as soon as 13 years old (8th grade) and most children start around 5 years old (so 8 years of education). They train on the job, but typically are working fairly well on their own by 15-16 and that's more or less due to strength needing to develop (generally puberty is a big thing here) for the men a
        • Most humans require about two decades of education and training before they can do useful work.

          Only because child labor laws stopped us from putting children in a coal mine or on a spinning jenny.

        • by rtb61 ( 674572 )

          Artificial intelligence is simply software capable of learning: how much it can learn, what it can learn, how it learns, how complex its learning can be, based upon a subset of learning algorithms, each learning different things and compiling the result into a more complex memory pattern. The term becomes abused when the majority, less knowledgeable and buried in empty beliefs, get hold of it, and it becomes grossly distorted by those far greedier than they are intelligent.

          The creepiest statement by far "Problems of

        • by dvice ( 6309704 )

          > A human with no training is not going to beat Alpha Zero.

          The best human, with years of training since childhood, did not beat AlphaGo. Alpha Zero is the one that trained without human help, and because of that it beat AlphaGo easily. And Alpha Zero is by now an old version.

          This is the new version "MuZero: Mastering Go, chess, shogi and Atari without rules"
          https://deepmind.com/blog/arti... [deepmind.com]

          Good luck beating that, humans.

      • by Luthair ( 847766 )
        Depends on what you mean by training - humans (and some animals) are very good at replicating a task they've observed; moreover, humans have figured out how to do an awful lot over the past few thousand years.
    • by XXongo ( 3986865 )
      I'll agree it isn't intelligent by our meaning of the word, but I don't understand the point "it isn't artificial".

      What does that mean? I would have thought computers are artificial.

      • by Luthair ( 847766 )
        It is actually part of the excerpt in the summary - a lot of companies claim to have something driven by AI but in reality there is a crowdsourcing backend (or just outsourcing) using cheap human labour.
      • by gweihir ( 88907 )

        I think she means services sold as "AI" driven are basically almost never fully artificial. Of course there are artificial steps in there.

        • by mark-t ( 151149 )
          Really? What non-artificial components do you know of that are in what we might call AI?
    • It's made by humans. That's LITERALLY the definition of something being artificial.

      Also, do you really think humans don't need to be guided and trained?
      • It's made by humans. That's LITERALLY the definition of something being artificial.

        Babies are made by humans ... Just sayin'.

        • Oh, you're always just saying.
        • by mark-t ( 151149 )
          Babies are made by a natural process* of which humans are a part, but humans do not "make" babies, per se. They are made by a process that itself was not made by us either, so they are not artificial.

          *In general, Clones, etc notwithstanding.

          • by q_e_t ( 5104099 )
            So if I deliberately put a rock in a stream that leads to diverting the stream, knowing it will do so, or has a high probability of doing so, it's natural?
            • by q_e_t ( 5104099 )
              Doing so by the response of the stream to the obstruction creating a new channel, I should have added. And it can be a big rock, of course.
        • Babies are made by eggs and sperm, which humans deliver by fucking.

          We all know what you're trying to say, but no.

      • Yeah, I was reading the statement "AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous." and I don't even know what that means. Everything made by humans is made from natural or artificial resources. This distinction appears to be being made to create a pithy soundbite, not to express some deeper truth.

        The way it's written, it sounds like all AI approaches are Mechanical Turks, but there's a clear dif

    • by gweihir ( 88907 )

      Well, it typically has some artificial parts, but yes, it is not intelligent. There are some people that can somehow not deal with that and they begin redefining "intelligence" as things that are really just mechanical steps executed without any understanding. Because of these morons, you sometimes have to use AGI (Artificial General Intelligence) to make a statement, but intelligence and general intelligence are really just the same thing.

      • There are also those who understand the word "artificial" to mean "fake." So rather than redefining "Intelligent" they recognize that the term "artificial intelligence" is semantically equivalent to saying "not-actual intelligence," which is an accurate description of our modern computers (they aren't actually intelligent).

      • Re:Exactly (Score:5, Insightful)

        by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Monday June 07, 2021 @10:13PM (#61464630) Journal

        How do we measure the intelligence of a thing, particularly a non-human thing? Say a lab rat, or a chimpanzee, or a bird?

        And when a computer can perform tasks without having been expressly programmed to achieve that particular task, but instead has simply been programmed to do nothing more than find the most optimal path to whatever goal it is presented with, and the resulting behavior is outwardly indistinguishable from that of a creature that we would consider intelligent, how is the computer software that is achieving that goal not exhibiting intelligence?

        • How do we measure the intelligence of a thing, particularly a non-human thing? Say a lab rat, or a chimpanzee, or a bird?

          One method I learned in Psychology 101: Take a wire fence. Put an animal on one side, nice food just opposite on the other side of the fence, and an opening in the fence five meters away. A dog is clever enough to run away from the food, through the opening, and then back to the food. A chicken can't do that, it tries to get through the fence on the direct way to the food and fails.

          On the other hand, my dog is definitely totally incapable of identifying cars. She would run into traffic without hesitation

        • by physick ( 146658 )

          I would suggest that if the computer can "do nothing more than find the most optimal path" it is not intelligent as it is just finding an extremum. Computing gradient descent doesn't count as intelligent to me. A good test would be to change the problem domain and see if it can find the new optimal path. If it cannot recognise when it needs to change its algorithm, it is not intelligent.
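          The "just finding an extremum" objection can be made concrete with a toy sketch (plain one-variable gradient descent, not any particular AI system): the update rule walks downhill mechanically, with no notion of what the function means.

```python
# Plain gradient descent on f(x) = (x - 3)^2.  The loop finds the
# extremum mechanically; nothing here "understands" the problem.
def grad_descent(grad, x, lr=0.1, steps=100):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f'(x) = 2 * (x - 3); the minimum of f is at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x=0.0)
print(round(x_min, 3))  # converges to 3.0
```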

      • They are not morons. You don't seem to know that in the field of AI, "intelligence" does not mean what it means colloquially. It has a very specific, scientific meaning, far removed from (but inspired by) the meaning the general public ascribes to the word "intelligence".

        The field of AI has made lots of progress in the parts of AI other than "AGI" too. That part is quite a genuine, well defined, productive field of study.

    • by hey! ( 33014 )

      It's like spinning straw into gold: algorithms can chew through piles of data that aren't worth paying humans to look at, then generate useful results. Those results don't have to be *great*. They don't even have to be provably *right*. They only have to be useful. So maybe it's more like spinning straw into brass.

      Whether the results are good enough depends on who you ask. If you ask the developer, he'll be delighted with the job the software does. Same with the manager who decided to buy the applicat

      • by q_e_t ( 5104099 )
        Indeed. And it depends on cost, too. If an old process cost $200 and was 90% effective, then one costing $1 that is 70% effective might be fine. Even more so if it's a filter. For example, you have a QC process and err on the side of false positives. So if there are 10 in 1000 true QC failures, your $200 process might pick out 11. Your $1 process might pick out 13. If the cost of each item is $1, then with the $1 process you are done as the total cost is $1 plus $3 false positives, and it's less than the previ
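        The arithmetic in the comment above can be written out in a few lines (numbers taken from the comment; each false positive is treated as a discarded $1 item):

```python
# QC example: 1000 items, 10 true failures, each discarded item costs $1.
# A flagged good item is a false positive you throw away.
def total_cost(process_cost, flagged, true_failures, item_cost=1):
    false_positives = flagged - true_failures
    return process_cost + false_positives * item_cost

old = total_cost(process_cost=200, flagged=11, true_failures=10)
new = total_cost(process_cost=1, flagged=13, true_failures=10)
print(old, new)  # 201 4: the cheap, less accurate filter wins easily
```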
    • by mark-t ( 151149 )
      How is it not artificial?

      Think for a second before you just blindly repeat somebody.

      • It has to be guided and trained by an actual human being to be worth anything but trash.

        How is it not artificial?
        Think for a second before you just blindly repeat somebody.

        Well since you're snarky, the same way an Artificial Explorer is not artificial or an explorer, it's a robot on Mars. DUH.

        TFA context for the reading challenged.
        "Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems a

        • by mark-t ( 151149 )

          How is a robot on Mars or anywhere else for that matter, not artificial?

          What natural process produced it?

          Obviously its components come from natural resources, but robots such as the ones on Mars are artificially constructed by human beings, therefore they are artificial.

          What we call A.I. is man-made, therefore it too is artificial.

          As for whether or not it's intelligent, you need to come up with an unambiguous definition of intelligence. Unfortunately for A.I., the goalposts keep moving on that one.

    • Yep, people with a clue still call it "Machine Learning". I like to call them "purpose-optimized semi-arbitrary algorithms" (because that's what they are) but that's a bit of a mouthful.
  • by technoviking1 ( 6415930 ) on Monday June 07, 2021 @07:28PM (#61464204)
    Calling machine learning "AI" is the same as calling those handlebarless Segways "hoverboards" or head-mounted screens "virtual reality". It's all just marketing speak.
    • by ceoyoyo ( 59147 )

      No, it's literally the definition, from the people who coined the term.

      Some fiction authors got creative with the idea and a bunch of nerds read their books and are now bitter that they haven't come true yet, but that's about as far as the abuse of the term goes.

  • Finally! (Score:4, Interesting)

    by freeze128 ( 544774 ) on Monday June 07, 2021 @07:38PM (#61464232)
    Finally someone who knows just how bad AI really is, and how it's not going to develop SkyNet, build Terminators, and send them back in time to eradicate the resistance.
    • by PPH ( 736903 )

      eradicate the resistance

      That's Archibald Tuttle. Not Buttle.

  • But saying, "Hey, Alexa, order me some toilet rolls," ...

    Or two tons of creamed corn [xkcd.com] ...

  • Garbage Story (Score:5, Insightful)

    by inhuman_4 ( 1294516 ) on Monday June 07, 2021 @07:55PM (#61464290)

    I don't understand why the editors keep choosing these stupid stories. This interviewee is not an AI researcher. She "studies the social and political implications of artificial intelligence", which is the polite way of saying she's another social studies quack trying to jump on the AI bandwagon. And does the interviewer talk about new developments in this fast moving field and the exciting new possibilities it's opening up? No, they talk about racism and sexism, because of course they do. They can't talk about anything technical because neither the interviewer nor the interviewee actually knows what they are talking about.

    Which is how we end up with brain dead quotes like "AI Is Neither Artificial Nor Intelligent". Really? You don't think AI, a piece of software, is artificial?

    • Even the summary should have clued people in when mentioning "chains of extraction".

      • by PPH ( 736903 )

        "chains of extraction"

        Yeah. Everyone knows that the Internet is a series of tubes.

      • Even the summary should have clued people in when mentioning "chains of extraction".

        "Training datasets used for machine learning software that casually categorize people into just one of two genders" was a big clue that we're not dealing with someone who lives in reality.

    • by DThorne ( 21879 )
      I think the entire point is she's not in the field. People in the field are too invested in it (literally) to give trustworthy observations, and your dismissal of her as a quack appears to be based on the assumption that you're not allowed to call the use of the term AI bullshit unless you code. AI is another hucksterism like "information superhighway" was in the mainstream a couple of decades ago. It's a misleading term used by people to sell things.
  • If your business is using AI and over-charging potentially lucrative customers, your profits will suffer. The company not discriminating will be the winner and all will be right with the world.

    I don't understand all the fretting. The only caveat is that the government should ensure real competition and a level playing field. Don't fix the symptoms.

  • At least on the "intelligent" part. "Artificial" seems to be pretty descriptive with regards to a piece of code running on some computer hardware. Of course, the other parts of a process like ordering that toilet roll may vary.

  • by argStyopa ( 232550 ) on Monday June 07, 2021 @08:22PM (#61464372) Journal

    I *agree* with her assertion that it's neither Artificial nor Intelligent. Full marks.

    But as I read the article (IKR?) it's clear that what she's objecting to is that it doesn't give her the woke results she wants it to.

    What happens when you manage to program a perfectly unbiased system and it tells you for example that Asians are intrinsically smarter than everyone else?

    What if it says white people are dumber than everyone else?
    What if it says black people are dumber than everyone else?
    Were YOUR reactions to those two sentences different? An objective computer would parse them exactly the same.

    • What happens when you manage to program a perfectly unbiased system and it tells you for example that Asians are intrinsically smarter than everyone else?

      It means you overestimated how perfectly unbiased your system is. Anyone who tells you they've perfected the statistics of their data analysis is lying.

      Take your least socially controversial subject - I guarantee you there will still be debates around the statistics: the results as well as how the analysis methods are being used. How many hours have Slashdotters wasted bemoaning the accuracy or usefulness of "top 10 programming languages" lists? People can't get that right, yet you want to believe that som

    • by JaredOfEuropa ( 526365 ) on Tuesday June 08, 2021 @01:19AM (#61464910) Journal
      The thing is: if your perfectly objective system figured out from data that people with skin colour X are smarter than the other colours, and takes that fact into account when making assumptions about individuals, you still end up with a biased system. Bias means exactly that: making assumptions about individuals based on some traits that may have a strong statistical correlation with the characteristic you’re interested in, but without any causation. Skin colour X doesn’t make people smarter, nor do their smarts cause them (and them alone) to have that skin colour. The problem with AI is not (just) biased data or not being objective; the problem is that AI only deals with correlation, without being able to figure out if there’s a relevant cause underpinning it.

      She makes a few good points, but I agree that she makes them very poorly, and the fact that she tries to shoehorn her woke ideas into every answer doesn’t help. And she offers no new insights in this interview: all of these concerns about AI have been raised years ago. I can only hope that her book is more insightful than what she tells us in this interview.
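      The correlation-without-causation point lends itself to a toy simulation (the numbers are invented: the outcome is caused only by an unobserved "skill" variable, yet a group trait that merely correlates with skill ends up statistically predictive of the outcome):

```python
import random
import statistics

random.seed(0)
rows = []
for _ in range(10_000):
    group = random.randint(0, 1)            # trait: causes nothing below
    skill = random.gauss(group * 0.5, 1.0)  # merely correlated with trait
    outcome = skill + random.gauss(0, 0.1)  # caused by skill alone
    rows.append((group, outcome))

# The group gap in average outcome is real, but using it to judge an
# individual conflates correlation with causation.
gap = (statistics.mean(o for g, o in rows if g == 1)
       - statistics.mean(o for g, o in rows if g == 0))
print(round(gap, 2))  # close to 0.5, purely through correlation
```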
      • And she offers no new insights in this interview: all of these concerns about AI have been raised years ago.

        I think there are some concerns that need to be raised loudly at least once a year. New or not. I think the USA decided in 1865 that white and black people have the same rights, and 155 years later we raise this matter again and again.

        Now the point that _correlation_ must be removed from any decision making is really important. Even if a correlation was objectively there, it would be totally unfair to punish people for bad actions of others in a group, or reward them for good actions of others in a group

        • I think the USA decided in 1865 that white and black people have the same rights

          When did blacks get the right to vote in the USA?

  • Now that sounds like a funny new category for the Darwin Awards.
    If humans fail miserably, AI would fail at 10 times the speed and scale of humans.
    I can only think of Microsoft's chat bot turning into a xenophobic racist within a mere day of on-the-job training.

  • That there's someone truthful at a company like Microsoft.
  • She makes some good points. But her solution has an issue:

    What do you mean when you say we need to focus less on the ethics of AI and more on power? ... does it put power in the hands of the already powerful? ... these systems are empowering already powerful institutions -- corporations, militaries and police.

    What's needed to make things better?
    Much stronger regulatory regimes ...

    So the solution to AI being primarily in the hands of (and abused by) the powerful is to appeal to the most powerful to pass,

    • The "do not call list" does a great job at not blocking phone spam (while simultaneously killing most state laws that had let the phone-spammed sue the phone spammers)

      At some point in the 1980s I read a short story in a computer magazine, about a spam call going to someone's phone, and a duel starting between the spam caller's AI and the phone's AI, with the spam AI trying to beat the phone AI into submission and let the call through to the human. (Both were operating at a level that no AI today could do.)

  • Sounds like the biggest aspiration of Microsoft is to automate wokeness rather than open new horizons for humanity. Even then they are insulting everyone else by implying that only straight white men can code and save everyone else from bigotry. If there is such an underutilized market for giving credit to women, why not a female-owned bank to undercut Wells Fargo for billions of potential customers? If camera apps suck for beautiful dark skin selfies, why not make it a niche for an independent developer t

  • by Budenny ( 888916 ) on Tuesday June 08, 2021 @03:34AM (#61465098)

    There are valid questions about AI but the ones she raises are not among them.

    She is confusing what we use AI for with the technology itself. The clearest illustration is her use of the case of ordering a toilet roll through Alexa. The effects of ordering it are the same whether it's done by a phone call to a person or through an automated shopping system or through surface mail. They have nothing to do, in either case, with the technology of placing the order.

    If you object to the consequences of the wholesale use of toilet rolls, what you have to change is how we behave, the fact that we use them in the quantities we do, or maybe, if it's the shipping that bothers you, change where and of what they are made. There is zero point in blaming the AI that is serving as one of many ways of buying them.

    A similar point can be made about credit scores. The point is not that credit algorithms deliberately give lower credit scores to some groups. The lower scores result from a series of individual decisions on individual limits, given those individuals' risk factors.

    The question is whether the algorithms are successful. The test here is default rates. Is a given method of assessing credit delivering acceptable default rates? Is it doing better than an available alternative? If the result is lower average scores for some groups, is this simply a consequence of minimizing defaults by the individuals being processed?

    If so, the fact that some groups come up with lower average credit limits is simply an outcome of the fact that they have on average higher default rates at the same credit limits as other groups. Average color, race, gender, age or geographical disparities result from correct and completely neutral credit decisions. This isn't discrimination or even due to AI. It's simply a consequence of rational allocation of credit limits on a case by case basis. You are always going to find some groups on average higher and lower; it's because they have more or fewer high- or low-risk individuals.

    As soon as you start thinking seriously about this you realize the problem is not AI. The problem, if there is one, is how to process credit applications. You could argue that less restrictive credit limits will not increase default rates. Fine, try it and see. Don't blame AI if it fails, don't credit AI if it succeeds. All AI is doing is implementing the policies; it has nothing to do with whether these are correct and fit for purpose.

    Then we encounter sex. The idea that there are only two sexes is thought to be wrong, and AI is accused of holding it. Well, it's probably true that current AI systems are set up to categorize individuals into male and female, and that is indeed binary. And some people think there are, in humans, more than two sexes. Others think we should not be distinguishing between male and female at all. Some think we should be using the concept of gender instead of that of sex.

    Is this an issue about AI? Certainly not. Once you decide how many sexes there are, and what the criteria are for deciding which an individual is, you can try and set up an AI system to sort cases into those buckets. But if you think the buckets are the wrong buckets in the first place, don't criticize AI. It's just implementing a policy which has been decided independently of it. Just as if you used a room full of humans sorting the cases into the buckets. If there are 2 buckets when there should be 4, it's not down to the fact you are using people, these particular people, or an algorithm. It's down to your decision about something substantive: how many sexes there are for people to be sorted into.

    Also, don't blame the decision to sort by sex on the method being used to sort. It's completely independent of it.

    The lady is deeply confused, and the fact that she is in a senior position at a major tech company while in the grip of such elementary confusions is perfectly extraordinary. What on earth is MS thinking of to put someone with this level of confusion in this position?
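    The credit-limit argument a few paragraphs up can be illustrated with a toy simulation (the risk distributions are invented): one per-individual rule, applied identically to everyone, still produces different group averages whenever the groups have different mixes of individual risk.

```python
import random
import statistics

random.seed(2)

def limit(risk):
    # One rule for everyone: the same individual risk always gets the
    # same credit limit, regardless of group.
    return max(0.0, 10_000 * (1 - risk))

# Two groups differing only in their mix of individual risk levels
# (Beta(2, 8) has mean risk 0.2; Beta(3, 7) has mean risk 0.3).
group_a = [limit(random.betavariate(2, 8)) for _ in range(5_000)]
group_b = [limit(random.betavariate(3, 7)) for _ in range(5_000)]

print(statistics.mean(group_a) > statistics.mean(group_b))  # True
```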

    • You are right about the toilet paper confusion.

      About credit scores, there are risks. If a credit score is only affected by the individual's own actions, there is nothing unfair about it. But AI systems, especially supervised learning systems, might base their classification on any aspect of an individual's data - this includes race, maybe deduced from name, address, college, etc. The operators of that supervised learning system need to carefully separate out pieces of data from which race (or other aspects of on

  • I don't have time to RTFA, but the summary makes her look like a person who needs to attend Thinking 101 - desperately:

    We aren't used to thinking about these systems in terms of the environmental costs. But saying, "Hey, Alexa, order me some toilet rolls," invokes into being this chain of extraction, which goes all around the planet... We've got a long way to go before this is green technology. Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.

    Right. Because ordering things through Alexa is somehow different from doing it online. She confuses so many things just in the first two sentences, it's amazing.

    First, she considers Alexa an AI all-the-way-through, when it's more likely that there's some machine learning in the voice recognition and the search, but a large part of the "order a product for the Amazon account I have registe

  • by gnasher719 ( 869701 ) on Tuesday June 08, 2021 @04:44AM (#61465176)
    There is some software in development that detects Covid from x-rays.

    It turns out their machine learning algorithm quickly found that older patients have Covid more often, so it looks at the age instead of the X-ray image... Can we call this Artificial Stupidity instead?

    By the way, one of the first attempted AI applications tried to distinguish between Russian and American tanks in photos. They had a 100% success rate. Then someone pointed out that the Russian tank photos had all been taken in cloudy weather, and the American tank photos in sunshine. So the AI just looked at the brightness of the picture. Dark picture = Russian tank. Bright picture = American tank.
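
    Both anecdotes are the same failure mode: the label correlates perfectly with a spurious feature, so a naive learner latches onto it. A toy sketch in Python (hypothetical pixel data; "training" reduced to picking a brightness threshold):

```python
# Shortcut learning in miniature: every "russian" training photo is dark
# and every "american" one is bright, so the best single feature on the
# training set is overall brightness -- weather, not tanks.

def mean_brightness(image):
    """Average pixel value of a flat list of pixel intensities (0-255)."""
    return sum(image) / len(image)

# Fabricated training set reproducing the flaw in the anecdote above.
training = [
    ([30, 40, 35, 50], "russian"),      # cloudy-day photos: dark
    ([20, 25, 45, 30], "russian"),
    ([200, 210, 190, 220], "american"), # sunny-day photos: bright
    ([180, 240, 205, 195], "american"),
]

# "Training": split the classes at the midpoint between the brightest
# dark photo and the darkest bright photo.
dark = max(mean_brightness(img) for img, lbl in training if lbl == "russian")
bright = min(mean_brightness(img) for img, lbl in training if lbl == "american")
threshold = (dark + bright) / 2

def classify(image):
    return "american" if mean_brightness(image) > threshold else "russian"

# 100% accuracy on the training set...
assert all(classify(img) == lbl for img, lbl in training)

# ...but an American tank photographed on a cloudy day comes out wrong,
# because the model learned the weather, not the tank.
assert classify([40, 35, 50, 45]) == "russian"
```

    The COVID/age case is the same thing with age standing in for brightness: the shortcut is cheaper to learn than the actual signal, and it evaporates the moment the correlation breaks.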
  • AI is a category, not a threshold.
    • But categorization is bad! And she's an expert in communications, so she definitely knows what she's talking about.
  • by LenKagetsu ( 6196102 ) on Tuesday June 08, 2021 @06:39AM (#61465302)

    If an AI cannot discriminate based on race, it will discriminate based on every single aspect associated with race instead. No father figure in your life? +1 black. Live in Detroit? +1 black. Listen to rap, R&B, or hip-hop? +1 black. Prior location was a BLM protest? +1 black. Suffer severe keloids? +1 black. If statistics prove that black people are less likely to pay back loans, and the AI cannot use their race as a factor, the AI will say "This man has no father figure, lives in Detroit, listens to Immortal Technique, and a knife injury caused a horrific keloid on his face, these factors all contribute to not paying back their loans."

    In extreme cases this could have a chilling effect and economic implications. If an AI discriminates against blacks without targeting blacks (let's be honest here, having Dance with the Devil on your playlist is not a protected class), then people will not want to do things associated with black people and black culture. People might end up moving out of a black city (or refuse to move in), or no longer buy products that are enjoyed by black people such as various magazines, movies, and hair products.

    The only way to stop this is to have a total ban on AI (and computers in general) from judging or profiling humans.

    • What would that change? You'd just be taking a useful tool out of the mix, so humans would go back to doing all the discriminating, just with less objectivity. Judging and profiling are core components of how we perceive the world and the systems we create.
  • Training datasets used for machine learning software that casually categorize people into just one of two genders

    Translation: "It's a big problem that actual data doesn't value our mass hysteria the way it should!"

  • By her reasoning, nothing is artificial because everything is constructed from things that exist in nature by beings that exist in nature. Clearly, this is ridiculous. The software was written by human beings and runs on hardware built by human hands.

    That nonsensical claim is, by itself, more than enough reason to consider her work suspect. That she goes on to express ignorance regarding the concept of ethics, claiming they are necessary but insufficient because of what amounts to a series of... poorl
