Microsoft's Kate Crawford: 'AI Is Neither Artificial Nor Intelligent' (theguardian.com) 173
An anonymous reader shares an excerpt from an interview The Guardian conducted with Microsoft's Kate Crawford. "Kate Crawford studies the social and political implications of artificial intelligence," writes Zoe Corbyn via The Guardian. "She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, looks at what it takes to make AI and what's at stake as it reshapes our world." Here's an excerpt from the interview: What should people know about how AI products are made?
We aren't used to thinking about these systems in terms of the environmental costs. But saying, "Hey, Alexa, order me some toilet rolls," invokes into being this chain of extraction, which goes all around the planet... We've got a long way to go before this is green technology. Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.
Problems of bias have been well documented in AI technology. Can more data solve that?
Bias is too narrow a term for the sorts of problems we're talking about. Time and again, we see these systems producing errors -- women offered less credit by credit-worthiness algorithms, black faces mislabelled -- and the response has been: "We just need more data." But I've tried to look at these deeper logics of classification and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world. Training datasets used for machine learning software that casually categorize people into just one of two genders; that label people according to their skin color into one of five racial categories, and which attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past and unfortunately the politics of classification has become baked into the substrates of AI.
What do you mean when you say we need to focus less on the ethics of AI and more on power?
Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? What we see time and again, from facial recognition to tracking and surveillance in workplaces, is these systems are empowering already powerful institutions -- corporations, militaries and police.
What's needed to make things better?
Much stronger regulatory regimes and greater rigour and responsibility around how training datasets are constructed. We also need different voices in these debates -- including people who are seeing and living with the downsides of these systems. And we need a renewed politics of refusal that challenges the narrative that just because a technology can be built it should be deployed.
Exactly (Score:4, Informative)
Re:Exactly (Score:5, Insightful)
Whereas humans don't need any training at all. 100% natural intelligence.
Re:Exactly (Score:5, Interesting)
Whereas humans don't need any training at all. 100% natural intelligence.
I am an Aspie and sometimes have difficulty recognizing sarcasm.
But just in case you are serious: Most humans require about two decades of education and training before they can do useful work.
A human with no training is not going to beat Alpha Zero.
Re: (Score:3, Insightful)
Probably was sarcastic. However, humans do not really need training to have intelligence at their disposal that far surpasses anything machines can do. Sure, that is not enough to do useful work in a modern society, but that "training" does not create intelligence. It merely provides additional data for the intelligence to work on, and it also allows automation of a lot of things. The last step is critically needed, because thought processes are too slow for many things.
Re: (Score:3, Interesting)
Really? In my experience they're capable of some random movement for the first few months, then they start figuring out a bit of crude coordination. Along the way they develop some image recognition. After a year or two most of them start using basic language.
Where do you find the ones that are intelligent straight away?
Re: Exactly (Score:2)
He is just defining "intelligence" to be the innate ability to learn all that, rather than the ability to do that. E.g. if after 2 years of age, a child cannot walk a bit, in his definition that child's past self would be retrospectively called unintelligent.
And the term "intelligence" is vague enough for either of your definitions to make sense. The fields of science that use this term, heavily modify its meaning from the general understanding of this term, for the purpose of their field of study.
Re: (Score:2)
If you define intelligence as the ability to learn, which is a pretty good definition, then AI systems are pretty straightforwardly intelligent. If you trained one to walk for two years and it still couldn't do it, you would have screwed up badly.
This discussion is just another in a centuries long line of special pleading for human intelligence as a unique phenomenon. Every time someone demonstrates that an animal, an equation, or a computer can do something previously held up as being a defining character
Re: Exactly (Score:2)
Yes, almost agreed, I was pointing out the different definitions used by post vs counter-post.
But you say "it is not useful". What is "it" ? You mean defining intelligence with precision is not useful ? Or the moving goalpost of intelligence for AI is not useful ?
Re: (Score:2)
Newborns have pretty weak brains, the brain continues to grow and become more capable throughout childhood. That would happen to a good degree even without "training".
That's different from AI which is as "intelligent" as ever (in terms of memory size, processing speed, etc) from the beginning but needs training to do useful tasks.
Re: (Score:2)
A newborn has more neurons than you do. A lot more.
Re:Exactly (Score:4, Funny)
Where do you find the ones that are intelligent straight away?
Alia Atreides, maybe?
Re: (Score:2)
Re: Exactly (Score:2)
It may also depend on how they define "most people". Amish are sufficiently scarce to be excluded from "most people".
Re: (Score:2)
Most humans require about two decades of education and training before they can do useful work.
Only because child labor laws stopped us from putting children in a coal mine or on a spinning jenny.
Re: (Score:2)
Artificial intelligence is simply software capable of learning: how much it can learn, what it can learn, how it learns, how complex its learning can be, based upon a subset of learning algorithms each learning different things and compiling the result into a more complex memory pattern. The term becomes abused when the majority, less knowledgeable and buried in empty beliefs, get hold of it, and it becomes grossly distorted by those far greedier than they are intelligent.
The creepiest statement by far "Problems of
Re: (Score:2)
What is garbage? If you take plastic containers that are abandoned by people (commonly called "trash", or "rubbish" if you are British), sort them out, clean them, cut them into small pieces, melt them and create small pellets from the melted stuff, the end product is more valuable than steel: a raw material that is in high demand in industry.
So what does "garbage in, garbage out" mean? I don't get it.
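In the machine-learning sense, the phrase means that a model fitted to bad data faithfully reproduces the badness. A toy sketch (all data hypothetical, and the "model" deliberately trivial):

```python
# Toy illustration of "garbage in, garbage out": a model trained on
# mislabeled examples learns the mislabels, not the truth.

from collections import Counter

def majority_label(training_labels):
    """Trivial 'model': predict whatever label it saw most often in training."""
    return Counter(training_labels).most_common(1)[0][0]

# Suppose the true answer is "cat", but the input data is mostly mislabeled.
garbage_labels = ["dog", "dog", "dog", "cat"]
print(majority_label(garbage_labels))  # the model learns the garbage: "dog"
```

No amount of cleverness downstream recovers information the input never contained, which is the point the grandparent was making about training data.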
Re: (Score:2)
> A human with no training is not going to beat Alpha Zero.
The best human, with years of training since childhood, did not beat AlphaGo. AlphaZero trained without human help and, because of that, it beat AlphaGo easily. And AlphaZero is by now an old version.
This is the new version "MuZero: Mastering Go, chess, shogi and Atari without rules"
https://deepmind.com/blog/arti... [deepmind.com]
Good luck beating that, humans.
Re:Exactly (Score:5, Insightful)
Train a human for a few weeks how to drive a car
You are starting with a 16-year-old human that has already spent years learning edge detection, shadowing, depth perception, object recognition, human behavior prediction, intuitive physics, and "common sense".
If you start with a newborn baby, it won't be able to drive a car in "a few weeks".
Re: (Score:2)
Without even talking about how streets and cars behave in general: what traffic lights are, what's a pedestrian crossing, etc.
Re: (Score:2)
Babies suffer more from poor motor skills than anything else. Wrt driving anyway. And it takes them about 2 years to really start learning language (minimum!) so they couldn't read any signs.
Re: (Score:3)
Re: Exactly (Score:3)
Chess is quite useless in itself. The best it can do for most people is train/predispose humans in certain ways that may turn out to be beneficial. It is a "job" only because other people like to watch "sportspeople" scratch their heads thinking, outmanoeuvring each other, winning, losing. So not really a "job".
I challenge the alpha to drive a two wheeler in suburban Dhaka, starting by not depending on humans to power itself, lobby Hasina or local government to allow it to drive there, and acquiring the mo
Re: (Score:2)
The definition of AI is that it can do something computers currently can't do. First it was chess, then answering questions asked in human language, then recognizing items in pictures, then driving a car, then inventing Nobel-worthy stuff. AI can already do all of those, so now you want to invent a new task for it.
But AI won't go there. The next frontier for AI will be replacing doctors (obviously, at the same time it will replace easier tasks like counting fish and sorting cucumbers).
Re: (Score:2)
> Almost any human with a life time of training will fail to beat Alpha Zero.
What do you mean, "almost any"? It is not possible for a human to win even a single game against it. Even if a human somehow managed to win one game, if they play 100 games, Alpha Zero will win most of those games.
Re: (Score:2)
Re: (Score:3)
What does that mean? I would have thought computers are artificial.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
I think she means a service sold as "AI"-driven is basically almost never fully artificial. Of course there are artificial steps in there.
Re: (Score:2)
Re: (Score:2)
Also, do you really think humans don't need to be guided and trained?
Re: (Score:3)
It's made by humans. That's LITERALLY the definition of something being artificial.
Babies are made by humans ... Just sayin'.
Re: (Score:2)
Re: (Score:2)
The score of 2 is the default starting point for people who aren't scared to post their personal opinions that can be tracked for history/consistency/a
Re: (Score:2)
*In general, Clones, etc notwithstanding.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Babies are made by eggs and sperm, which humans deliver by fucking.
We all know what you're trying to say, but no.
Re: (Score:3)
Yeah, I was reading the statement "AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous." and I don't even know what that means. Everything made by humans is made from natural or artificial resources. This distinction appears to be made to create a pity soundbite, not to express some deeper truth.
The way it's written, it sounds like all AI approaches are Mechanical Turks, but there's a clear dif
Re: (Score:2)
Well, it typically has some artificial parts, but yes, it is not intelligent. There are some people that somehow cannot deal with that, and they begin redefining "intelligence" as things that are really just mechanical steps executed without any understanding. Because of these morons, you sometimes have to use AGI (Artificial General Intelligence) to make a statement, but intelligence and general intelligence are really just the same thing.
Re: (Score:2)
There are also those who understand the word "artificial" to mean "fake." So rather than redefining "Intelligent" they recognize that the term "artificial intelligence" is semantically equivalent to saying "not-actual intelligence," which is an accurate description of our modern computers (they aren't actually intelligent).
Re:Exactly (Score:5, Insightful)
How do we measure the intelligence of a thing, particularly a non-human thing? Say a lab rat, or a chimpanzee, or a bird?
And when a computer can perform tasks without having been expressly programmed to achieve that particular task, but instead has simply been programmed to do nothing more than find the most optimal path to whatever goal it is presented with, and the resulting behavior is outwardly indistinguishable from that of a creature that we would consider intelligent, how is the computer software that is achieving that goal not exhibiting intelligence?
Re: (Score:3)
How do we measure the intelligence of a thing, particularly a non-human thing? Say a lab rat, or a chimpanzee, or a bird?
One method I learned in Psychology 101: Take a wire fence. Put an animal on one side, nice food just opposite on the other side of the fence, and an opening in the fence five meters away. A dog is clever enough to run away from the food, through the opening, and then back to the food. A chicken can't do that, it tries to get through the fence on the direct way to the food and fails.
On the other hand, my dog is definitely totally incapable of identifying cars. She would run into traffic without hesitation
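The detour test described above can be sketched in code (grid, fence position, and agents all hypothetical): a greedy agent that only ever steps toward the food gets stuck at the fence, while an agent that searches the space happily walks away from the food first and finds the opening.

```python
# Toy model of the detour test: a greedy "chicken" vs a searching "dog".
from collections import deque

FENCE_X = 2          # vertical fence at x == 2 ...
GAP_Y = 5            # ... with an opening at y == 5, five cells away
START, FOOD = (0, 0), (4, 0)

def blocked(pos):
    x, y = pos
    return x == FENCE_X and y != GAP_Y

def neighbors(pos):
    x, y = pos
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx <= 6 and 0 <= ny <= 6 and not blocked((nx, ny)):
            yield (nx, ny)

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy(start, food, max_steps=50):
    """Always step toward the food; no memory, no willingness to back off."""
    pos = start
    for _ in range(max_steps):
        if pos == food:
            return True
        nxt = min(neighbors(pos), key=lambda p: dist(p, food))
        if dist(nxt, food) >= dist(pos, food):
            return False  # stuck pressing against the fence
        pos = nxt
    return False

def bfs(start, food):
    """Breadth-first search: explores away from the food to find the gap."""
    queue, seen = deque([start]), {start}
    while queue:
        pos = queue.popleft()
        if pos == food:
            return True
        for n in neighbors(pos):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return False

print(greedy(START, FOOD), bfs(START, FOOD))  # chicken fails, dog succeeds
```

The difference is not raw capability but whether the agent can tolerate locally moving away from the goal, which is one workable operationalization of the "cleverness" the test measures.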
Re: (Score:2)
Re: (Score:2)
I would suggest that if the computer can "do nothing more than find the most optimal path" it is not intelligent as it is just finding an extremum. Computing gradient descent doesn't count as intelligent to me. A good test would be to change the problem domain and see if it can find the new optimal path. If it cannot recognise when it needs to change its algorithm, it is not intelligent.
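The point about "just finding an extremum" can be made concrete (function and step size hypothetical): gradient descent will reliably slide to the minimum of the function it is given, but it has no notion of whether that function is still the right problem.

```python
# Minimal sketch of gradient descent finding an extremum of a fixed function.
# It optimizes; it does not notice if the problem domain changes under it.

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Follow the negative gradient from x0; return the final point."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3); the minimum is at x = 3.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 3))  # converges near 3.0
```

Swap in a new objective and the procedure is just as happy to minimize it, which is the parent's point: nothing in the loop recognizes that the domain changed or that a different algorithm is now needed.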
Re: Exactly (Score:2)
They are not morons. You just don't know that in the field of AI, "intelligence" does not mean what it means colloquially. It has a very specific, scientific meaning far removed from (but inspired by) the meaning the general public ascribes to the word "intelligence".
The field of AI has made lots of progress in the parts of AI other than "AGI" too. That part is quite a genuine, well defined, productive field of study.
Re: (Score:3)
It's like spinning straw into gold: algorithms can chew through piles of data that aren't worth paying humans to look at, then generate useful results. Those results don't have to be *great*. They don't even have to be provably *right*. They only have to be useful. So maybe it's more like spinning straw into brass.
Whether the results are good enough depend on who you ask. If you ask the developer, he'll be delighted with the job the software does. Same with the manager who decided to buy the applicat
Re: (Score:2)
Re: (Score:2)
Think for a second before you just blindly repeat somebody.
Re: (Score:2)
It has to be guided and trained by an actual human being to be worth anything but trash.
How is it not artificial?
Think for a second before you just blindly repeat somebody.
Well since you're snarky, the same way an Artificial Explorer is not artificial or an explorer, it's a robot on Mars. DUH.
TFA context for the reading challenged.
"Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems a
Re: (Score:2)
How is a robot on Mars or anywhere else for that matter, not artificial?
What natural process produced it?
Obviously its components come from natural resources, but robots such as what is on Mars are artificially constructed by human beings, therefore they are artificial.
What we call A.I. is man-made, therefore it too is artificial.
As for whether or not it's intelligent, you need to come up with an unambiguous definition of intelligence. Unfortunately for A.I., the goalposts keep moving on that one.
Re: (Score:2)
Listen closely this time, dipshit. Some so-called "AI" products actually use humans, not computers, to solve problems. Mechanical Turk style
But we're talking about AI IN GENERAL. Listen, at all, you dipshit.
Re: (Score:2)
Which two humans fucked to produce the mars explorer robot?
Re: (Score:3)
Re: (Score:2)
It's a marketing term (Score:5, Informative)
Re: (Score:2)
No, it's literally the definition, from the people who coined the term.
Some fiction authors got creative with the idea and a bunch of nerds read their books and are now bitter that they haven't come true yet, but that's about as far as the abuse of the term goes.
Re: (Score:2, Insightful)
go, not so hard.
Yeah, things are normally not "so hard" after we've reliably conquered them. Doesn't mean it isn't actually hard.
Go forward 60 years. Will computers be smart enough to program themselves? Maybe 100 years.
AlphaZero is already part of that progress. It literally programmed itself to play Go, Chess, and Shogi.
Re: (Score:3)
Will computers be smart enough to program themselves? Maybe 100 years.
Then the computers will not need us any more.
According to this paper, as long as it's only us programming them, computers cannot get more complex than we are:
http://www.math.utep.edu/Facul... [utep.edu]
So that still leaves open the possibility of computers programming themselves. But only for the simple stuff -- the sort of stuff normal programmers do. Not an AI-based technological singularity. [wikipedia.org]
Re:AI is not very intelligent ... Yet (Score:5, Insightful)
First they said that it could not do symbolic algebra because it could only do numbers. And then it did do symbolic algebra better than most undergraduates.
Then they said it could not play top grade chess, because that required ingenuity. But then it became grand master.
Then they said it could not play go, because the search space was too large. And then it played go.
Then they said it could never operate in the real world because it is too complex. And then it started to drive cars.
Then they said ...
Then they said it could never perform unaided artificial intelligent research as well as top researchers can. And it cannot do that. Yet.
Re: (Score:2)
It has come a long, long way in the last 60 years, indeed in the last 20 years. Speech recognition is commonplace and not too bad. Likewise image analysis.
Grand masters of chess and go, not so hard. But it also won Jeopardy! Sure, using tricks, but it won.
Go forward 20 years. We will start to see lots of semi-autonomous robots. Self-driving cars. Mining trucks already drive themselves.
Go forward 60 years. Will computers be smart enough to program themselves? Maybe 100 years.
Then the computers will not need us any more.
In the past there simply wasn't the processing power, memory or training data available to make things work, even where the same basic building blocks of techniques are being used. More processing power, etc., has then allowed better investigation of deeper networks. And with more training data and the ability to share, pre-built recognisers for features can be quickly built into deep networks to enhance the rapidity of research. Back in 'the day' you could be waiting weeks to optimise even the barest of ne
Finally! (Score:4, Interesting)
Re: (Score:2)
eradicate the resistance
That's Archibald Tuttle. Not Buttle.
Re: (Score:2)
Missed it by that much .... (Score:2)
But saying, "Hey, Alexa, order me some toilet rolls," ...
Or two tons of creamed corn [xkcd.com] ...
Garbage Story (Score:5, Insightful)
I don't understand why the editors keep choosing these stupid stories. This interviewee is not an AI researcher. She "studies the social and political implications of artificial intelligence", which is the polite way of saying she's another social studies quack trying to jump on the AI bandwagon. And does the interviewer talk about new developments in this fast-moving field and the exciting new possibilities it's opening up? No, they talk about racism and sexism, because of course they do. They can't talk about anything technical because neither the interviewer nor the interviewee actually knows what they are talking about.
Which is how we end up with brain dead quotes like "AI Is Neither Artificial Nor Intelligent". Really? You don't think AI, a piece of software is artificial?
Re: (Score:3)
Even the summary should have clued people in when mentioning "chains of extraction".
Re: (Score:2)
"chains of extraction"
Yeah. Everyone knows that the Internet is a series of tubes.
Re: (Score:3)
Even the summary should have clued people in when mentioning "chains of extraction".
"Training datasets used for machine learning software that casually categorize people into just one of two genders" was a big clue that we're not dealing with someone who lives in reality.
Re: (Score:2)
Re: (Score:2)
You really do not need to be an AI researcher to find out reliably that "AI" is not intelligent. It is quite enough to understand what it can do (or rather cannot do) in some application domains.
The "intelligence" in AI has a very specific definition compared to the layman definition of "intelligence".
You might not need to be an AI researcher, but you should still know the AI terminology and definitions before claiming what something is or is not.
Darwin will take care of bad AI (Score:2)
If your business is using AI and over-charging potentially lucrative customers, your profits will suffer. The company not discriminating will be the winner and all will be right with the world.
I don't understand all the fretting. The only caveat is that the government should ensure real competition and a level playing field. Don't fix the symptoms.
Well, duh! (Score:2)
At least on the "intelligent" part. "Artificial" seems to be pretty descriptive with regards to a piece of code running on some computer hardware. Of course, the other parts of a process like ordering that toilet roll may vary.
Except that's not what she MEANS (Score:5, Insightful)
I *agree* with her assertion that it's neither Artificial nor Intelligent. Full marks.
But as I read the article (IKR?) it's clear that what she's objecting to is that it doesn't give her the woke results she wants it to.
What happens when you manage to program a perfectly unbiased system and it tells you for example that Asians are intrinsically smarter than everyone else?
What if it says white people are dumber than everyone else?
What if it says black people are dumber than everyone else?
Were YOUR reactions to those two sentences different? An objective computer would parse them exactly the same.
Re: (Score:3)
What happens when you manage to program a perfectly unbiased system and it tells you for example that Asians are intrinsically smarter than everyone else?
It means you overestimated how perfectly unbiased your system is. Anyone who tells you they've perfected the statistics of their data analysis is lying.
Take your least socially controversial subject -- I guarantee you there will still be debates around the statistics: the results as well as how the analysis methods are being used. How many wasted hours have Slashdotters spent bemoaning the accuracy or usefulness of "top 10 programming languages" lists? People can't get that right, yet you want to believe that som
Re:Except that's not what she MEANS (Score:5, Insightful)
She makes a few good points, but I agree that she makes them very poorly, and the fact that she tries to shoehorn her woke ideas into every answer doesn’t help. And she offers no new insights in this interview: all of these concerns about AI have been raised years ago. I can only hope that her book is more insightful than what she tells us in this interview.
Re: (Score:2)
And she offers no new insights in this interview: all of these concerns about AI have been raised years ago.
I think there are some concerns that need to be raised loudly at least once a year. New or not. I think the USA decided in 1865 that white and black people have the same rights, and 155 years later we raise this matter again and again.
Now the point that _correlation_ must be removed from any decision making is really important. Even if a correlation was objectively there, it would be totally unfair to punish people for bad actions of others in a group, or reward them for good actions of others in a group
Re: (Score:2)
I think the USA decided in 1865 that white and black people have the same rights
When did blacks get the right to vote in the USA?
Darwin AI Awards! (Score:2)
now that sounds like a funny new category for the Darwin Awards.
If humans fail miserably, AI would fail at 10 times the speed and scale of humans.
I can only think of Microsoft's chatbot turning into a xenophobic racist within a mere day of on-the-job training.
Very good article. I'm really amazed by one thing: (Score:2)
Some good points but the solution has an issue. (Score:2)
She makes some good points. But her solution has an issue:
What do you mean when you say we need to focus less on the ethics of AI and more on power? ... does it put power in the hands of the already powerful? ... these systems are empowering already powerful institutions -- corporations, militaries and police.
What's needed to make things better? ...
Much stronger regulatory regimes
So the solution to AI being primarily in the hands of (and abused by) the powerful is to appeal to the most powerful to pass,
Re: (Score:2)
The "do not call list" does nothing to block phone spam (while simultaneously killing most state laws that had let the phone-spammed sue the phone spammers).
At some point in the 1980s I read a short story in a computer magazine about a spam call going to someone's phone, and a duel starting between the spam caller's AI and the phone's AI, with the spam AI trying to beat the phone AI into submission and let the call go through to the human. (Both were operating at a level that no AI today could match.)
How about curing cancer / colonizing Mars? (Score:2)
Sounds like the biggest aspiration of Microsoft is to automate wokeness rather than open new horizons for humanity. Even then they are insulting everyone else by implying that only straight white men can code and save everyone else from bigotry. If there is such an underutilized market for giving credit to women, why not a female owned bank to undercut Wells Fargo for billions of potential customers? If camera apps suck for beautiful dark skin selfies, why not make it a niche for an independent developer t
Ms Crawford sounds completely confused (Score:5, Insightful)
There are valid questions about AI but the ones she raises are not among them.
She is confusing what we use AI for with the technology itself. The clearest illustration is her use of the case of ordering a toilet roll through Alexa. The effects of ordering it are the same whether it's done by a phone call to a person, or through an automated shopping system, or through surface mail. They have nothing to do, in either case, with the technology of placing the order.
If you object to the consequences of the wholesale use of toilet rolls, what you have to change is how we behave, the fact that we use them in the quantities we do; or maybe, if it's the shipping that bothers you, change where and of what they are made. There is zero point in blaming the AI that is serving as one of many ways of buying them.
A similar point can be made about credit scores. The point is not that credit algorithms deliberately give lower credit scores to some groups. The lower scores result from a series of individual decisions on individual limits, given those individuals' risk factors.
The question is whether the algorithms are successful. The test here is default rates. Is a given method of assessing credit delivering acceptable default rates? Is it doing better than an available alternative? If the result is lower average scores for some groups, is this simply a consequence of minimizing defaults among the individuals being processed?
If so, the fact that some groups come up with lower average credit limits is simply an outcome of the fact that they have on average higher default rates at the same credit limits as other groups. Average color, race, gender, age or geographical disparities result from correct and completely neutral credit decisions. This isn't discrimination, or even due to AI. It's simply a consequence of rational allocation of credit limits on a case-by-case basis. You are always going to find some groups on average higher and lower; it's because they have more or fewer high- or low-risk individuals.
As soon as you start thinking seriously about this you realize the problem is not AI. The problem, if there is one, is how to process credit applications. You could argue that less restrictive credit limits will not increase default rates. Fine, try it and see. Don't blame AI if it fails; don't credit AI if it succeeds. All AI is doing is implementing the policies; it has nothing to do with whether these are correct and fit for purpose.
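The argument in the paragraphs above can be sketched numerically (all scores and limits hypothetical): a rule that sets each limit purely from the individual's own risk score, with no group variable anywhere in it, still produces different group averages whenever the groups contain different mixes of risk.

```python
# Per-individual credit rule with no group input: group-average disparities
# fall out of the risk-score mix alone. Numbers are illustrative only.

def credit_limit(risk_score):
    """Lower individual risk -> higher limit. The rule never sees the group."""
    return max(0, 10000 - 200 * risk_score)

group_a = [10, 12, 15, 20]   # hypothetical risk scores
group_b = [25, 30, 35, 40]

avg_a = sum(credit_limit(r) for r in group_a) / len(group_a)
avg_b = sum(credit_limit(r) for r in group_b) / len(group_b)
print(avg_a, avg_b)  # averages differ even though the rule is identical
```

Whether the risk scores themselves are fair inputs is a separate question, and it is the substantive one; the averaging step adds no discrimination of its own.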
Then we encounter sex. The idea that there are only two sexes is thought by some to be wrong, and AI is accused of holding it. Well, it's probably true that current AI systems are set up to categorize individuals as male or female, and that is indeed binary. And some people think there are, in humans, more than two sexes. Others think we should not be distinguishing between male and female at all. Some think we should be using the concept of gender instead of that of sex.
Is this an issue about AI? Certainly not. Once you decide how many sexes there are, and what the criteria are for deciding which one an individual is, you can set up an AI system to sort cases into those buckets. But if you think the buckets are the wrong buckets in the first place, don't criticize AI. It's just implementing a policy which has been decided independently of it, just as if you used a room full of humans sorting the cases into the buckets. If there are two buckets when there should be four, that's not down to the fact that you are using people, these particular people, or an algorithm. It's down to your decision about something substantive: how many sexes there are for people to be sorted into.
Also, don't blame the decision to sort on sex on the method being used to sort. It's completely independent of it.
The lady is deeply confused, and the fact that she is in a senior position at a major tech company while in the grip of such elementary confusions is perfectly extraordinary. What on earth is MS thinking of, putting someone with this level of confusion in this position?
Re: Ms Crawford sounds completely confused (Score:2)
You are right about the toilet paper confusion.
About credit scores, there are risks. If a credit score is only affected by the individual's own actions, there is nothing unfair about it. But AI systems, especially supervised learning systems, might base their classification on any aspect of an individual's data, and this includes race, perhaps deduced from name, address, college, etc. The operators of that supervised learning system need to carefully separate out pieces of data from which race (or other aspects of on
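The leakage this comment describes is easy to demonstrate on synthetic data. In this sketch (invented zip codes and group labels, purely illustrative), the protected attribute is never given to the "model", yet a simple majority-vote rule over a correlated proxy recovers it well above chance.

```python
from collections import Counter

records = [
    # (zip_code, protected_group) -- synthetic, illustrative data
    ("48201", "A"), ("48201", "A"), ("48201", "A"), ("48201", "B"),
    ("90210", "B"), ("90210", "B"), ("90210", "B"), ("90210", "A"),
]

# Group the protected labels by zip code, as a learner implicitly would.
by_zip = {}
for z, g in records:
    by_zip.setdefault(z, []).append(g)

# Predict the protected attribute from the proxy alone: majority vote per zip.
majority = {z: Counter(gs).most_common(1)[0][0] for z, gs in by_zip.items()}

correct = sum(majority[z] == g for z, g in records)
accuracy = correct / len(records)
print(accuracy)  # 0.75: the proxy recovers the "hidden" attribute above chance
```

This is why simply deleting the protected column is not enough; any feature that correlates with it can carry the same information back into the model.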
idiot banner ? (Score:2)
I don't have time to RTFA, but the summary makes her look like a person who needs to attend Thinking 101 - desperately:
We aren't used to thinking about these systems in terms of the environmental costs. But saying, "Hey, Alexa, order me some toilet rolls," invokes into being this chain of extraction, which goes all around the planet... We've got a long way to go before this is green technology. Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.
Right. Because ordering things through Alexa is somehow different from doing it online. She confuses so many things just in the first two sentences; it's amazing.
First, she considers Alexa an AI all-the-way-through, when it's more likely that there's some machine learning in the voice recognition and the search, but a large part of the "order a product for the Amazon account I have registe
Detecting Covid (Score:3)
It turns out their machine learning algorithm quickly found that older patients have Covid more often, so it looks at the age instead of the X-ray image... Can we call this Artificial Stupidity instead?
By the way, one of the first attempted AI applications tried to distinguish between Russian and American tanks on photos. They had a 100% success rate. Then someone pointed out that the Russian tank photos had all been taken in cloudy weather, and the American tank photos in sunshine. So the AI just looked at the brightness of the picture. Dark picture = Russian tank. Bright picture = American tank.
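The tank anecdote is a classic example of shortcut learning, and the failure mode is easy to reproduce with a toy sketch (all numbers invented): when labels correlate perfectly with brightness in the training set, a trivial brightness threshold scores 100% there, and the shortcut is only exposed by an example that breaks the correlation.

```python
train = [
    # (mean_brightness, label) -- all "russian" photos happen to be dark
    (0.20, "russian"), (0.25, "russian"), (0.30, "russian"),
    (0.70, "american"), (0.75, "american"), (0.80, "american"),
]

def classify(brightness):
    """A degenerate 'classifier' that only thresholds on brightness."""
    return "american" if brightness > 0.5 else "russian"

train_acc = sum(classify(b) == y for b, y in train) / len(train)
print(train_acc)  # 1.0 on the training set

# A Russian tank photographed in sunshine exposes the shortcut:
print(classify(0.9))  # "american" -- wrong; the model only learned the weather
```

The Covid/age example above is the same mechanism: the model latches onto whatever feature separates the training labels most cheaply, not onto the feature you intended.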
AI is a category, not a threshold (Score:2)
Re: (Score:2)
Black without saying black (Score:3)
If an AI cannot discriminate based on race, it will discriminate based on every single aspect associated with race instead. No father figure in your life? +1 black. Live in Detroit? +1 black. Listen to rap, R&B, or hip-hop? +1 black. Prior location was a BLM protest? +1 black. Suffer severe keloids? +1 black. If statistics prove that black people are less likely to pay back loans, and the AI cannot use their race as a factor, the AI will say "This man has no father figure, lives in Detroit, listens to Immortal Technique, and a knife injury caused a horrific keloid on his face, these factors all contribute to not paying back their loans."
In extreme cases this could have a chilling effect and economic implications. If an AI discriminates against blacks without targeting blacks (let's be honest here, having Dance with the Devil on your playlist is not a protected class), then people will not want to do things associated with black people and black culture. People might end up moving out of a black city (or refuse to move in), or no longer buy products that are enjoyed by black people such as various magazines, movies, and hair products.
The only way to stop this is to have a total ban on AI (and computers in general) from judging or profiling humans.
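The mechanism this comment describes, a score rebuilt entirely from correlates of a banned attribute, can be sketched abstractly. The feature names and weights here are generic placeholders, not any real system's inputs: the banned attribute never appears, yet the score is a function purely of features that stand in for it.

```python
# Hypothetical weights over "neutral" features that happen to correlate
# with a banned attribute. The banned attribute itself is never an input.
PROXY_WEIGHTS = {
    "proxy_feature_1": 1,
    "proxy_feature_2": 1,
    "proxy_feature_3": 1,
}

def proxy_score(features):
    """Sum the weights of the proxy features present in the record."""
    return sum(w for f, w in PROXY_WEIGHTS.items() if features.get(f))

applicant = {"proxy_feature_1": True, "proxy_feature_2": True,
             "proxy_feature_3": False}
print(proxy_score(applicant))  # 2 -- built entirely from correlates
```

Whether banning the correlates too (or banning such scoring outright, as the comment proposes) is the right remedy is a policy question; the sketch only shows why removing the attribute alone does not remove its influence.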
Re: (Score:2)
Oh, for the love of ... (Score:2)
Training datasets used for machine learning software that casually categorize people into just one of two genders
Translation: "It's a big problem that actual data doesn't value our mass hysteria the way it should!"
Does she not know what "artificial" means? (Score:2)
That nonsensical claim is, by itself, more than enough reason to consider her work suspect. That she goes on to express ignorance regarding the concept of ethics, claiming they are necessary but insufficient because of what amounts to a series of... poorl
Re: (Score:3)
[Citation Needed]
Exactly no media that anyone should listen to says that Hillary won. She did not. The states ratified their results. Congress ratified the results from the States. Trump was inaugurated, and "served" for 4 years before being shown the door by the electorate.
Saying that Hillary won is just as stupid as saying that Trump won in 2020 - it's pure fiction, and fails a test that is regularly given to people who have recently had an impact to the head.
Saying that "Media" says something with
Re: (Score:3)
Winning by 2.86 million is winning. Losing by 2.86 million is losing. Anything else is minority rule, dictatorship by definition.
It's more complex than that in the US. Hillary Clinton won the popular vote, but due to how the votes are spread you got a contender for the top spot on the "worst US presidents" list (possibly beaten by Buchanan): the most corrupt, lying, fact-ignoring, incompetent, divisive and dictator-admiring president you've had.
Re: (Score:2)
Re: (Score:2)
Or does democracy only work for you when your side wins?
Dehumanizing the people who vote against your political party does not mean they do not get to vote.
Your whole country is built on equal rights and you are trying to remove the rights of people who don't have a high enough IQ.
What's next, "they should stay in the fields and pick cotton, WE are their superiors."
Idiot.
What if the shoe was on the other foot and they said "Intellectuals have their noses stuck
Re: (Score:2)
And that's why we have a Federation of states that elect our leader - straight popular vote has 5 or 6 most populous states electing a leader and 40+ states going along for the ride.
People like to bitch about the electoral college, but it exists for a reason - it gives voice to less populous states that are also in this union when it comes to selecting who represents them in the Federal government.
Re: (Score:2)
So how about instead of removing their rights, we try to get a more informed electorate? As it turns out, when people actually know what the hell they're voting for, they make better decisions.
Re: (Score:2)