The Dangers of 'Black Box' AI (pcmag.com)
PC Magazine recently interviewed Janelle Shane, the optics research scientist and AI experimenter who authored the new book "You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place."
At one point Shane explains why any "black box" AI can be a problem: I think ethics in AI does have to include some recognition that AIs generally don't tell us when they've arrived at their answers via problematic methods. Usually, all we see is the final decision, and some people have been tempted to take the decision as unbiased just because a machine was involved. I think ethical use of AI is going to have to involve examining AI's decisions. If we can't look inside the black box, at least we can run statistics on the AI's decisions and look for systematic problems or weird glitches... There are some researchers already running statistics on some high-profile algorithms, but the people who build these algorithms have the responsibility to do some due diligence on their own work. This is in addition to being more ethical about whether a particular algorithm should be built at all...
[T]here are applications where we want weird, non-human behavior. And then there are applications where we would really rather avoid weirdness. Unfortunately, when you use machine-learning algorithms, where you don't tell them exactly how to solve a particular problem, there can be weird quirks buried in the strategies they choose.
Describing a kind of worst-case scenario, Shane contributed to the New York Times "Op-Eds From the Future" series, channeling a behavioral ecologist in the year 2031 defending "the feral scooters of Central Park" that humanity had been co-existing with for a decade.
But in the interview, she remains skeptical that we'll ever achieve real and fully autonomous self-driving vehicles: It's much easier to make an AI that follows roads and obeys traffic rules than it is to make an AI that avoids weird glitches. It's exactly that problem -- that there's so much variety in the real world, and so many strange things that happen, that AIs can't have seen it all during training. Humans are relatively good at using their knowledge of the world to adapt to new circumstances, but AIs are much more limited, and tend to be terrible at it.
On the other hand, AIs are much better at driving consistently than humans are. Will there be some point at which AI consistency outweighs the weird glitches, and our insurance companies start incentivizing us to use self-driving cars? Or will the thought of the glitches be too scary? I'm not sure.
Shane trained a neural network on 162,000 Slashdot headlines back in 2017, coming up with alternate reality-style headlines like "Microsoft To Develop Programming Law" and "More Pong Users for Kernel Project." Reached for comment this week, Shane described what may be the greatest danger from AI today. "For the foreseeable future, we don't have to worry about AI being smart enough to have its own thoughts and goals.
"Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."
Re: (Score:3)
Yeah, have we fully lost the war of words yet such that AI now means either expert-system, machine-learning, or neural-net?
Are we going to need to invent a new term to replace AI with, like Intelligent Agent, or Conscious Computer, or something?
Has there ever been a successful case of the media taking a word or phrase and misusing it and it eventually being reclaimed properly? I'm looking at you, Hacker...
Re:The problem is calling machine learning AI... (Score:5, Insightful)
Cars are going to be autonomous. Why? Because statistically they are safer than humans and give the driver their concentration back. The driver can do other tasks, like talk on the phone, while the car watches out for pedestrians. Will it be perfect? No. Will it save lives overall because it's statistically better than people? Absolutely. Unquestionably. Computers do work in place of people, freeing up people. That's a wonderful thing. People shouldn't need to work at things machines can do, and do more cheaply. Every time I have read about automating factories and the protests around it, I want to know: how many people get hurt on assembly lines? Wouldn't it be fewer if we put fewer people on assembly lines? Of course. Those people can now do something else, and the cost of the products that are now produced by computers is lower, meaning more people can afford them. There is more supply for everyone, overall. Wonderful. The black box is a black box because its complexity is getting beyond what we can comprehend; the system just becomes too tedious to understand how it arrives at decisions with all the detail involved.
Re: (Score:2)
Cannot Slashdot train some algorithms to find this asshole's posts and disallow them?
Re: (Score:2)
Yeah, have we fully lost the war of words yet such that AI now means either expert-system, machine-learning, or neural-net? Are we going to need to invent a new term to replace AI with, like Intelligent Agent, or Conscious Computer, or something?
You have it backward.
AI was originally used by researchers to refer to machine-learning, neural-nets, etc. You know, stuff they were actually working on. When they wanted to refer to human-level artificial intelligence, they would qualify the term by saying "general AI" or "hard AI".
It was the media and Hollywood that hijacked the term to only mean "conscious computer" and human level robots.
Re: (Score:2)
Not exactly [phrases.org.uk]. All those areas are a subset of AI in that they were considered ways of working towards true AI.
In terms of "originally", the term was coined in A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE [stanford.edu]. To wit:
Re: (Score:2)
"Are we going to need to invent a new term to replace AI with, like Intelligent Agent, or Conscious Computer, or something?"
How about Artificial Consciousness? It's a better description of what we used to mean when we said AI anyway, and as a side bonus it's the same initialism as Anonymous Coward, most of whom could be replaced by very small shell scripts.
Re: (Score:2)
I thought we lost the war of words when it stopped meaning an expert system. If we don't actually spend the time to program the details of the intelligence, it probably won't have any.
All we can say for sure is that given a set of inputs, some trained model outputs the desired value. But if you don't know how it works inside, you can't say that it will give correct results for any other different inputs. And if you're restricted to values you double-checked with a slower algorithm, then you can just use a l
Re:The problem is calling machine learning AI... (Score:5, Insightful)
but until we have a good model and understanding of how brains actually work and a theory behind it ...
There is no reason that we have to completely understand a biological system in order to build an alternative to it.
Aircraft and birds use very different principles (ailerons and flaps vs wing flexing), and a 747 is a lot different from a hummingbird, but that doesn't mean the 747 can't fly.
Artificial neural networks were inspired by biological brains, but they use different techniques. For instance, there is no evidence that biological neurons use differential backpropagation. But that doesn't mean back-prop is an inferior method (at least for vision processing, it appears to be a superior method) or that it can't be part of a general AI.
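For anyone unfamiliar with what "differential backpropagation" looks like in practice, here is a minimal sketch, not from the parent post, of a tiny two-layer network trained on XOR with plain NumPy, pushing error gradients backward through each layer; the layer size, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error gradient back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # typically converges toward [[0], [1], [1], [0]]
```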
Re: (Score:2)
Except the hummingbird can make more hummingbirds, while the airplane cannot make more airplanes. Flight and intelligence aren't in the same league.
The problem with intelligence is figuring out whether you have the right model or set of observations and not getting stuck. You need to be able to understand what you don't understand. Right now ML and many so-called AI techniques have no awareness of "their understanding"; they can basically only do what fits their model or training data.
Re: (Score:3)
Except the hummingbird can make more hummingbirds, while the airplane cannot make more airplanes. Flight and intelligence aren't in the same league.
Reproduction has nothing to do with intelligence. Bacteria can reproduce. So can fire.
Re: (Score:1)
Except the hummingbird can make more hummingbirds, while the airplane cannot make more airplanes. Flight and intelligence aren't in the same league.
Reproduction has nothing to do with intelligence. Bacteria can reproduce. So can fire.
You're missing the point; my point about hummingbirds being able to reproduce themselves was that we only copied the low-hanging fruit of the hummingbird's true complexity. In other words, fully copying the complex flight model of the hummingbird hasn't been done yet. We only crudely copied some aspects of hummingbird flight, not the full complexity.
Re: (Score:2)
We only crudely copied some aspects of hummingbird flight, not the full complexity.
Can a hummingbird fly 520 pax from Chicago to London in 7 hours?
A hummingbird and a 747 fly differently. But whether one is "better" than the other depends on what you are trying to accomplish. As we improve our aircraft, progress is more likely to come from studying jet engines, material science, and high-speed turbulence, than from a better understanding of hummingbirds.
Likewise, there are a few things that ANNs already do better than BNNs. But as ANNs improve, it is likely to come from improvements to
Re: (Score:1)
Can a hummingbird fly 520 pax from Chicago to London in 7 hours?
You missed the point completely. Are modern planes as agile as birds? We've copied the low-hanging fruit of aviation. There is a lot that we haven't been able to fully replicate. I'm saying we're at the low-hanging-fruit stage of AI development, and while many aspects of machine learning are interesting, they are like any other tool. I'll be impressed when AI can get a job.
Re:The problem is calling machine learning AI... (Score:4, Funny)
You missed the point completely. Are modern planes as agile as birds?
Was that ever a requirement?
Modern planes can carry coconuts. Show me a swallow that can do that.
Re: (Score:1)
You missed the point completely. Are modern planes as agile as birds?
Was that ever a requirement?
Modern planes can carry coconuts. Show me a swallow that can do that.
Show me a plane with wings as agile as bats.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:1)
Modern planes can carry coconuts. Show me a swallow that can do that.
An African or a European swallow?
Re:The problem is calling machine learning AI... (Score:4, Insightful)
I'll be impressed when AI can get a job.
You mean, you'll be impressed when they can perform well in a job interview. They already got the jobs.
Re: (Score:2)
The problem with intelligence is figuring out whether you have the right model or set of observations and not getting stuck.
You don't have the right model, and you're stuck. Now, go ahead and get yourself out of there.
Re: (Score:2)
The question (when we are trying to answer whether it's AI) is whether it's intelligent.
Re:The problem is calling machine learning AI... (Score:4, Insightful)
"Submarines are useful, but we're trying to answer whether they can really swim"
Re: (Score:2)
It's a paraphrase of Dijkstra.
Re: (Score:2)
They can't swim, they are stubbornly refusing to learn how to sink.
Re: (Score:2)
For instance, there is no evidence that biological neurons use differential backpropagation. But that doesn't mean back-prop is an inferior method (at least for vision processing, it appears to be a superior method)
I don't believe there's any evidence yet that it's a superior method. Don't for a moment believe those papers which claim superhuman performance on ImageNet, for example; the claim is utter bullshit.
Re: (Score:2)
Don't for a moment believe those papers which claim superhuman performance on ImageNet for example
The superhuman performance has been repeatedly demonstrated under controlled conditions during competitions.
ImageNet Annual Challenge [wikipedia.org]
There are also many other controlled contests.
Re: (Score:2)
The superhuman performance has been repeatedly demonstrated under controlled conditions during competitions.
No it hasn't. And if you believe it has then I have a bridge to sell you.
Don't believe me? Take one of the ImageNet winners (many are open source) and try it on some non-ImageNet images, just random photos of crap that you took yourself, not being especially careful. Then you will see just how superhuman it is.
It isn't.
Also you might like to look into the conditions etc for the claim of superhuman perform
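For anyone who wants to try the experiment suggested above, here is a rough sketch (my own, not from this thread) that loads an open-source ImageNet-pretrained model and classifies one of your own photos. It assumes torchvision >= 0.13 is installed, and "my_photo.jpg" is a placeholder path.

```python
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet50_Weights.DEFAULT             # ImageNet-pretrained weights
model = models.resnet50(weights=weights).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("my_photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)[0]

# print the top-5 predicted classes with their probabilities
for p, idx in zip(*probs.topk(5)):
    print(f"{weights.meta['categories'][int(idx)]:30s} {p.item():.1%}")
```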
Re: (Score:2)
They fall under the blanket term of AI because of that, not because they have any actual intelligence or sentience, artificial or otherwise.
We can make computer systems that can be compared to a worm's brain, but it's still not as capable as a worm when it comes to actual "thinking".
Of course that's because we still don't understand the basic functionality of the mind, but it's apparently not something as simple as binary.
There has been some research that has i
Re: (Score:2)
We can make computer systems that can be compared to a worm's brain, but it's still not as capable as a worm when it comes to actual "thinking".
How can you tell? And how can you tell when we have reached that point?
Of course that's because we still don't understand the basic functionality of the mind
No, it's the other way around. You first have to identify a method of telling apart thinking and non-thinking systems. Once you have that method, you can start understanding.
Re: (Score:2)
You first have to identify a method of telling apart thinking and non-thinking systems. Once you have that method, you can start understanding.
It's an iterative process that we have already begun. We cannot distinguish between thinking and non-thinking systems because we currently have only one example of a class of thinking systems.
The definition of "thinking" and the methods for detecting it will change as we build and discover more examples that test the definition.
And yes, this is why Pluto is no longer a planet.
Re: (Score:2)
Understanding doesn't help. The test must be done based on input/output behavior, which we can already observe right now.
If an artificial brain behaves in exactly identical ways as a real brain (human or animal) then they must both be "thinking". Any definition of "thinking" that invokes properties of the system that do not change its behavior is pointless, because there's no way you can identify the relevant properties if they don't influence behavior.
Re: (Score:3)
They don't, therefore they aren't.
Re: (Score:2)
Not right now, but that's not the point.
It's about establishing that the only sensible criterion is input/output behavior (i.e. black box testing). That's how our brains were tested during evolution, by their power to increase chances of survival and reproduction by guiding the person's behavior.
Re: (Score:2)
Just like a guinea pig is still a pig from Africa.
Re:The problem is calling machine learning AI... (Score:5, Insightful)
"I just had a gut feeling"
Re: (Score:2)
True, and often when they provide reasoning, it's just made up after the initial choice in order to form a compelling narrative.
There have been some interesting experiments with people with a severed corpus callosum (the bundle of nerves that connects the two brain hemispheres). In one experiment they flash a sign that says "get up and get some water" to one brain half. The subject does as instructed. Then they ask the other brain half why they got up and got the water, and they answer "because I was thirsty".
Re:The problem is calling machine learning AI... (Score:4, Insightful)
But in a lot of cases, they can.
And those cases where they can't? Are the people no longer intelligent?
What if the "black box AI" just told you what you wanted to hear. What If it was also trained to make up an explanation that matched its initial result, after the fact.
You would think it was intelligent. But how would you know that was really its reasoning when making the decision?
What if we make a completely separate AI just for producing justifications.
The first AI tells you the answer. The second AI just tells you an explanation. Kind of like parallel construction.
Which one is Intelligent by your definition? The first one can't explain. The second one didn't decide. Yet together they are 'intelligent' by your definition.
Re: (Score:2)
"Please explain how you arrived at a decision to eat a candy bar even though you were on a diet"
Re: (Score:2)
Not on a diet, just providing an explanation I've heard.
Re: (Score:2)
But that's not the correct explanation. That's a post-fact rationalization of unconscious events that led to grabbing the candy bar. The real reason is that there's a more primitive part of the brain that wasn't interested in dieting at all. That primitive part made the decision to grab the candy bar. The modern part of the brain then came up with a narrative to explain why this happened, maintaining a sense of control with a lousy excuse.
kind of behind the times (Score:2)
"I think ethical use of AI is going to have to involve examining AI's decisions. If we can't look inside the black box, at least we can run statistics on the AI's decisions and look for systematic problems or weird glitches... There are some researchers already running statistics on some high-profile algorithms, but the people who build these algorithms have the responsibility to do some due diligence on their own work."
A lot of machine learning companies are already using statistical testing methods to ensure that minorities aren't discriminated against (or, of course, to ensure the new algorithm doesn't lose money because of some weird edge case). This person is behind the times; instead of reading their book, watching this series would be a better use of your time [youtube.com].
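As a concrete, hypothetical illustration of the kind of statistical testing being described, here is a minimal sketch that compares a black-box model's approval rates across groups; the column names, the toy data, and the 80% "four-fifths rule" threshold are my own assumptions, not anything from the book or the video.

```python
import pandas as pd

def approval_rate_audit(df: pd.DataFrame, group_col: str, decision_col: str):
    """Per-group approval rates plus the ratio of the worst group to the best."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates, rates.min() / rates.max()

# Hypothetical decisions logged from a black-box model
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   0,   0,   0,   1 ],
})

rates, ratio = approval_rate_audit(decisions, "group", "approved")
print(rates)
print(f"worst/best approval ratio: {ratio:.2f}")  # below ~0.8 is a common red flag
```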
Westworld. (Score:1)
"Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."
So you buy a Cherry 2000, it automatically upgrades its AI, decides it doesn't love you anymore, and leaves you for the toaster. Does your medical insurance cover your depression since even a robot can't love you?
Actually, we're just going to keep giving it (AI / them) more and more power. Incrementally, over time, by "smart" businesses. Not that they're physically doing things, but deciding things instead with a human group fronting its decisions. (FLASHBACK: In the 1970s, computer output printed on g
Re: (Score:3)
"Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."
So you buy a Cherry 2000, it automatically upgrades its AI, decides it doesn't love you anymore, and leaves you for the toaster. Does your medical insurance cover your depression since even a robot can't love you?
Dude, you can't buy a Cherry 2000 [wikipedia.org], they're past EOL. The only remaining ones are in a defunct factory in "Zone 7".
Re:Westworld. (Score:5, Insightful)
In 2017, the United States has fragmented into post-apocalyptic wastelands and limited civilized areas. One of the effects of the economic crisis is a decline in manufacturing, and heavy emphasis on recycling aging 20th-century mechanical equipment. Society has become increasingly bureaucratic and hypersexualized, with the declining number of human sexual encounters requiring contracts drawn up by lawyers prior to sexual activity. At the same time, robotic technology has made tremendous developments, and female Androids (more properly, Gynoids) are used as substitutes for wives
Eh, I'll give it a 30%.
Society increasingly bureaucratic
Increasingly hypersexualized
Scavenging manufacturing assembly lines... but to ship to China and elsewhere
Declining sexual encounters
Sexual encounters only half-jokingly needing contracts, but not for the reason they are guessing
X-ray the pipes (Score:4, Interesting)
Factor tables [github.com] are a possible alternative to neural nets. You still get the "array of voters" that neural nets offer, but it's more approachable and dissectible for analysts with four-year degrees and those used to spreadsheets, relational databases, and statistical packages. Factor tables can make AI more like regular office work, including dividing up teams to focus on sub-parts of a problem.
You can run statistics to see which factor(s) or mask layers are giving you the incorrect results using more or less the same techniques a product sales analyst would use.
It probably takes longer than neural nets to get initial results, but the trade-off is a manageable and traceable system. The focus has to shift toward AI projects being easier to manage rather than toward raw initial benchmarks. Yes, it may make AI work a bit more boring, but that's typical of any technology going mainstream and maturing.
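To make the "traceable" point concrete, here is a generic sketch of a decision built from a table of weighted factors, where every factor's contribution can be inspected or aggregated with ordinary spreadsheet-style statistics. This is my own illustration of the general idea, not the linked project's actual format, and the factor names and weights are made up.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    weight: float

# Hypothetical factor table; each row is one "voter" in the final decision
FACTORS = [
    Factor("keyword_match", 2.0),
    Factor("recency", 0.5),
    Factor("length_penalty", -1.0),
]

def score(features: dict) -> tuple[float, dict]:
    """Return the total score and a per-factor breakdown for auditing."""
    contributions = {f.name: f.weight * features.get(f.name, 0.0) for f in FACTORS}
    return sum(contributions.values()), contributions

total, breakdown = score({"keyword_match": 1.0, "recency": 3.0, "length_penalty": 0.8})
print(total)      # 2.0 + 1.5 - 0.8 = 2.7
print(breakdown)  # shows which factor pushed the decision, and by how much
```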
Exactly this. (Score:1)
From the headline: "Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."
The vast majority of people have no idea how these machines work or how flawed they are. Influenced by media, movies, and TV, they mistake fantasy for reality, which leads to far, far too much trust in unproven and unreliable technology being pushed on us way, way too fast, especially so-called 'self-driving cars'.
Re: (Score:2)
The vast majority of people have no idea how these machines work
The vast majority of people have no idea how an internal combustion engine works either.
too much trust in unproven and unreliable technology being pushed on us way, way too fast, especially so-called 'self-driving cars'.
Nonsense. Cars in self-driving mode have driven 1.3 billion miles and killed six people [wikipedia.org]. That is significantly better than humans. Many of these deaths led to bug fixes, so SDC death rates are very likely to be even lower in the future.
SDCs are not being introduced too quickly, but too slowly. Every delay means more unnecessary deaths.
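A quick back-of-the-envelope check of those numbers (my own arithmetic; the human baseline of roughly 1.1 fatalities per 100 million vehicle-miles is an approximate, commonly cited US figure, not something from the parent post):

```python
sdc_deaths, sdc_miles = 6, 1.3e9
sdc_rate = sdc_deaths / sdc_miles * 1e8   # fatalities per 100 million miles
human_rate = 1.1                          # approx. US figure, same units
print(f"Self-driving: {sdc_rate:.2f} fatalities per 100M miles")
print(f"Human:        {human_rate:.2f} fatalities per 100M miles")
```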
Re: (Score:3)
You could be right. But I wouldn't jump to that conclusion too quickly. Were those miles driven in comparable conditions to the human control group? Snow, ice, fog, etc.
Your examples only reach Level 3, even at the highest level:
A Level 3 autonomous driving system would occasionally expect a driver to take over control.
So even the makers admit they are only driving the easy bits.
Re: (Score:2)
Autonomous driving solutions are really, really good at driving on limited-access highways in good weather and road conditions, and without any unexpected situations (construction zone, cop directing traffic, detour on the route, etc.)
Most tests bragging about self driving car capability are done in those conditions, ie. they will emphasize that the car could drive thousands of kilometers across an entire continent, without mentioning that it never had to drive through the city, only on the highway.
Autonomous d
Re: (Score:2)
1. I'll just leave this here: https://www.youtube.com/watch?... [youtube.com]
2. All AI has to be is better than humans. Don't let perfection become the enemy of good or better.
USPTO RFC Response to AI Inventions (Score:2)
A very recent response to the USPTO RFC, whose deadline was extended to Nov 8th: http://abstractionphysics.net/... [abstractionphysics.net]
Blame O-O (Score:2)
It mirrors politics.
Welcome to totalitarian takeover by fiat, intellectual force of arms.
A losing battle (Score:2)
To Know the Heart and Mind.... (Score:2)
It's been said that "nobody can know the heart and mind of man". We make decisions every waking moment. Sometimes, facts are presented upon which we base our decision. Sometimes, decisions are made, seemingly, out of the blue. Yet, few are interested in dissecting a human being's brain to understand exactly "How?" and "Why?" we arrive at our decision.
So, why are we so concerned about a Black Box AI making decisions we can't understand? Why should we insist on the ability to analyze their statistical compu
Re: (Score:2)
Do we deem ourselves superior because we have been "created" by our "God"?
No, because our brains have evolved over billions of years, vs. about 30 years of AI development.
You don't say... (Score:2)
As can be witnessed here on /. whenever the topic of AI comes up.
Daniel Dennett said it years ago (Score:2)
"The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence."- Daniel Denett
https://www.edge.org/response-... [edge.org]
Re: (Score:2)
I sort-of agree, but I also disagree.
We humans are scientists and experimenters. In general, a subset of us will ALWAYS test how well something new works. If I'm a gambler and I set the AI up to pick horses, I'm going to make my predictions first and write them down, then let the AI make its picks. Then I'll compare. And I'll do that for weeks or months until I see just how the AI compares to me.
The same goes for a whole lot of critical industries. As long as regulation gets updated to say, "You can use AI, but
weirdness positioned weirdly (Score:2)
As just about every religion in human history has repeatedly demonstrated, just about any weirdness becomes relatively unremarkable once sufficiently embedded into normative narrative.
Consider, for example, the moral algorithm of Papal infallibility [wikipedia.org] wherein the black box is an invisible connection to the furtive man upstairs—still with fewer recorded public sightings than the Sasquatch. I think people forget just how weird digital algorithms appeared to the general public when they first appeared. Her