
The Dangers of 'Black Box' AI (pcmag.com)

PC Magazine recently interviewed Janelle Shane, the optics research scientist and AI experimenter who authored the new book "You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place."

At one point Shane explains why any "black box" AI can be a problem: I think ethics in AI does have to include some recognition that AIs generally don't tell us when they've arrived at their answers via problematic methods. Usually, all we see is the final decision, and some people have been tempted to take the decision as unbiased just because a machine was involved. I think ethical use of AI is going to have to involve examining AI's decisions. If we can't look inside the black box, at least we can run statistics on the AI's decisions and look for systematic problems or weird glitches... There are some researchers already running statistics on some high-profile algorithms, but the people who build these algorithms have the responsibility to do some due diligence on their own work. This is in addition to being more ethical about whether a particular algorithm should be built at all...

[T]here are applications where we want weird, non-human behavior. And then there are applications where we would really rather avoid weirdness. Unfortunately, when you use machine-learning algorithms, where you don't tell them exactly how to solve a particular problem, there can be weird quirks buried in the strategies they choose.
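
On the "run statistics on the AI's decisions" point: even without opening the black box, an auditor can log a model's decisions alongside group labels and test whether outcomes skew systematically. Below is a minimal sketch of that kind of external audit (the data and group labels are hypothetical; a real audit would need far more care about confounders):

```python
# Audit a black-box model from the outside: collect (group, decision) pairs
# and test whether decision rates differ across groups more than chance
# would explain. Purely illustrative data.
from collections import Counter
from scipy.stats import chi2_contingency

# Decisions logged in production: (group_label, model_decision) pairs.
decisions = [("A", 1), ("A", 0), ("A", 1), ("A", 1),
             ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

counts = Counter(decisions)
groups = sorted({g for g, _ in decisions})
table = [[counts[(g, 0)], counts[(g, 1)]] for g in groups]  # group x decision

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
for g, (neg, pos) in zip(groups, table):
    print(f"group {g}: positive-decision rate {pos / (neg + pos):.0%}")
```

A small p-value here is a reason to investigate, not a verdict; skewed rates can also come from confounders in who gets scored in the first place.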

Describing a kind of worst-case scenario, Shane contributed to the New York Times "Op-Eds From the Future" series, channeling a behavioral ecologist in the year 2031 defending "the feral scooters of Central Park" that humanity had been co-existing with for a decade.

But in the interview, she remains skeptical that we'll ever achieve real and fully-autonomous self-driving vehicles: It's much easier to make an AI that follows roads and obeys traffic rules than it is to make an AI that avoids weird glitches. It's exactly that problem -- that there's so much variety in the real world, and so many strange things that happen, that AIs can't have seen it all during training. Humans are relatively good at using their knowledge of the world to adapt to new circumstances, but AIs are much more limited, and tend to be terrible at it.

On the other hand, AIs are much better at driving consistently than humans are. Will there be some point at which AI consistency outweighs the weird glitches, and our insurance companies start incentivizing us to use self-driving cars? Or will the thought of the glitches be too scary? I'm not sure.

Shane trained a neural network on 162,000 Slashdot headlines back in 2017, coming up with alternate-reality-style headlines like "Microsoft To Develop Programming Law" and "More Pong Users for Kernel Project" (a sketch of the general technique appears below). Reached for comment this week, Shane described what may be the greatest danger from AI today. "For the foreseeable future, we don't have to worry about AI being smart enough to have its own thoughts and goals.

"Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."

The Dangers of 'Black Box' AI

  • "I think ethical use of AI is going to have to involve examining AI's decisions. If we can't look inside the black box, at least we can run statistics on the AI's decisions and look for systematic problems or weird glitches... There are some researchers already running statistics on some high-profile algorithms, but the people who build these algorithms have the responsibility to do some due diligence on their own work."

    A lot of machine learning companies are already using statistical testing methods to ensure that minorities aren't discriminated against (or, of course, to ensure the new algorithm doesn't lose money because of some weird edge case). This person is behind the times; instead of reading their book, watching this series would be a better use of your time [youtube.com].
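
    A concrete example of the kind of statistical test the parent describes is the EEOC "four-fifths rule": flag the model if any group's selection rate falls below 80% of the best-off group's rate. A hypothetical sketch:

    ```python
    # EEOC four-fifths (80%) rule: compare each group's selection rate to
    # the highest group's rate; ratios below 0.8 are flagged for
    # adverse-impact review. The rates here are made up.
    def four_fifths_check(selection_rates, threshold=0.8):
        best = max(selection_rates.values())
        return {group: (rate / best >= threshold, rate / best)
                for group, rate in selection_rates.items()}

    rates = {"group_a": 0.42, "group_b": 0.30}  # share approved, per group
    for group, (ok, ratio) in four_fifths_check(rates).items():
        print(f"{group}: impact ratio {ratio:.2f} -> {'ok' if ok else 'flag'}")
    ```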

  • "Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."

    So you buy a Cherry 2000; it automatically upgrades its AI, decides it doesn't love you anymore, and leaves you for the toaster. Does your medical insurance cover your depression since even a robot can't love you?

    Actually, we're just going to keep giving it (AI / them) more and more power. Incrementally, over time, by "smart" businesses. Not that they're physically doing things, but deciding things instead, with a human group fronting its decisions. (FLASHBACK: In the 1970s, computer output printed on g

    • "Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."

      So you buy a Cherry 2000; it automatically upgrades its AI, decides it doesn't love you anymore, and leaves you for the toaster. Does your medical insurance cover your depression since even a robot can't love you?

      Dude, you can't buy a Cherry 2000 [wikipedia.org], they're past EOL. The only remaining ones are in a defunct factory in "Zone 7".

      • Re:Westworld. (Score:5, Insightful)

        by Impy the Impiuos Imp ( 442658 ) on Sunday November 10, 2019 @08:37AM (#59399330) Journal

        In 2017, the United States has fragmented into post-apocalyptic wastelands and limited civilized areas. One of the effects of the economic crisis is a decline in manufacturing, and heavy emphasis on recycling aging 20th-century mechanical equipment. Society has become increasingly bureaucratic and hypersexualized, with the declining number of human sexual encounters requiring contracts drawn up by lawyers prior to sexual activity. At the same time, robotic technology has made tremendous developments, and female Androids (more properly, Gynoids) are used as substitutes for wives

        Eh, I'll give it a 30%.

        Society increasingly bureaucratic
        Increasingly hypersexualized
        Scavenging manufacturing assembly lines... but to ship to China and elsewhere
        Declining sexual encounters
        Sexual encounters only half-jokingly needing contracts, but not for the reason they are guessing

  • X-ray the pipes (Score:4, Interesting)

    by Tablizer ( 95088 ) on Sunday November 10, 2019 @03:03AM (#59399044) Journal

    ethical use of AI is going to have to involve examining AI's decisions. If we can't look inside the black box, at least we can run statistics on the AI's decisions and look for systematic problems or weird glitches

    Factor tables [github.com] are a possible alternative to neural nets. You still get the "array of voters" that neural nets offer, but the result is more approachable and dissectible for analysts with four-year degrees who are used to spreadsheets, relational databases, and statistical packages (a rough sketch of the idea follows below). Factor tables can make AI more like regular office work, including dividing up teams to focus on sub-parts of a problem.

    You can run statistics to see which factor(s) or mask layers are giving you the incorrect results using more or less the same techniques a product sales analyst would use.

    It probably takes longer than neural nets to get initial results, but the trade-off is a manageable and traceable system. The focus has to shift toward AI projects being easier to manage rather than toward raw initial benchmarks. Yes, it may make AI work a bit more boring, but that's typical of any technology going mainstream and maturing.
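
    The linked repo defines the actual technique; purely as a hypothetical illustration of the general idea above (a score assembled from an explicit table of weighted factors, so each decision can be audited factor by factor with ordinary spreadsheet-style tools), something like:

    ```python
    # Hypothetical factor-table-style classifier: the "model" is an explicit
    # table of factor weights, and every score decomposes into per-factor
    # contributions an analyst can inspect directly.
    factor_weights = {              # illustrative weights, not from the repo
        "all_caps_subject": 2.5,
        "has_attachment": 1.2,
        "sender_known": -2.0,
    }

    def score(item):
        contributions = {f: w * item.get(f, 0) for f, w in factor_weights.items()}
        return sum(contributions.values()), contributions

    total, parts = score({"all_caps_subject": 1, "has_attachment": 1})
    print(total)  # 3.7 -> e.g. "spam" above some threshold
    print(parts)  # per-factor contributions, auditable row by row
    ```

    Running the suggested statistics then amounts to grouping misclassified items by factor, much like a sales analyst grouping lost deals by region.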

  • From the headline: "Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."

    The vast majority of people have no idea how these machines work or how flawed they are. Influenced by media, movies, and TV, they mistake fantasy for reality, which leads to far, far too much trust in unproven and unreliable technology being pushed on us way, way too fast, especially so-called 'self-driving cars'.

    • The vast majority of people have no idea how these machines work

      The vast majority of people have no idea how an internal combustion engine works either.

      too much trust in unproven and unreliable technology being pushed on us way, way too fast, especially so-called 'self-driving cars'.

      Nonsense. Cars in self-driving mode have driven 1.3 billion miles and killed six people [wikipedia.org]. That is significantly better than humans. Many of these deaths led to bug fixes, so SDC death rates are very likely to be even lower in the future.

      SDCs are not being introduced too quickly, but too slowly. Every delay means more unnecessary deaths.
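
      Checking the parent's arithmetic against a rough human baseline (about 1.1 fatalities per 100 million vehicle-miles is the approximate recent US NHTSA figure; it is an assumption here, not a number from the parent):

      ```python
      # Back-of-the-envelope fatality-rate comparison, per 100M miles.
      sdc_deaths, sdc_miles = 6, 1.3e9        # parent's figures
      human_rate = 1.1                        # approx. US rate per 100M miles
      sdc_rate = sdc_deaths / sdc_miles * 1e8
      print(f"SDC: {sdc_rate:.2f} vs human: {human_rate:.2f} deaths per 100M miles")
      # SDC: 0.46 vs human: 1.10 -- though, as the reply below notes, the
      # SDC miles may not have been driven in comparable conditions.
      ```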

      • You could be right. But I wouldn't jump to that conclusion too quickly. Were those miles driven in comparable conditions to the human control group? Snow, ice, fog, etc.

        Your examples show, even at the highest level, only Level 3.

        A Level 3 autonomous driving system would occasionally expect a driver to take over control.

        So even the makers admit they are only driving the easy bits.

        • by green1 ( 322787 )

          Autonomous driving solutions are really, really good at driving on limited-access highways in good weather and road conditions, and without any unexpected situations (construction zone, cop directing traffic, detour on the route, etc.)

          Most tests bragging about self-driving car capability are done in those conditions, i.e. they will emphasize that the car could drive thousands of kilometers across an entire continent, without mentioning that it never had to drive through the city, only on the highway.

          Autonomous d

          • 1. I'll just leave this here: https://www.youtube.com/watch?... [youtube.com]

            2. All AI has to be is better than humans. Don't let perfection become the enemy of good or better.

          • Exactly. For once someone else says it all and I don't have to. So many millions have been invested in this 'technology' only to find it doesn't quite cross the finish line; it wasn't Just Another R&D Cycle like the bosses thought it would be. Now they're desperately trying to recoup the investment and show a profit before investors and stockholders revolt and call for their heads. It's being rushed to market, and the average unsuspecting citizen is going to pay the price, perhaps with their lives,
  • A very recent response to the USPTO RFC (deadline extended to Nov 8th): http://abstractionphysics.net/... [abstractionphysics.net]

  • It mirrors politics.

    Welcome to totalitarian takeover by fiat, intellectual force of arms.

  • What is profitable and convenient always triumphs. People will eventually just stop worrying about this, conform their days to the flawed AI world and the decisions it makes based on that world, and accept the weirdness and death in it for the convenience and profit they bring. We like to think we can consciously change the flow of history, but we can't. What will be, will be.
  • It's been said that "nobody can know the hearts and minds of men". We make decisions every waking moment. Sometimes, facts are presented upon which we base our decisions. Sometimes, decisions are made seemingly out of the blue. Yet, few are interested in dissecting a human being's brain to understand exactly "How?" and "Why?" we arrive at our decisions.

    So, why are we so concerned about a Black Box AI making decisions we can't understand? Why should we insist on the ability to analyze their statistical compu

    • Do we deem ourselves superior because we have been "created" by our "God"?

      No, because our brains have evolved over billions of years, vs. about 30 years of AI development.

  • Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions.

    As can be witnessed here on /. whenever the topic of AI comes up.

  • "The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence."- Daniel Denett

    https://www.edge.org/response-... [edge.org]

    • I sort-of agree, but I also disagree.

      We humans are scientists and experimenters. In general, a subset of us will ALWAYS test how well something new works. If I'm a gambler and I set the AI up to pick horses, I'm going to make my predictions first and write them down, then let the AI make its picks. Then I'll compare. And I'll do that for weeks or months until I see just how the AI compares to me.

      The same goes for a whole lot of critical industries. As long as regulation gets updated to say, "You can use AI, but

  • As just about every religion in human history has repeatedly demonstrated, just about any weirdness becomes relatively unremarkable once sufficiently embedded into normative narrative.

    Consider, for example, the moral algorithm of Papal infallibility [wikipedia.org] wherein the black box is an invisible connection to the furtive man upstairs—still with fewer recorded public sightings than the Sasquatch. I think people forget just how weird digital algorithms seemed to the general public when they first appeared. Her
