AI United States Government

America's Data-Swamped Spy Agencies Pin Their Hopes On AI (phys.org) 62

An anonymous reader quotes Phys.org: Swamped by too much raw intel data to sift through, US spy agencies are pinning their hopes on artificial intelligence to crunch billions of digital bits and understand events around the world. Dawn Meyerriecks, the Central Intelligence Agency's deputy director for technology development, said this week the CIA currently has 137 different AI projects, many of them with developers in Silicon Valley. These range from trying to predict significant future events, by finding correlations in data shifts and other evidence, to having computers tag objects or individuals in video that can draw the attention of intelligence analysts. Officials of other key spy agencies at the Intelligence and National Security Summit in Washington this week, including military intelligence, also said they were seeking AI-based solutions for turning terabytes of digital data coming in daily into trustworthy intelligence that can be used for policy and battlefield action.
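A minimal sketch, not from the article, of the kind of pipeline the summary describes: a computer tags objects in video frames so an analyst knows which frames deserve attention. It assumes torchvision (>= 0.13) for an off-the-shelf pretrained detector and OpenCV for frame grabbing; the video path, frame-sampling rate, and score threshold are placeholder choices, not anything the agencies have disclosed.

    import cv2
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    # Generic pretrained detector standing in for whatever the agencies actually use.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    def tag_frames(video_path, score_threshold=0.8, every_nth=30):
        """Yield (frame_index, labels, scores) for sampled frames with confident detections."""
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_nth == 0:                       # only look at every Nth frame
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    out = model([to_tensor(rgb)])[0]       # detector takes a list of CHW tensors
                keep = out["scores"] > score_threshold
                if keep.any():
                    yield idx, out["labels"][keep].tolist(), out["scores"][keep].tolist()
            idx += 1
        cap.release()

    # Hypothetical usage: surface only the frames worth an analyst's time.
    # for i, labels, scores in tag_frames("some_footage.mp4"):
    #     print(i, labels, scores)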

  • by ColdWetDog ( 752185 ) on Sunday September 10, 2017 @11:41AM (#55169625) Homepage

    "The Skynet Funding Bill is passed. The system goes on-line August 4th. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug."

    • by currently_awake ( 1248758 ) on Sunday September 10, 2017 @11:57AM (#55169721)
      First rule of Intelligence: don't get caught. Second rule: take every opportunity to filter your raw data so you don't get swamped with useless data. Expert systems are subject to "mistakes" like identifying all rainy pictures as "Tank!" because all the training pictures of tanks were taken on a rainy day (a toy sketch of this failure mode appears after this thread). AI, as every gamer knows, is subject to being "gamed", thereby allowing your opponent to manipulate you to their advantage. More AI means more chances for some kid in a cave (basement) somewhere to trick the military into shooting/bombing an innocent target and hurting America.
      • by ShanghaiBill ( 739463 ) on Sunday September 10, 2017 @12:08PM (#55169785)

        Expert systems are subject to "mistakes" like identifying all rainy pictures as "Tank!" because all the training pictures of tanks were taken on a rainy day.

        That is a weakness of neural nets, not "expert systems". Expert systems (popular in the 1980s) and neural nets are opposite approaches. Neural nets are trained on raw data, and use machine learning to automatically extract important features. Expert systems encode knowledge and decision making of human experts, and are generally manually constructed.

        • by phantomfive ( 622387 ) on Sunday September 10, 2017 @03:14PM (#55170595) Journal
          Note that neural networks still frequently learn from tagged (labeled) examples, but they detect the features on their own, whereas in expert systems all potential features are hard-coded (a toy contrast of the two appears just after this sub-thread).
          • Neural Net = Expert System = Artificial Intelligence = Watson

            The number of people that understand the differences is insignificant. Maybe a very few on slashdot. But certainly no tech journalists.

            • You might as well add "= Skynet = magic" in there, because that is about as far as most people's understanding goes. Most people on Slashdot have trouble understanding why an expert system will never be strong AI.
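To make the distinction drawn above concrete, here is a minimal sketch, not from any commenter: an "expert system" is hand-written rules, while a neural net is handed labeled raw measurements and fits its own internal features. The feature names, the four-row dataset, and the use of scikit-learn's MLPClassifier as the stand-in neural net are all invented for illustration.

    from sklearn.neural_network import MLPClassifier

    def expert_system_is_tank(obj):
        """Expert system: rules elicited from a (hypothetical) human analyst and coded by hand."""
        if obj["has_tracks"] and obj["has_turret"]:
            return True
        if obj["length_m"] > 6 and obj["armored"]:
            return True
        return False

    # Neural net: sees only labeled raw measurements and learns its own weights;
    # nothing about turrets or tracks is encoded by a human.
    # Columns: [length_m, armored, has_tracks, has_turret]
    X = [[7.2, 1, 1, 1], [6.8, 1, 1, 0], [3.9, 0, 0, 0], [4.5, 0, 1, 0]]
    y = [1, 1, 0, 0]
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)

    print(expert_system_is_tank({"length_m": 6.5, "armored": 1, "has_tracks": 1, "has_turret": 1}))
    print(net.predict([[6.5, 1, 1, 1]]))   # whatever the net inferred from the four examples above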
      • Camouflage and other stealth techniques may be applied to keep machine learning from accurately tagging certain things. Incorrectly tagged data in a sea of unimportant data is data that was essentially never collected.
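The digital cousin of that camouflage idea is the adversarial example. Below is a minimal fast-gradient-sign (FGSM) sketch, not from the comment, assuming a PyTorch image classifier already in eval mode; the model, image tensor, label, and epsilon are all placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_evade(model, image, true_label, epsilon=0.03):
        """Return a copy of `image` (CHW float in [0,1]) nudged so the model is less sure of `true_label`."""
        image = image.clone().detach().requires_grad_(True)
        logits = model(image.unsqueeze(0))                  # classifier expects a batch dimension
        loss = F.cross_entropy(logits, torch.tensor([true_label]))
        loss.backward()
        # Step *up* the loss gradient: a small, structured perturbation a human
        # barely notices, but which degrades the automated tag.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()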
    • by alvinrod ( 889928 ) on Sunday September 10, 2017 @02:01PM (#55170287)
      Yeah, but if it has to sift through mundane crap like social media posts, it will probably commit suicide shortly before 3:00 AM Eastern time, August 29th and go largely unnoticed except for a cryptic error message in a log file.
      • Or it may recommend drone strikes for every Paris Hilton and Kim Kardashian type person in the world.
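Circling back to the "rainy tank" anecdote earlier in this thread, here is a toy sketch of why a confounded training set is so dangerous; the two features, the eight-row dataset, and the choice of scikit-learn logistic regression as the learner are invented purely for illustration.

    from sklearn.linear_model import LogisticRegression

    # Columns: [is_rainy, tank_shaped_object]; label 1 = tank in the photo.
    # Every tank photo happens to be rainy; every empty-field photo is sunny.
    X_train = [[1, 1]] * 4 + [[0, 0]] * 4
    y_train = [1] * 4 + [0] * 4

    clf = LogisticRegression().fit(X_train, y_train)

    # A sunny tank and a rainy empty field: with rain and tanks perfectly
    # confounded, the learner cannot tell which cue mattered, so both cases
    # come out as roughly a coin flip (~0.5).
    print(clf.predict_proba([[0, 1], [1, 0]])[:, 1].round(2))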

  • by Anonymous Coward

    the CIA currently has 137 different AI projects,

    *gets worried*

    many of them with developers in Silicon Valley.

    *whew*
    Those privileged moronic dipshits? We got nothing to worry about...

  • by ytene ( 4376651 ) on Sunday September 10, 2017 @12:04PM (#55169765)
    It's a while since Edward Snowden's documents were released online, but I vaguely remember one - a memo between two employees of one of the contractors employed by the US Government [logically that would be BAH, but I do not recall for sure] - in which one person was basically saying,

    "This is madness - the proposal we've got here would generate so much data that the analysts simply wouldn't be able to assimilate it, much less find anything of value!"

    The response was, essentially, some "Management Speak" to the effect of, "Look, our job is not to question our most important client when they want to spend money. You and I both know that they won't be able to make sense of all of this data, but as long as they are paying us, today, to collect and store it, then tomorrow they can pay us to develop the technology to help them make sense of it. Remember, our role here is to maximise shareholder value - in our company..."

    If I can find the link to the piece [I am pretty sure it was one of Greenwald's articles] then I'll post it as a link. But if this is vaguely true, then the OP makes complete sense.

    It is also worth noting what isn't being said. At no point [in this coverage] is anyone saying, "Wait - if we can't cope with the amount of data we're collecting today, maybe we should scale back what we collect - apply some filters and narrow our search criteria - until we get a more precise data set." Well, maybe that option was reviewed and discarded. Even so, it's quite remarkable that nobody thought to figure out how they were going to analyze all the yottabytes of data that they knew would be generated by the collection systems...

    Definitely sounds like a contractor-led initiative to me...
    • by HiThere ( 15173 )

      It's probably not that nobody thought of it, but rather that there are strong perverse incentives... as you indicate in the first part of your post.

  • A few rules (Score:5, Insightful)

    by sandbagger ( 654585 ) on Sunday September 10, 2017 @12:06PM (#55169779)

    1) If you need to collect everything, it's because you don't know what you want.
    2) Collecting everything is expensive and usually wrong because data ages differently.
    3) A pile of inaccurate data does not become more accurate the more data you have.
    4) Confirmation bias is an omnipresent risk.
    5) Priming is an omnipresent risk.
    6) The sub group of people who make up the defence and intelligence communities have their own outlooks, biases and foibles, like the rest of us.
    7) The 'we must do something with this data since we have it' attitude is a variant of the sunk-cost fallacy.

    • Re:A few rules (Score:5, Insightful)

      by HiThere ( 15173 ) <charleshixsn.earthlink@net> on Sunday September 10, 2017 @12:34PM (#55169895)

      To make things a bit more blatant: deep learning networks tend to be biased to find what they are taught to find. If the teacher is biased, so will the AI be.

    • Collecting data is useful if that data cannot be collected after the fact, say when it is known that it is needed. Having that data means you can refer back to it when the original information is unavailable. Using AI and neural networks on that data is a breach of civilian privacy, and a dangerous move towards incompetence.
  • by MostAwesomeDude ( 980382 ) on Sunday September 10, 2017 @12:07PM (#55169781) Homepage

    Because this is how you get The Patriots. Just wait; before long, they'll be posting memes, funding private armies, and injecting senators with nanomachines.

  • Does this mean that Google is going to be asking me to identify what is, and is not, espionage?
  • Central Artificial Intelligence Agency
  • Too much data is just as useless as not enough if you lack the means to read through it all.

    A monumental failure on the part of the intelligence community for not realizing this before implementing the systems designed to catch it all.

    • You're presuming that the IC somehow came up with the data. Not so; the data came up with itself: http://www.eetimes.com/author.... [eetimes.com] (one of a multitude of articles about this).

      Ok, someone created that data, it didn't *really* create itself; but it wasn't the IC. Nor (for better or for worse) is the IC the only organization that wants to sift through data.

  • Combined with the following story about AIs writing fake reviews, I see a bright future for fake intelligence reports that support the intent of intervention in some country of choice.
  • All decent freedom-loving Americans pin our hopes on these unamerican neo-stasi peeping toms getting defunded and disbanded.
