
Google Open-Sources GPipe, a Library For Training Large Deep Neural Networks (venturebeat.com) 22

An anonymous reader quotes a report from VentureBeat: Google's AI research division today open-sourced GPipe, a library for "efficiently" training deep neural networks (layered functions modeled after neurons) under Lingvo, a TensorFlow framework for sequence modeling. It's applicable to any network consisting of multiple sequential layers, Google AI software engineer Yanping Huang said in a blog post, and allows researchers to "easily" scale performance. As Huang and colleagues explain in an accompanying paper ("GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism"), GPipe implements two nifty AI training techniques. One is synchronous stochastic gradient descent, an optimization algorithm used to update a given AI model's parameters, and the other is pipeline parallelism, a task execution system in which one step's output is streamed as input to the next step.
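
To make the second technique concrete, here is a minimal, framework-free Python sketch of pipeline parallelism over a sequentially partitioned model. The Stage class, device_id field, and toy layers are illustrative names only, not part of the GPipe or Lingvo API, and the loop runs serially to show the data flow; real pipelining overlaps the stages in time.

    # A minimal, framework-free sketch of pipeline parallelism over micro-batches.
    # "Stage", "device_id", and the toy layers are illustrative, not GPipe/Lingvo API.
    from typing import Callable, List

    Layer = Callable[[float], float]

    class Stage:
        """One partition of a sequential model, notionally pinned to one accelerator."""
        def __init__(self, layers: List[Layer], device_id: int):
            self.layers = layers
            self.device_id = device_id

        def forward(self, x: float) -> float:
            for layer in self.layers:
                x = layer(x)
            return x

    def pipeline_forward(stages: List[Stage], micro_batches: List[float]) -> List[float]:
        """Stream each micro-batch through the stages in order: the output of stage k
        becomes the input of stage k+1. This loop runs serially; on real hardware the
        stages overlap, each working on a different micro-batch at the same time."""
        outputs = []
        for mb in micro_batches:
            activation = mb
            for stage in stages:
                activation = stage.forward(activation)
            outputs.append(activation)
        return outputs

    # Example: a 4-layer "model" split across 2 stages, fed 4 micro-batches.
    model = [Stage([lambda x: x + 1.0, lambda x: x * 2.0], device_id=0),
             Stage([lambda x: x - 3.0, lambda x: x * 0.5], device_id=1)]
    print(pipeline_forward(model, [1.0, 2.0, 3.0, 4.0]))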

Most of GPipe's performance gains come from better memory allocation for AI models. On second-generation Google Cloud tensor processing units (TPUs), each of which contains eight processor cores and 64 GB memory (8 GB per core), GPipe reduced intermediate memory usage from 6.26 GB to 3.46 GB, enabling a single accelerator core to train 318 million model parameters. Without GPipe, Huang says, a single core can only train up to 82 million model parameters. That's not GPipe's only advantage. It partitions models across different accelerators and automatically splits miniature batches (i.e., "mini-batches") of training examples into smaller "micro-batches," and it pipelines execution across the micro-batches. This enables cores to operate in parallel and to accumulate gradients across the micro-batches, thereby preventing the partitioning from affecting model quality.
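
As a rough illustration of that last point, here is a small Python sketch, not GPipe code, with a made-up one-weight model and helper names, showing a mini-batch split into micro-batches with gradients accumulated across them before a single synchronous update, so the split does not change the result of the step.

    # A toy sketch of mini-batch -> micro-batch splitting with gradient accumulation.
    # The one-weight model and function names are made up for illustration; this is
    # not GPipe's API, only the accumulation idea described above.

    def grad(w: float, x: float, y: float) -> float:
        """d/dw of 0.5 * (w*x - y)^2 for a single example."""
        return (w * x - y) * x

    def train_step(w: float, mini_batch, num_micro_batches: int, lr: float = 0.01) -> float:
        size = len(mini_batch) // num_micro_batches
        micro_batches = [mini_batch[i * size:(i + 1) * size] for i in range(num_micro_batches)]

        accumulated = 0.0
        for mb in micro_batches:          # in GPipe these are pipelined across cores
            for x, y in mb:
                accumulated += grad(w, x, y)

        # One synchronous update from the gradient summed over all micro-batches,
        # so splitting the batch does not change the result of the step.
        return w - lr * accumulated / len(mini_batch)

    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
    w = 0.0
    for _ in range(100):
        w = train_step(w, data, num_micro_batches=2)
    print(round(w, 3))  # converges toward 2.0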

This discussion has been archived. No new comments can be posted.


  • by JMZero ( 449047 ) on Tuesday March 05, 2019 @07:02PM (#58222004) Homepage

    This software might be very useful for developing computer systems to solve difficult tasks. Sure, whatever... but we can't let that blind us to what's really important here.

    What's truly important here is dissecting exactly what to call these systems. Using the term "neural network" makes sense on pretty much every level, and it would allow us to communicate clearly about what type of algorithms are being used, but that term (and especially "deep neural network") makes all my personal bugaboos about AI flare up. These are computers not brains, so there are no neurons, so we can't use that word.

    And yeah, obviously I recognize that, for most people, Google's efforts here are exactly what people mean when they talk about AI - but that makes me angry so nobody should do it. I've decided, for no good reason, that the term AI should only be used when describing an intelligence that works just the same as a human. I have weird quasi-spiritual hangups about all this, which I think you're all obliged to respect.

    PS: Also, just a reminder, Google has accomplished nothing, these systems are useless and aren't improving. All just unimpressive hype. I hate technology and change, please stop. Thanks in advance for never mentioning something like this again.

    • by epine ( 68316 )

      I've decided, for no good reason, that the term AI should only be used when describing an intelligence that works just the same as a human.

      Unfortunately, human intelligence is insufficient (so far as we can tell) to evaluate the predicate "just the same as a human". Which makes things very simple, until we have actual wetware clones that not even Blade Runner can tell apart from the "real" thing.

      I've refused to use the term "AI" in my own notes since the 1980s. I've been using "AC" instead (for artificial c

  • enabling 318 million parameters on a single accelerator core. Without GPipe, Huang says, a single core can only train up to 82 million model parameters

    Is that what's really going on in the human brain -- hundreds of millions of "model parameters" are getting trained up?

    I doubt it. With this approach, AI researchers are following a road to something completely different from human intelligence. And I'll bet that something will be far more limited than human intelligence.

    • With this approach, AI researchers are following a road to something completely different from human intelligence.

      Which is perfectly fine. If you can disregard the debate around the term AI, this road can lead to (and already has led to) *better* solutions than humans can provide unassisted - for specific problems and quality definitions.

      I don't believe that having one machine, one algorithm to solve all different problems is the right aim at all. And I definitely don't think that a human brain would have been the best target model if that was the aim.

      Making a proper synthetic brain implementation is an interesting endeavour

    • I doubt it.

      I too.

