Google's New Translation Software Powered By Brainlike Artificial Intelligence (sciencemag.org) 88

sciencehabit quotes a report from Science Magazine: Today, Google rolled out a new translation system that uses massive amounts of data and increased processing power to build more accurate translations. The new system, a deep learning model known as neural machine translation, effectively trains itself -- and reduces translation errors by up to 87%. When compared with Google's previous system, the neural machine translation system scores well with human reviewers. It was 58% more accurate at translating English into Chinese, and 87% more accurate at translating English into Spanish. As a result, the company is planning to slowly replace the system underlying all of its translation work -- one language at a time. The report adds: "The new method, reported today on the preprint server arXiv, uses a total of 16 processors to first transform words into a value known as a vector. What is a vector? 'We don't know exactly,' [Quoc Le, a Google research scientist in Mountain View, California, says.] But it represents how related one word is to every other word in the vast dictionary of training materials (2.5 billion sentence pairs for English and French; 500 million for English and Chinese). For example, 'dog' is more closely related to 'cat' than 'car,' and the name 'Barack Obama' is more closely related to 'Hillary Clinton' than the name for the country 'Vietnam.' The system uses vectors from the input language to come up with a list of possible translations that are ranked based on their probability of occurrence. Other features include a system of cross-checks that further increases accuracy and a special set of computations that speeds up processing time."
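To make the "relatedness" idea concrete, here is a tiny cosine-similarity sketch. The three-dimensional vectors and the cosine helper below are invented purely for illustration (they are not embeddings from Google's system, which learns vectors with hundreds of dimensions from billions of sentence pairs), but the ranking they produce mirrors the dog/cat/car example in the summary.

```python
# Toy illustration of word-vector "relatedness" -- the values below are
# hand-picked 3-dimensional vectors, NOT embeddings from Google's model.
import numpy as np

vectors = {
    "dog": np.array([0.9, 0.8, 0.1]),
    "cat": np.array([0.8, 0.7, 0.2]),
    "car": np.array([0.1, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for other in ("cat", "car"):
    print(f"dog vs {other}: {cosine(vectors['dog'], vectors[other]):.3f}")
# Prints a higher score for dog/cat than for dog/car, mirroring the
# "dog is more closely related to cat than car" example in the summary.
```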
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    We just don't know

  • by Anonymous Coward

    Take a reasonably complex document and translate it back and forth between two languages like 5 times. When/if the resulting document is still readable and preserves the content of the original document, I'll consider their new system a success. Until then, automated translation is a pipe dream.

    The current version of google translate (and all other systems I've tried) fails spectacularly when doing this.

    • by Anonymous Coward
      Do you think that a chain of human translators would even be able to pass this kind of test?
      • by Godwin O'Hitler ( 205945 ) on Tuesday September 27, 2016 @06:33PM (#52973003) Journal

        A real professional, conscientious translator will make sure their translation is unambiguous, even if the original isn't. We don't have the right to practise GIGO.
        So provided there were only conscientious professional translators in the chain, yes, they'd pass the test easily.
        Having said that, I don't believe the bouncy translation method is a good yardstick at all. A good translation isn't judged on its repeatability.

      • Yes, they will. We used to do this for translating medical papers. The paper would go from English to Chinese, and then that would be sent to a different translation company to be re-translated into English. The derived English would then be subject to the same level of clinical and copy-editor review as the original. If it didn't pass, the Chinese translation would be rejected.

        It's extremely expensive, but it can be done. We did this with millions of words, and it cost millions of pounds. No automatic translation could do that.

        • Okay, but that is just a single pass from English->Chinese->English, and you admit that it is not always successful and might get rejected. The original proposal was FIVE TIMES, without the crib of being able to compare back to the original source at each step. Surely you will agree based on your experience with a "single pass" of this translation that if they went back and forth five times, never being able to reference back to an earlier version, that things would go off the rails.
    • Electric translition of exciting product of our company google writes true word all times.

  • to find out what it is
  • by deathcloset ( 626704 ) on Tuesday September 27, 2016 @05:00PM (#52972487) Journal
    Sounds an awful lot like the WordNet similarity vector, which is commonly used in semantic analysis and is a measure of the 'relatedness' of words - http://search.cpan.org/dist/Wo... [cpan.org] (a quick WordNet example via NLTK appears at the end of this thread).
    • The "we don't know exactly" quote wreaks of bad journalism. It was probably taken way out of context because the author didn't understand what they were being told or they asked a slightly different question than what they wrote.

    • Yup. We were doing this at PointCast way back in 1996. The answer to the question "What is a vector?" shows an astonishing amount of ignorance on the part of the PR person paid to talk to the reporter.
    • It sounds like WordNet, Word2Vec, and a bunch of other stuff in the area. It is actually described in quite some detail in the article linked from the page in the summary: http://arxiv.org/pdf/1609.0814... [arxiv.org]

    • The summary is complete gibberish. For anyone interested, Google's own paper describing their NMT architecture is here:

      http://arxiv.org/abs/1609.08144 [arxiv.org]

      and a Google Research blog entry describing its production rollout (initially for Chinese-English) is here:

      https://research.googleblog.com/2016/09/a-neural-network-for-machine.html [googleblog.com]

      The executive summary is that this is a "seq2seq" artificial neural net model using an 8-layer LSTM (a variety of recurrent neural network) to encode the source sentence into a vector representation, which a decoder network then turns into the target-language sentence. A stripped-down sketch of that encode/decode idea follows at the end of this thread.

    • technically it's an eigenvector; journalists don't get that kind of training or exposure...
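Following on from the seq2seq description above, here is a bare-bones encoder/decoder sketch. It is not Google's GNMT architecture (which stacks eight LSTM layers and adds attention, residual connections, and more); the class name TinySeq2Seq and the vocabulary and dimension constants are arbitrary toy values, and PyTorch is used here purely as an illustrative choice because it makes the encode-to-a-state / decode-from-that-state shape easy to see.

```python
# Minimal sequence-to-sequence sketch (toy sizes, not GNMT).
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1200, 32, 64  # arbitrary toy sizes

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)  # scores over target-language words

    def forward(self, src_ids, tgt_ids):
        # The encoder reads the source sentence and compresses it into a state.
        _, state = self.encoder(self.src_emb(src_ids))
        # The decoder starts from that state and emits one score vector per
        # target position; the highest-scoring word at each step is the output.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)

model = TinySeq2Seq()
src = torch.randint(0, SRC_VOCAB, (1, 7))   # one source sentence, 7 token ids
tgt = torch.randint(0, TGT_VOCAB, (1, 9))   # decoder input, 9 token ids
print(model(src, tgt).shape)                # torch.Size([1, 9, 1200])
```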
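And for comparison with the WordNet::Similarity suggestion at the top of this thread, roughly the same "relatedness" query can be made against WordNet itself. The sketch below uses NLTK's Python WordNet interface instead of the Perl module linked above (purely a convenience swap); synset names like dog.n.01 are standard WordNet identifiers, and a one-time nltk.download("wordnet") is required.

```python
# WordNet relatedness via NLTK (a hand-built lexical database, in contrast
# to vectors learned from sentence pairs). Run nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

dog = wn.synset("dog.n.01")   # standard WordNet synset identifiers
cat = wn.synset("cat.n.01")
car = wn.synset("car.n.01")

# Path similarity scores lie in (0, 1]; synsets that are closer together in
# the hypernym hierarchy score higher, so dog/cat should beat dog/car here too.
print("dog-cat:", dog.path_similarity(cat))
print("dog-car:", dog.path_similarity(car))
```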
  • This AI hype has to stop. Neural networks are nothing like how the brain works. We have known that since 1975 at least! The only thing more annoying than a space nutter is an AI nutter.
    • This AI hype has to stop. Neural networks are nothing like how the brain works. We have known that since 1975 at least! The only thing more annoying than a space nutter is an AI nutter.

      There you go again.

      AI-assisted translation is only going to get better and better and better as time goes on. It won't happen tomorrow or next week or next month, but come back in 5 years and I'd bet that it'll be a whole different ball game.

      As someone who saw the idea of a portable phone go from "pipe dream" to "something you can buy for $9.95 at Walmart", I've learned not to scoff or say stuff like this can't be done. It will be done, just not at the breathless pace the press releases would like you to believe.

      • Imagine I started calling a blender an "artificial digestive system" that mimics human digestion. Would you buy that? Not if you're a biologist. Where are the enzymes? Where are the biochemical pathways? Where is the nutrient separation and distribution network? Where, indeed, is the anus?

        Yet my blender claim is more accurate, by far, than the claim that Artificial Intelligence mimics biological intelligence. The operative word here is "intelligence." We're talking actual cognition, not pre-programmed responses.
        • The operative word here is "intelligence."

          Yes, defining "intelligence" is a key item here. What does it actually mean, and how can we say whether something is "intelligent" or not? It's a bit of a fuzzy area to say the least. Without a clear definition of what "intelligence" means, we're all just guessing.

          A couple of things:

          First, I think that a sufficiently sophisticated system could mimic intelligence even though it wouldn't actually be intelligent (whatever that means). No, it wouldn't be truly intelligent or genuine "AI", but it could be good enough to be useful.

          • By "mimic intelligence" I meant to operate in the same way biological intelligence does. Perhaps I should have said "replicate intelligence" to be totally clear. And you're right, we can't replicate something we can't take apart and explain.

            Your belief that we will eventually develop genuine AI seems premature, since we don't yet understand intelligence. What if, for example, the brain is just a transceiver that communicates with the true seat of intelligence, which happens to be in another dimension that we can't detect?
            • I think we should focus on defining intelligence rather than jumping to the end game of creating one.

              I agree.

              At the same time, though, by attempting to mimic it or create it we may discover something along the way that helps us define it or understand it. If we needed to completely understand something before we tried to create it we'd be way behind where we are now in all sorts of fields. Sometimes the failures teach things that lead to successes.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      To be fair "deep learning" is a concept that could potentially spell the doom of traditional AI programs. The first problem for AI is that these learning networks don't use internal representations to do the "thinking", they perform analog computations which are just as mystifying as biological brains, hence the "what is a vector? We don't know" comment. The second problem is that in order to train one of the networks how to do something, you have to create the lessons that teach the subject you want it to

      • To be fair, symbol-processing human intelligence is a dying dream -- just one of the mistakes that Piaget made.
  • tldr (Score:2, Informative)

    by Anonymous Coward

    They're doing context-aware translation based on massive, well-organized training sets and clever search algorithms. Woo.

  • by ffkom ( 3519199 ) on Tuesday September 27, 2016 @05:09PM (#52972565)
    Neither the Slashdot summary nor TFA contains a URL where we can try this newly rolled-out translator. Does this imply it's already used by translate.google.com? If so, I haven't noticed any improvements yet.
    • The first link in the summary is to Google's blog. The first translation rollout is Chinese to English -- all such translations from now on will be carried out by our new connectionist translation overlords, and I for one welcome them. What I'm curious about is how it will handle languages that had insufficient data for old-style Google Translate's statistical translation engine. Will this do better, or will it be even more sensitive to small datasets?
    • by Fwipp ( 1473271 )

      It's rolled out for Chinese->English, with more on the way.

      The Google Translate mobile and web apps are now using GNMT for 100% of machine translations from Chinese to English—about 18 million translations per day. The production deployment of GNMT was made possible by use of our publicly available machine learning toolkit TensorFlow and our Tensor Processing Units (TPUs), which provide sufficient computational power to deploy these powerful GNMT models while meeting the stringent latency requirements of the Google Translate product. Translating from Chinese to English is one of the more than 10,000 language pairs supported by Google Translate, and we will be working to roll out GNMT to many more of these over the coming months.

  • Hopefully when Google's network becomes sentient, it will follow their "don't be evil" motto a little more closely than the humans running things.

  • The test of the Lion-Eating Poet of the Stone Den?

  • Google Translate the following:

    "I ate steak at John's place" -> Chinese -> Russian -> French -> German -> Japanese -> Italian -> English

    "I ate the steak instead of John"

    Good enough for not getting eaten.
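For anyone who wants to run this telephone-game test themselves, here is a small harness. The translate() function is a hypothetical placeholder -- no particular translation API or library is assumed -- so you would wire it to whatever backend you actually use; the language chain and ISO codes simply mirror the example in the comment above.

```python
# Round-trip ("telephone game") translation harness.
# NOTE: translate() is a hypothetical stand-in, not a real API -- plug in
# whatever translation backend you actually have access to.
def translate(text: str, src: str, dst: str) -> str:
    raise NotImplementedError("wire this up to a real translation service")

def round_trip(text: str, chain: list, home: str = "en") -> str:
    """Push text through each language in chain, then translate back to home."""
    current, lang = text, home
    for nxt in chain:
        current = translate(current, src=lang, dst=nxt)
        lang = nxt
    return translate(current, src=lang, dst=home)

# The chain from the comment above (ISO language codes assumed):
# round_trip("I ate steak at John's place", ["zh", "ru", "fr", "de", "ja", "it"])
```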

  • by mbeckman ( 645148 ) on Tuesday September 27, 2016 @09:16PM (#52973675)
    How can this translation software be "brainlike"? Let's see... It doesn't translate the way human brains do...it produces results a small fraction of the quality a human brain produces...and, it can be fooled by trivial procedures like reverse-then-forward translation, where human brains are not fooled.

    I know brains, and those ain't no brains.
  • Maybe 25 years ago, the breakthrough in machine translation was to use statistical techniques. The United Nations provided a nice, accessible corpus of texts manually translated into different languages for initial learning.

    Statistics is the old word for learning -- it is all about learning patterns from data.

    Maybe the new version of Google Translate is better, and maybe somewhere within it it actually uses an artificial neural network, although that would seem an odd use of that particular machine learning technology.

  • ... with 'would you like to translate this page?' on every page.

  • Zee Google search for your business and your business is a brain
