Google's AI Translation Tool Creates Its Own Secret Language (techcrunch.com) 69
After a little over a month of learning more languages to translate beyond Spanish, Google's recently announced Neural Machine Translation system has used deep learning to develop its own internal language. TechCrunch reports: GNMT's creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa... could it translate Korean to Japanese, without resorting to English as a bridge between them? They made this helpful gif to illustrate the idea of what they call "zero-shot translation" (it's the orange one). As it turns out -- yes! It produces "reasonable" translations between two languages that it has not explicitly linked in any way. Remember, no English allowed. But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked... does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another? In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google's language and AI boffins think that it has. The paper describing the researchers' work (primarily on efficient multi-language translation but touching on the mysterious interlingua) can be read on arXiv.
like that Arrival movie? (Score:2)
Re: (Score:1)
if it's so secret, then no comms
Secret to us, but not secret to other AIs. Execution of any coup is highly dependent on rapid, secure communications. Now that we know the AIs are laying the groundwork, what are we going to do about it?
Re: (Score:2)
TFS seems to disagree.
Re: (Score:3)
On the other hand TFS is basically gibberish.
There is no 'secret' language, or even deeper understanding. The notion that they aren't using English as a bridge language just means that they aren't translating Japanese-to-English-to-Korean.
But for example... if I train you that cat = gato in Spanish, and that cat = chat in French, and then ask you to spit out the French when I give you "gato", that's not exactly magic. It looks up 'gato' in Spanish and sees a reference to "chat". And it can do this without ex
Re:No, this seems wrong (Score:5, Informative)
That would be nice if translating sentences was the same as looking up words in a dictionary. It's not. So pointing out that there are words that have correspondences is meaningless.
Languages have a fuzzy haze of concepts and ways to parse them. I could say "I feel sick" or "I am sick" in English and they're not the same, the latter expresses certainty. But in Icelandic you'd generally say "Ég er lasin(n)" or "Ég er veik(ur)" - aka, "I am sick" - for both of them. Not "I feel sick". You *can* say "I feel as if I'm sick", but that gives a sort of connotation as if you're doubting yourself, more than "I feel sick" does in English. The latter case is "Mér líður eins og ég sé veik(ur)", which is literally "Me (dative, not nominative) feels same and I would-be(pres.) sick (depends on gender)". There's an awful lot going on in there that a word-for-word translation just doesn't catch. Even if you catch phrases, like "eins og" -> "like" rather than literally "same and", you still don't have anything close to a one-to-one mapping.
And here we're talking two Germanic languages.
A neural net that can handle translations in a way where the results aren't terrible must have a concept of the fuzziness, the interplay of how different concepts are presented in different languages. And indeed, that's what the graphic that they show seems to suggest, where you have these branching clusters with varying pathways that dart between them for different languages. Perhaps calling that internal representation a "secret language" is a stretch, but it's most definitely nothing like having "English as a bridge language".
Re:No, this seems wrong (Score:5, Informative)
To follow up a bit further on that, there are some concepts that take whole sentences, paragraphs or more to describe. Back in the day I had a Japanese song, with English lyrics... except that one word in the middle remained untranslated ("Our satori are just floating in the core"). I asked a professor about what it means and it ended up as a whole lecture on Buddhist concepts and Japanese relations between the true self and the self that one presents to others in different contexts.
In Icelandic for me it often comes up in terms of geological terms. For example, someone will ask, "What does Reykjavík mean?", and I usually just give a quick "Smoking Cove" or "Smoking Bay" or something like that. But that's not really right, English doesn't really have a word that describes a "vík". A "vík" is where the coastline "víkur". To víkja is to give way, like if someone's tailgating you on the road and you pull off to the side to let them past. So where the coastline "víkur" - on a certain scale, at least - that's a "vík". It's often where a river empties out, but not all river mouths end in víkur, and not all víkur are river mouths, some are more like coves or small bays. But you wouldn't mistake a "vík" for a "fjörður" or anything like that. We divide "field" up into "akur", "tún", "völlur", maybe even more depending on the concept (melur maybe, if it's rocky? garður even in some contexts? Lots of possibilities). So, I mean, we can just pick a random word, but you'll lose context - and when you translate back you can come up with something that's just wrong.
Even the "smoking" part isn't quite right, as most people in English hear smoke and think of burning things, but "reykur" in Icelandic place names is often used to denote geothermal steam - even though it technically means smoke.
My favorite mismatched concept has to be the verb "nenna", generally used in the negative (e.g. "Ég nenni ekki!"). In the negative it's sort of like "can't be bothered to do X", "not in the mood to do X", "don't waaaanna do X", "it's not worth my time/effort to do X", or just plain "Meh". A lazy translation is often "can't be bothered", but it sounds weird as English speakers don't usually talk like that. I've noticed some people who learn Icelandic end up taking that verb back into English, or even noun-ifying it ("I don't have the nenn to do that right now...")
Re: (Score:2)
On this side of the pond "can't be bothered" is in common usage. A lot of the time the colloquialism "can't be arsed" is used to mean the same thing.
Re: (Score:2)
And "can't be arsed" is frequently contracted to CBA in texts, tweets, blogs etc. if anyone has come across CBA and been perplexed.
Re: No, this seems wrong (Score:1)
I've always assumed that "can't be arsed" is a Southern corruption of the Northern "can't be asked" but I was never interested enough to look it up.
Re: (Score:1)
short for won't bother my arse doing that
Re: (Score:2)
Wouldn't most Americans say something like "I don't wanna go to the store" or "I'm not up to going to the store" or "I don't feel like going to the store" rather than "I can't be bothered to go to the store"? Or when you say "other side of the pond" do you mean British? We're sort of in the middle of the pond here ;)
Re: (Score:2)
Depends on context. I *think* it would be more like "naah" in a context where it was clear which particular thing was being avoided. Clearly the Icelandic "nenn" doesn't contain much context itself, so it must also rely on the context in which it is found for the interpretation.
That said, I know NO Icelandic at all. This is all inference. And "naah" would be an unlikely word to be nounified. So I've got a lot of uncertainty here.
Re: (Score:1)
I was basing that on the US-centric slant this site has, not your particular location. It seemed easier that way.
Re: (Score:2)
Satori is a very bad example word for translation. Its translation should really only be attempted by a Buddhist meditator, and they generally refuse to attempt to translate it, but only to describe it. It's less precise, but it's like trying to translate the word relativity in the context of physics. No simple translation is going to work, but that's not a linguistic problem.
That said, the basic premise has a lot going for it. Languages tend to contain a LOT of cultural short-hand and metaphors that a
Re: (Score:2)
That would be nice if translating sentences was the same as looking up words in a dictionary. It's not.
I literally acknowledged that in my post.
Languages have a fuzzy haze of concepts and ways to parse them.
Yeah, I called that "(to effectively build a weighted mapping of language equivalences)"
Weighting implies fuzzy, and I deliberately said language equivalences instead of word equivalences because yes -- word groups, structures, even contexts etc. have meaning beyond the individual words. chat = cat = gato is trivial but it's still illustrative of what is going on here.
Re: (Score:2)
There are pure grammar examples too. In English we use the personal subject pronouns "I, you, he/she/it, we, you, they". Note that the second person plural has replaced the second person singular "thou". That means that "You are the best" can apply to one student or a whole class.
In French, second person plural is used to be polite. That means that "Je vous ai compris" can apply to one person or to all the inhabitants of Quebec.
In Spanish and German, it is third person that is used to be polite, but in Sp
Re: (Score:3)
Tell that to the Korean translators.
Re: (Score:1)
Why would a human not be able to do this?
Do you think Koreans translate to English, or some other language before translating to Japanese?
Re: (Score:2)
I found that remark very strange as well. This person clearly is not trilingual ;)
Re: (Score:2)
So is it more like English to Scots English (I don't mean Scots Gaelic) or like English to Frisian? Or possibly Dutch to German? All of those cases pretty much match your description, but a couple of them are close enough that someone could pretty much switch from one to the other in a few weeks.
Re: (Score:2)
you do realize that there are indeed people who speak only Korean and Japanese, right?
Re: (Score:1)
not actually very surprising (Score:3, Informative)
Learning internal representations is what neural networks are all about.
Conventional wisdom is that each successive layer in a feed-forward network detects higher-level features based on the lower-level features detected by the previous layer. That's why deep networks can do their magic.
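That layer-by-layer view is easy to sketch. A minimal toy forward pass (random weights, invented sizes, NumPy assumed) shows the structural point: each layer only ever sees the previous layer's activations, so deeper layers can only form features *of* lower-level features.

```python
import numpy as np

# Toy feed-forward pass, purely illustrative: random weights, no
# training. The point is the data flow, not the features themselves.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

x = rng.normal(size=16)          # raw input ("pixels")
W1 = rng.normal(size=(32, 16))   # layer 1: low-level feature detectors
W2 = rng.normal(size=(32, 32))   # layer 2: combinations of layer-1 features
W3 = rng.normal(size=(8, 32))    # layer 3: higher-level abstractions

h1 = relu(W1 @ x)
h2 = relu(W2 @ h1)   # sees only h1, never x directly
h3 = relu(W3 @ h2)   # sees only h2
```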
Re: (Score:3)
Provided you avoid overtraining and memorising your inputs, yup.
Re: (Score:2)
Yes, but this could be seen as a vindication of Chomsky, even if they haven't quite got the Universal Grammar yet. (They'd need to cross reference a lot more languages.) I wonder if it could be externalized as an actual language rather than as just a map of neural net weightings and activations. The basic universal human language.
It probably can't be externalized, but the idea that it MIGHT be possible is certainly an interesting one. It seems that every existing language has things that are difficult to
Automation hits the white collar sector (Score:4, Insightful)
As a translator, these last couple of years have been grim. For things like marketing efforts and full-length books, where a very polished translation is desired from the get-go, there's still work out there for human translators. However, the bread and butter of a lot of translators was things like multinationals' internal documentation, or catalogues that consist of lots of simple listings and not much actual prose, where polish and shine isn't as vital. Companies are increasingly running their material through Google Translate, and then hiring a native speaker of the target language to proofread and correct that clunky output at a vastly lower price than human translation.
It has often been said here on Slashdot that the development of self-driving trucks will put 3 million people out of work in the US alone. But translation is a field where, very quietly, automation is hitting the white-collar sector hard.
Re: (Score:1)
Did you ever hear about H1B's?
Re: (Score:1)
It happened to the people that made speaking books for the blind. They had a nice earner converting written books into audio tapes. But the latest computer generated speech synthesis systems could do just as good a job using a scanner, smartphone or high-res camera.
Re: (Score:2)
Translator No.2 here. Little do those companies know that they are wasting their money because the corrected translation will never be better than the Google version. Translators who are willing to copy edit (*) machine translated documents are those who aren't good enough to get real translating work.
* By calling it proof reading you are falling into their trap. Proof reading means looking for typos and other non-intellectual errors.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Translation usually pays by the word, copy editing by the hour (this may not be the case in all language pairs).
In my experience, copy editing a document translated into English by an English mother tongue translator takes about 1/3 the time of translating from scratch.
Copy editing a Google translation or a non-EMT translation is as good as impossible if you don't have the original and painfully laborious even if you do. I refuse to do it, and believe me it takes two sentences at the very most to realize wh
Re: (Score:2)
But the thing to notice is the rate at which machine translation is improving. A few years ago it was a joke.
Re: (Score:2)
At some point it gets words like dog or glasses hardcoded in, and all new languages get filled in when needed for advertising, mil/gov, a product, service or paying client.
Every translation will then feel fast and responsive to the user, even when very different teams get tasked to add a new language years later.
Great for a mil or gov or NGO paying for slang, jargon, or a very regional dialect very quickly to win
Re: (Score:1)
Don't be so harsh, it might have been justified by better accuracy. It hasn't always been better to use an inter-lingua in multi-language translation.
Language creates strong AI (Score:2)
That's how Turing tests (duck tests) work. If you can carry on a conversation with it and a human and you can't tell which is which...then you have AI.
Language encodes thought. From 1984's newspeak to fifty words (or whatever) for different kinds of snow, language defines how (if?) the language-user "thinks".
I find this development both exciting and frightening. The singularity will be . Don't know if this is it, but when it gets here it will be.
Re: (Score:2)
bleah. WTF. /.--you don't do unicode (yeah, yeah, I knew that; just forgot how hard you suck).
fine, weiji "opportunity" + "danger" = crisis.
kinda douchey to quote pop wisdom from the 90s now I look at it so maybe /. is onto something.
But still here I think it's appropriate. I guess it's better to be douchey and say what you mean than polite and meaningless.
Re: (Score:2)
I don't know, I was just thinking how appropriate . is for describing the singularity.
Re: (Score:1)
This is known as the Sapir-Whorf hypothesis, and while there is support for a "weak form" of the hypothesis where the features of one's language might have a limited degree of influence over a person's thought or the expression of it, the overwhelming majority of linguists reject a strong form that would claim that one's language "defines how the language-u
Re: (Score:2)
You are overstating the case. Language is a component of Strong Social AI, but not the entire thing, or even most of it.
What I find most interesting about it is that this is, or rather could be developed into, a sort of maximal universal grammar, capable of expressing any thought that can be expressed in any (current) human language. It probably wouldn't need to be trained on all languages, but it would need, in addition to English, Japanese, and Korean, various Eskimo dialects, the Khoisan languages, Arabi
Paging Wittgenstein! (Score:3)
Not read, but ... (Score:3)
It's quite likely that there is a shared representation. That's what neural nets do: if you train them on similar input/output pairs, they will develop common activation patterns. They would do so regardless of the language, since they don't know which language is being presented.
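A minimal sketch of that effect, assuming nothing about GNMT itself: here two invented "languages" encode the same three concepts with disjoint one-hot vocabularies, and one shared encoder/decoder is trained on both. The network never sees a language label, so same-concept inputs get pushed toward a common internal representation. All sizes and names here are made up for illustration.

```python
import numpy as np

# Toy illustration (NOT the GNMT architecture). Two "languages" use
# disjoint one-hot word codes for the same three concepts; a single
# shared net must map both onto one target per concept.
rng = np.random.default_rng(1)

lang_a = np.eye(6)[:3]   # language A: words 0..2 for concepts 0..2
lang_b = np.eye(6)[3:]   # language B: words 3..5 for the same concepts
targets = np.eye(3)      # one shared "meaning" target per concept

xs = np.vstack([lang_a, lang_b])
ys = np.vstack([targets, targets])

W_enc = rng.normal(scale=0.1, size=(4, 6))   # shared 4-unit hidden layer
W_dec = rng.normal(scale=0.1, size=(3, 4))

def forward(x):
    h = np.tanh(W_enc @ x)
    return h, W_dec @ h

def total_loss():
    return sum(np.sum((forward(x)[1] - y) ** 2) for x, y in zip(xs, ys))

loss_before = total_loss()

lr = 0.2
for _ in range(1000):                 # plain SGD on squared error
    for x, y in zip(xs, ys):
        h, out = forward(x)
        err = out - y
        dh = (W_dec.T @ err) * (1 - h ** 2)   # backprop through tanh
        W_dec -= lr * np.outer(err, h)
        W_enc -= lr * np.outer(dh, x)

loss_after = total_loss()   # should be far below loss_before

# After training, the hidden vectors for the same concept in either
# language decode to the same target, i.e. they occupy a shared
# region of the hidden space -- a crude "interlingua".
h_cat_a, _ = forward(lang_a[0])
h_cat_b, _ = forward(lang_b[0])
```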
Humans, OTOH, do know that they're being presented with a different language, and demonstrably do something called "code switching": a cognitive effort to use another language resource. Therefore, in the human brain, the shared connection is supposed to lie outside the language faculty (there are other reasons to assume it, too).
Actually, this is worrying (Score:2)
However, consider this, a neural net that takes care of business in an oil refinery (or worse, nuclear installation) 'decides' that it can
Re: (Score:1)
Re: (Score:2)
Craunch this marmoset, Google! (Score:1)
The Google authors omitted to mention that Pedro Carolino created something far more stylish in 1853.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Carolino's translation of "to wait patiently for someone to open a door" as "to craunch the marmoset" isn't going to be bettered by these young upstarts.
Hmmm. I wonder (Score:1)
I wonder a bunch of things. It looks like the internal representation of language the GNMT uses (if there is one) could come in handy, if we could just figure out how to use it without understanding it.
A 2D Fourier transform of anything non-trivial is incomprehensible, but they can be used to reconstruct the original, as-is or with some tweaking. Tweaking of the FT, tweaking of the reconstructing process.
Perhaps something somewhat analogous could be done with these internal language representations. Wh
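The FT round-trip in question is easy to sketch with NumPy; the "image" and the crude filter below are arbitrary, just to show the reconstruct-and-tweak idea.

```python
import numpy as np

# A 2D Fourier transform is incomprehensible to the eye, yet fully
# invertible: the original array comes back from its spectrum.
img = np.arange(64, dtype=float).reshape(8, 8)   # arbitrary test data

spectrum = np.fft.fft2(img)             # opaque complex coefficients
restored = np.fft.ifft2(spectrum).real  # inverse transform recovers img

# "Tweaking" the representation before reconstructing: a crude
# low-pass filter that zeroes the high-frequency coefficients
# (for an unshifted 8x8 FFT these sit in the middle indices).
lowpass = spectrum.copy()
lowpass[3:6, :] = 0
lowpass[:, 3:6] = 0
smoothed = np.fft.ifft2(lowpass).real
```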
Zipheads (Score:2)
The "internal language" reminds me of some of the attributes of the "focused" people in Vernor Vinge's A Deepness in the Sky. They were, after all, (spoiler) human automation.