Google Translate vs. Human: 0:1

Mar 26, 2010

In a recent New York Times Op-Ed, David Bellos sings the praises of machine translation tools such as the newest major entrant in the field, Google Translate. Unlike human translators (and interpreters!), computers are never “underpaid and overworked” (by the way, why is it that translators are paid so little, I wonder?!), and a machine translation system can be made available very quickly, as was the case with the system developed for Haitian Creole “in little more than a long weekend” in the aftermath of the terrible earthquake in Haiti.

Unlike earlier machine translation tools, which were based on the idea that language consists of words (“the lexicon”) and rules for putting those words together (“the grammar”), Google Translate is a statistical system that does not try to break down a sentence in one language and then reconstruct it in another. Instead, Google Translate trawls the web in search of “similar sentences in already translated texts somewhere out there on the Web” and then copies them. Of course, this works best for repeated formulas and for language pairings for which a considerable body of human-translated electronic texts exists. Anything that is not routine – anything creative, unusual, or simply less likely to appear in existing parallel translations – will trip up this “electronic magpie”.
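The “find a similar already-translated sentence and reuse its translation” idea can be sketched in miniature. The tiny translation memory and the similarity threshold below are invented purely for illustration; Google Translate’s actual statistical models, trained on millions of web documents, are vastly more elaborate than this toy lookup:

```python
import difflib

# Toy "translation memory": a handful of already-translated sentence pairs.
# (Invented examples; a real system would learn from huge parallel corpora.)
memory = {
    "good morning": "bonjour",
    "thank you very much": "merci beaucoup",
    "where is the train station": "où est la gare",
}

def translate_by_lookup(sentence, min_similarity=0.6):
    """Return the stored translation of the most similar known sentence,
    or None when nothing in memory is close enough (the 'magpie' fails)."""
    best, best_score = None, 0.0
    for source, target in memory.items():
        score = difflib.SequenceMatcher(None, sentence.lower(), source).ratio()
        if score > best_score:
            best, best_score = target, score
    return best if best_score >= min_similarity else None

print(translate_by_lookup("Good morning"))              # a routine formula: match found
print(translate_by_lookup("The magpie stole my ring"))  # a novel sentence: None
```

The sketch makes the limitation in the paragraph above concrete: a routine, previously seen phrase comes back with a plausible translation, while a creative or unusual sentence finds no close match and the system simply fails.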

While it is clear that Google Translate and similar systems can be useful in some domains, their limitations are just as clear. Nor should we expect machine translation tools – whether Google Translate or the older “deconstruct-and-rebuild” systems – to make the same types of mistakes as human translators. Hence, I disagree with David Bellos, who claims that machine translation’s “legendary bloopers are often no worse than the errors made by hard-pressed humans”. These bloopers are indeed worse, in that they cannot easily be edited out and often require retranslation of the text. Nor is David Bellos right in declaring that Google Translate “simulates — but only simulates — what we suppose goes on in a translator’s head”. It is not even close! We human translators do not think in terms of statistical patterns or previously translated texts. Instead, we deconstruct the text, visualize what it says, and then reconstruct that meaning in another language. In that sense, the older machine translation systems came closer to imitating what a human translator does. The problem they stumbled over was the computer’s inability to understand things in context (even the newer computational systems being developed to “understand” inferences and the like do not come close to human abilities in that respect). Google Translate avoids the problem of contextualizing meaning by not really translating at all, but rather by “plagiarizing” ready-made translations.

Thus, despite the challenges presented by translation – literary or otherwise – humans are still way ahead of computers. Maybe it’s because language is what makes us human?
