In The Hitchhiker’s Guide to the Galaxy, Douglas Adams’s seminal 1978 BBC radio series (then book, feature film and now cultural icon), one of the many technology predictions was the Babel fish. This tiny yellow life-form, inserted into the human ear and fed by brain energy, could translate to and from any language.
Web giant Google has now seemingly developed its own version of the Babel fish, called Pixel Buds. These wireless earbuds make use of Google Assistant, a smart application that can speak to, understand and assist the wearer. One of the headline abilities is support for Google Translate, which is said to be able to translate between up to 40 different languages. Impressive technology for under US$200.
So how does it work?
Real-time speech translation consists of a chain of several distinct technologies – each of which has experienced rapid improvement in recent years. The chain, from input to output, goes like this:

1. Voice activity detection (VAD), to work out when someone is speaking and when they stop.
2. Language identification (LID), to determine which language is being spoken.
3. Automatic speech recognition (ASR), to turn the spoken words into text.
4. Natural language processing (NLP) machine translation, to map that text into the target language.
5. Speech synthesis, or text-to-speech (TTS), to speak the translated words aloud.
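To picture how these blocks hand data to one another, here is a minimal sketch of the chain in Python. Every function is a hypothetical placeholder standing in for a full speech or translation model; the sketch shows the shape of the pipeline, not Google's actual implementation.

```python
# A hypothetical sketch of the five-block chain. Each function is a stub
# standing in for a full model; only the data flow between blocks is shown.

def vad_trim(audio: bytes) -> bytes:
    """Voice activity detection: keep only the span where speech is present."""
    ...

def identify_language(audio: bytes) -> str:
    """Language identification: return a language code such as 'zh' or 'en'."""
    ...

def speech_to_text(audio: bytes, lang: str) -> str:
    """Automatic speech recognition in the identified language."""
    ...

def translate_text(text: str, source: str, target: str) -> str:
    """NLP machine translation from the source to the target language."""
    ...

def text_to_speech(text: str, lang: str) -> bytes:
    """Speech synthesis (text-to-speech) in the target language."""
    ...

def translate_utterance(audio: bytes, target: str) -> bytes:
    """One utterance through the whole chain: audio in, translated audio out."""
    speech = vad_trim(audio)
    source = identify_language(speech)
    text = speech_to_text(speech, source)
    translated = translate_text(text, source, target)
    return text_to_speech(translated, target)
```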
Now that we have the five blocks of technology in the chain, let’s see how the system would work in practice to translate between two languages such as Chinese and English.
Once ready to translate, the earbuds first record an utterance, using a VAD to identify when the speech starts and ends. Background noise can be partially removed within the earbuds themselves, or once the recording has been transferred by Bluetooth to a smartphone. The recording is then compressed so that it occupies much less data, and conveyed over WiFi, 3G or 4G to Google’s speech servers.
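As a flavour of what the VAD step involves, here is a toy energy-based detector in Python. Real systems use far more robust statistical or neural detectors, and the sample rate, frame length and threshold below are arbitrary assumptions.

```python
import numpy as np

def simple_vad(samples: np.ndarray, rate: int = 16000,
               frame_ms: int = 20, threshold: float = 0.01) -> tuple[int, int]:
    """Return (start, end) sample indices of the voiced region.

    Assumes `samples` are floats normalised to [-1, 1]; a frame counts as
    speech when its mean energy exceeds a fixed threshold. Toy example only.
    """
    frame_len = rate * frame_ms // 1000
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(np.float64) ** 2).mean(axis=1)
    voiced = np.where(energy > threshold)[0]
    if voiced.size == 0:
        return 0, 0  # no speech found in the recording
    return int(voiced[0] * frame_len), int((voiced[-1] + 1) * frame_len)
```

In the earbuds, this kind of check would run continuously on short frames as the audio arrives, so recording can stop as soon as the speaker does.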
Google’s servers, operating as a cloud, will accept the recording, decompress it, and use LID technology to determine whether the speech is in Chinese or in English.
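One way to picture the LID step is as a competition between per-language models: each one scores the recording, and the highest scorer wins. The scoring functions in this sketch are hypothetical stand-ins for trained models; real systems are typically neural classifiers trained on many hours of labelled speech.

```python
from typing import Callable, Dict

def pick_language(audio: bytes,
                  scorers: Dict[str, Callable[[bytes], float]]) -> str:
    """Return the language code whose model gives the recording the best score.

    `scorers` maps a language code ('zh', 'en', ...) to a hypothetical
    function that returns something like a log-likelihood for that language.
    """
    scores = {lang: score(audio) for lang, score in scorers.items()}
    return max(scores, key=scores.get)

# Toy usage with made-up constant scorers standing in for trained models:
detected = pick_language(b"...", {"zh": lambda a: -42.0, "en": lambda a: -57.3})
# detected == "zh", so the Chinese-to-English path would be used
```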
The speech will then be passed to an ASR system for Chinese, then to an NLP machine translator set up to map from Chinese to English. The result will finally be sent to TTS software for English, producing a compressed recording of the translated speech. This is sent back in the reverse direction to be replayed through the earbuds.
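Wiring the server-side steps together, the translation direction simply follows the LID result, so the same path handles both speakers in a conversation. The sketch below reuses the placeholder functions from the chain sketch above, with decompress and compress as equally hypothetical stand-ins for the audio codec.

```python
def decompress(data: bytes) -> bytes: ...   # placeholder audio codec steps
def compress(audio: bytes) -> bytes: ...

def translate_on_server(compressed_audio: bytes) -> bytes:
    """Hypothetical cloud-side flow for one utterance, in either direction."""
    audio = decompress(compressed_audio)
    source = identify_language(audio)                   # 'zh' or 'en'
    target = "en" if source == "zh" else "zh"
    text = speech_to_text(audio, source)                # ASR in the source language
    translated = translate_text(text, source, target)   # machine translation
    reply = text_to_speech(translated, target)          # TTS in the target language
    return compress(reply)                              # sent back to the earbuds
```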
This might seem like a lot of stages of communication, but it takes just seconds to happen. And it is necessary – firstly, because the processor in the earbuds is not powerful enough to do the translation by itself, and secondly because their memory is too small to hold the language and acoustic models. Even if a powerful enough processor with enough memory could be squeezed into the earbuds, the complex processing would deplete the earbud batteries in a couple of seconds.
Furthermore, companies with these kinds of products (Google, iFlytek and IBM among them) rely on continuous improvement to correct, refine and improve their translation models. Updating a model is easy on their own cloud servers; it is much more difficult to do when the model is installed in an earbud.
Ian McLoughlin, Professor of Computing, Head of School (Medway), University of Kent
This article was originally published on The Conversation. Read the original article.