Meta has developed an artificial intelligence translation system that can convert an oral language – Hokkien – into spoken English. It’s one step closer to finally making Star Trek’s Universal Translator a reality.
CEO Mark Zuckerberg shared a video demonstrating the technology with Meta software engineer Peng-Jen Chen in a Facebook post on Wednesday. In it, the two converse in English and Hokkien, respectively, with Meta’s AI system audibly translating. The demo looks quite impressive, though, like Meta’s VR legs, it’s likely the video was edited for illustrative purposes and the current product isn’t quite as sleek.
Translation AI is typically trained on text, with researchers feeding their systems a body of written words to learn from. However, more than 3,000 languages are primarily spoken and lack a widely used writing system, making them difficult to include in such training.
Hokkien is one such language. Used by more than 45 million people in mainland China, Taiwan, Malaysia, Singapore and the Philippines, Hokkien is an oral language without an official, standard written system.
Thus, Hokkien speakers who are required to write down information do so phonetically, resulting in significant variation depending on the scribe. There is little recorded data from Hokkien translated into English, and professional human translators are rare.
To work around this, Meta used written Mandarin as an intermediary between English and Hokkien when training its AI.
“Our team first translated English or Hokkien speech into Mandarin text, and then translated it to Hokkien or English — both with human annotators and automatically,” said Meta researcher Juan Pino. “They then added the paired sentences to the data used to train the AI model.”
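Conceptually, the pivot approach Pino describes chains two translation steps through Mandarin to manufacture English–Hokkien training pairs. Here is a minimal toy sketch of that idea in Python — the function names and the tiny lookup-table “translators” are hypothetical stand-ins, not Meta’s actual models:

```python
# Toy sketch of pivot-based data generation: English <-> Hokkien
# via Mandarin text. The mock "translators" below are hypothetical
# lookup tables standing in for real translation models.

EN_TO_ZH = {"hello": "你好"}    # English -> Mandarin text (mock)
ZH_TO_HOK = {"你好": "li-ho"}   # Mandarin text -> romanized Hokkien (mock)

def translate(sentence, table):
    """Look up a sentence in a mock translation table."""
    return table[sentence]

def pivot_pair(english_sentence):
    """Build an (English, Hokkien) training pair via the Mandarin pivot."""
    mandarin = translate(english_sentence, EN_TO_ZH)  # step 1: into the pivot
    hokkien = translate(mandarin, ZH_TO_HOK)          # step 2: out of the pivot
    return (english_sentence, hokkien)

training_data = [pivot_pair("hello")]
print(training_data)  # [('hello', 'li-ho')]
```

The same two-step chain can be run in the opposite direction (Hokkien → Mandarin → English), and the resulting pairs added to the training set — which is why errors in either pivot step propagate into the data, as the next paragraph notes.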
Of course, filtering a sentence through multiple languages can distort its meaning – as anyone who’s played with Google Translate knows. Meta also worked with Hokkien speakers to check translations, and is releasing its models, data and research as open source for other researchers to use.
Mashable has reached out to Meta for comment.
We’re still a long way from a fully functioning TARDIS translation matrix. Meta’s spoken translation system can currently only handle one sentence at a time, and only works with translations between Hokkien and English. Still, it’s promising progress towards Meta’s goal of real-time verbal translation.
Meta’s Hokkien–English speech translation project is part of its ongoing Universal Speech Translator program, which aims to translate spoken language between any two languages in real time.