
Meta AI builds first speech translator for spoken languages, makes it open-source

The translator works on spoken languages; over 40% of languages are primarily oral



Mark Zuckerberg, Meta's chief executive, first shared the progress on this research during an AI Inside the Lab event in February.
Image Credit: Facebook via NYT

Meta, on Thursday, announced the first AI-powered speech translation system for an unwritten language, Hokkien. The translation system is the first milestone for Meta AI’s Universal Speech Translator (UST) project, which focuses on developing AI systems that provide real-time speech-to-speech translation across all languages, even primarily spoken ones. 

Using computers to translate languages isn’t a new concept, but previous efforts have focused on written languages. Yet of the more than 7,000 living languages, over 40 percent, including Hokkien, are primarily oral and lack a standard or widely known writing system. Speakers of unwritten languages often face hurdles when trying to participate in online communities, like the metaverse, and the hope, Meta said in a statement, is that this work will make it easier for them to communicate in a way that is natural to them.

Meta released a demo and open-sourced the Hokkien translation model, the benchmark datasets, and SpeechMatrix, its corpus of speech-to-speech alignments. Meta said it hopes this will enable researchers to create their own speech-to-speech translation (S2ST) systems.
