SeamlessM4T: Meta Introduces a Multimodal AI Translation and Transcription Model
The world we live in has never been more interconnected, giving people access to more multilingual content than ever before. This also makes the ability to communicate and understand information in any language increasingly important.
In August, Meta introduced SeamlessM4T, the first all-in-one multimodal and multilingual AI translation model, which allows people to communicate effortlessly through speech and text across different languages.
SeamlessM4T supports:
- Speech recognition for nearly 100 languages
- Speech-to-text translation for nearly 100 input and output languages
- Speech-to-speech translation, supporting nearly 100 input languages and 36 (including English) output languages
- Text-to-text translation for nearly 100 languages
- Text-to-speech translation, supporting nearly 100 input languages and 35 (including English) output languages
In keeping with Meta's approach to open science, the company is publicly releasing SeamlessM4T under a research license so that researchers and developers can build on this work. Meta is also releasing the metadata of SeamlessAlign, the largest open multimodal translation dataset to date, totaling 270,000 hours of mined speech and text alignments.
Building a universal language translator, like the fictional Babel Fish in The Hitchhiker's Guide to the Galaxy, is challenging because existing speech-to-speech and speech-to-text systems cover only a small fraction of the world's languages. But Meta believes this work is a significant step forward in that journey.
Compared to approaches that chain together separate models, SeamlessM4T's single-system approach reduces errors and delays, improving the efficiency and quality of the translation process. This enables people who speak different languages to communicate with each other more effectively.
SeamlessM4T builds on advancements Meta and others have made over the years in the quest to create a universal translator. Last year, Meta released No Language Left Behind (NLLB), a text-to-text machine translation model that supports 200 languages and has since been integrated into Wikipedia as one of its translation providers.
They also shared a demo of their Universal Speech Translator, which was the first direct speech-to-speech translation system for Hokkien, a language without a widely used writing system.
And earlier this year, Meta revealed Massively Multilingual Speech, which provides speech recognition, language identification and speech synthesis technology across more than 1,100 languages.
SeamlessM4T draws on findings from all of these projects to deliver a multilingual and multimodal translation experience from a single model, trained on a wide range of spoken data sources and achieving state-of-the-art results.
This is only the latest step in their ongoing effort to build AI-powered technology that helps connect people across languages.
In the future, they want to explore how this foundational model can enable new communication capabilities — ultimately bringing us closer to a world where everyone can be understood.
Learn more about SeamlessM4T on Meta's AI blog.
---
Send your press releases to shareyournews@lucidityinsights.com