Meta, the parent company of Instagram, WhatsApp and Facebook, has announced an Artificial Intelligence (AI)-based speech translation system. The announcement is seen as a big step toward bolstering the Metaverse. Zuckerberg said that with the help of the Universal Speech Translator, speech will be translated to speech instantly across all languages. The project also covers languages that are primarily spoken rather than written.
Trying to include as many languages as possible:
According to Zuckerberg, five years ago translation was done in a dozen languages; three years ago that had grown to around 30 languages, and this year the company is targeting translation across hundreds of languages. The company is also building a new AI model called ‘No Language Left Behind’, which can learn new languages from less training data than existing models require and deliver expert-quality translation in hundreds of languages.
Significance in the Metaverse:
The aim of Facebook’s parent company is to create a virtual world where people can connect with each other transcending language barriers. Zuckerberg said:
“The goal here is instantaneous speech-to-speech translation across all languages, even those that are mostly spoken; the ability to communicate with anyone in any language.”
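The phrase “languages that are mostly spoken” points at a real architectural distinction. A minimal sketch, using purely illustrative stub functions rather than Meta’s actual models, of why it matters: a classic cascaded speech-translation pipeline passes through text, which excludes languages with no standard written form, while a direct speech-to-speech model maps audio to audio in one step.

```python
# Hypothetical stubs illustrating cascaded vs. direct speech-to-speech
# translation. None of these functions are real APIs; the strings just
# make the data flow visible.

def transcribe(audio: str) -> str:
    """Stub ASR: pretend we recognized the source-language text."""
    return f"text({audio})"

def translate(text: str) -> str:
    """Stub MT: pretend we translated the text."""
    return f"translated({text})"

def synthesize(text: str) -> str:
    """Stub TTS: pretend we generated target-language audio."""
    return f"audio({text})"

def cascaded_s2st(audio: str) -> str:
    # Three chained stages: errors compound, and primarily spoken
    # languages are excluded because the intermediate form is text.
    return synthesize(translate(transcribe(audio)))

def direct_s2st(audio: str) -> str:
    # One end-to-end model, audio in and audio out (stub):
    # no text intermediate is required.
    return f"audio(translated({audio}))"

print(cascaded_s2st("hola"))  # audio(translated(text(hola)))
print(direct_s2st("hola"))    # audio(translated(hola))
```

The stubs only show the shape of the two pipelines; the point is that the direct path never produces a textual intermediate.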
At the event, Zuckerberg revealed that the company is also working on AI research to create a new generation of smart assistants, which will help people navigate both the physical world, through augmented reality (AR), and the virtual world.
Zuckerberg said that when people wear AR glasses, it will be the first time an AI system sees the world from a human perspective, perceiving what the wearer sees and hears in a much more intuitive way. He opined that this project will help strengthen AI.
Working on an end-to-end neural model for voice assistants:
Meta has also announced a new initiative called Project CAIRaoke, an end-to-end neural model for building on-device assistants that give people more natural conversations with voice assistants. With the model created under Project CAIRaoke, people will be able to speak more fluently to their assistant, which will be better at resolving unclear references and keeping track of conversational context. The model will also respond to gesture commands.
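The “end-to-end” claim can be made concrete with a small sketch. A conventional assistant chains several separately built modules (understanding, state tracking, policy, response generation), while an end-to-end neural model maps the conversation so far directly to the next reply. Everything below is a hypothetical stub for illustration, not Meta’s actual system.

```python
# Illustrative stubs contrasting a modular assistant pipeline with a
# single end-to-end model. All function names and logic are invented
# for this sketch.

def understand(utterance: str) -> dict:
    """Stub NLU: extract an intent frame from the user's words."""
    if "timer" in utterance:
        return {"intent": "set_timer", "minutes": 10}
    return {"intent": "unknown"}

def track_state(state: dict, frame: dict) -> dict:
    """Stub dialog state tracking: merge the new frame into the state."""
    return {**state, **frame}

def decide(state: dict) -> str:
    """Stub dialog policy: pick the next system action."""
    return "confirm_timer" if state.get("intent") == "set_timer" else "ask_clarify"

def generate(action: str) -> str:
    """Stub NLG: render the chosen action as a reply."""
    return {"confirm_timer": "Timer set for 10 minutes.",
            "ask_clarify": "Sorry, could you rephrase that?"}[action]

def modular_assistant(state: dict, utterance: str) -> str:
    # Four chained modules: an error in any stage propagates downstream.
    return generate(decide(track_state(state, understand(utterance))))

def end_to_end_assistant(history: list) -> str:
    # Single model (stub): conversation history in, reply out, with no
    # hand-designed intermediate representations.
    if any("timer" in u for u in history):
        return "Timer set for 10 minutes."
    return "Sorry, could you rephrase that?"

print(modular_assistant({}, "set a timer for ten minutes"))
# Timer set for 10 minutes.
```

The appeal of the end-to-end approach is that the whole conversation model can be trained jointly instead of tuning four modules against each other.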