The assistant's voice options will be supplied by yet another roster of famous people.
Meta has made its artificial intelligence assistant so pervasive across its apps over the past year that it's easy to forget Meta AI has only existed for a year. Now, one year after its debut at the previous Connect, the company is rolling out a raft of new capabilities in hopes that more people will find the assistant useful.
One of the most significant changes is the ability to hold voice conversations with Meta AI. Until now, the only way to speak with Meta AI was through the Ray-Ban Meta smart glasses. And as with Meta AI's original launch last year, the company has enlisted a group of celebrities for the occasion.
Meta AI will be able to speak in the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell, along with a handful of more generic voices. Notably, the company is betting on celebrities to sell users on Meta AI's new abilities even though it quietly phased out the celebrity chatbot personas it introduced at last year's Connect.
Meta AI is also gaining new image capabilities alongside the voice chat features. It will be able to respond to requests to edit and modify photos in text chats within Instagram, Messenger, and WhatsApp. Users can ask the AI to add or remove objects, or to change elements of a photo, such as swapping out a background or an article of clothing, according to the company.
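Meta hasn't said how its editor works under the hood, but text-driven edits like background swaps are commonly built on prompt-guided inpainting: a mask marks the region to replace, and a diffusion model regenerates it from the text request. Here is a rough sketch of that general technique using the open-source diffusers library; the model ID, file names, and prompt are illustrative assumptions, not anything Meta ships:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Open-source inpainting model used purely for illustration (not Meta's editor).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

photo = Image.open("photo.png").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to regenerate (here, the background).
mask = Image.open("background_mask.png").convert("RGB").resize((512, 512))

# The prompt plays the role of the user's text request in chat.
result = pipe(
    prompt="a sunny beach at golden hour",
    image=photo,
    mask_image=mask,
).images[0]
result.save("edited.png")
```

The key design point is that the model only regenerates masked pixels, so the rest of the photo stays untouched while the requested element changes.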
The new capabilities arrive alongside the release of the latest Llama model, Llama 3.2. The new version, which comes barely two months after Llama 3.1, is the first with vision capabilities, "bridging the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story." Meta claims Llama 3.2 is "competitive" with the comparable models behind ChatGPT and Claude on "image recognition and a range of visual understanding tasks."
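Meta hasn't published example code for this, but the captioning task it describes can be tried with the openly released Llama 3.2 vision checkpoints. Below is a minimal sketch using Hugging Face transformers, assuming access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint and a transformers version with Mllama support; the image URL and prompt are placeholders:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # gated; requires accepting Meta's license
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image; any RGB photo works.
url = "https://example.com/photo.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Ask for exactly the kind of caption Meta describes.
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Write a one- or two-sentence caption for this image."},
    ],
}]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image, input_text, add_special_tokens=False, return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=60)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```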
Meta is also experimenting with other, potentially more controversial ways of weaving AI into the core elements of its main apps. It will test its AI-generated translation tools on Reels, including "automatic dubbing and lip syncing." Meta says this "will simulate the speaker's voice in another language and sync their lips to match." It will initially roll out to "some creators' videos" in English and Spanish in the United States and Latin America, though the company hasn't shared a rollout timeline.
Meta is also planning to test AI-generated content placed directly into the main feeds of Facebook and Instagram. With the test, Meta AI will surface AI-generated images meant to be personalized to each user's interests and past activity. For example, Meta AI might show you an image "imagined for you" that features your face.