This morning, the alpha version of ChatGPT's advanced Voice Mode was made available to a select group of paid users.
The advanced Voice Mode functionality that OpenAI has been developing has begun rolling out. Starting today, a small group of paying ChatGPT users can hold spoken conversations with the AI chatbot. All ChatGPT Plus users are expected to have access to the expanded toolset by this fall.
In a statement published on X, the company said this more advanced version of its Voice Mode “offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”
We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK
— OpenAI (@OpenAI) July 30, 2024
ChatGPT gained support for voice conversations last September, and the more advanced version was publicly demonstrated in May. Because GPT-4o uses a single multimodal model for its speech capabilities, rather than the three separate models of the prior audio pipeline, latency during conversations with the chatbot is reduced.
At the May presentation, OpenAI drew heavy criticism for introducing a voice option that sounded uncannily like Scarlett Johansson, the actor who voiced the AI character Samantha in Spike Jonze's film Her. Shortly after the outcry, the release date for advanced Voice Mode was pushed back a week. Although the company maintained that the voice actor was not imitating Johansson's performance, the Johansson-like voice has since been removed.