Advanced Voice Mode offers real-time conversations with ChatGPT in which users can interrupt at any time and the AI is meant to recognize and respond to the user's emotions. The feature, based on GPT-4o, is currently available only to a small group of users.
It is meant to offer the most natural conversation with ChatGPT to date: OpenAI's Advanced Voice Mode is here. Initially it will be tested with only a small group of paying ChatGPT Plus users, but it could soon be rolled out to everyone. Following the launch of Voice Mode last autumn and its overhaul based on the multimodal AI model GPT-4o, Advanced Voice Mode is intended to come much closer to human conversation in digital spaces.
What does ChatGPT’s new Voice Mode have to offer?
On X, OpenAI outlines the advantages of the revised Voice Mode. For one, conversations can now include real-time interruptions without the exchange breaking off. In addition, the AI in Voice Mode is supposed to recognize certain emotions in a speaker's voice and react to them, much as a human would.
We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK
— OpenAI (@OpenAI) July 30, 2024
The first users, exclusively those with Plus access, will be notified of the option by OpenAI via email, which will also include instructions for the feature. By autumn 2024, all Plus users should have access to the optimized Voice Mode. In the meantime, the AI company continues to work on the safety of Voice Mode, which was tested in advance in over 45 languages. To protect users' privacy, Voice Mode will speak only in four preset voices, and requests for violent or legally protected content will be blocked.
In early August, OpenAI plans to publish a report on the capabilities and limitations of the AI model GPT-4o, which should also include initial findings from the alpha version of Advanced Voice Mode. Updates on both the AI model and the new Voice Mode options can therefore be expected. These may still require fine-tuning to reliably understand users' intentions, emotional nuances, and the like, especially across different languages, dialects, and pronunciation patterns.
Meanwhile, OpenAI is also drawing attention by testing its own AI search, which could pose serious competition to Google, Bing, and others.
Source: onlinemarketing.de