OpenAI Introduces GPT-4o: Enhancing Multimodal Capabilities in ChatGPT
OpenAI debuts GPT-4o, a new omni model that integrates text, audio, and vision seamlessly into ChatGPT and the API. The model is highlighted for capabilities such as recognizing emotions from a live selfie and outperforming its predecessor in non-English languages.
The new GPT-4o model offers enhanced interactivity, realistic voice conversations, and real-time processing of audio and visual inputs, setting new benchmarks for AI accessibility and functionality. With these advancements, OpenAI aims to give users a more human-like and engaging AI experience, supporting interactions such as interviews, customer service, translation, and even playful responses to pets.
Related news on this topic:
New GPT-4o AI model is faster and free for all users, OpenAI announces
OpenAI's new model that can read people's emotions from facial expressions: GPT-4o
ChatGPT Upgrades Voice Mode to Resemble Her's AI Assistant
ChatGPT can now sing - DER SPIEGEL

infobud.news is an AI-driven news aggregator that simplifies global news, offering customizable feeds in all languages for tailored insights into tech, finance, politics, and more. Drawing on a diverse range of sources, it delivers precise, relevant news updates that overcome the limitations of conventional search tools, focusing entirely on the facts without influencing opinion.