OpenAI Launches Faster, Cheaper AI Model with GPT-4o
(Bloomberg) — OpenAI is launching a faster, cheaper version of the artificial intelligence model that powers its chatbot, ChatGPT, as the startup works to maintain its lead in an increasingly crowded market.
During a livestreamed event on Monday, OpenAI debuted GPT-4o, an updated version of its GPT-4 model, which is already more than a year old. The new large language model, trained on massive amounts of internet data, will be better at handling text, audio and images in real time. The updates will roll out in the coming weeks.
When asked a question aloud, the system can reply with audio within milliseconds, the company said, allowing for a more fluid conversation. In a demonstration of the model, OpenAI researchers and Chief Technology Officer Mira Murati spoke to the new ChatGPT using only their voices, showing that the tool could respond in kind. During the presentation, the chatbot also appeared to translate speech from one language to another almost instantly, and at one point sang part of a story on request.
“This is the first time we have made a big leap in interaction and ease of use,” Murati told Bloomberg News. “We are truly enabling you to collaborate with tools like ChatGPT.”
The update will bring a number of features to free users that were previously limited to those with a paid ChatGPT subscription, such as the ability to search the web for answers to questions, speak to the chatbot and hear responses in multiple voices, and instruct it to store details that the chatbot can recall in the future.
The launch of GPT-4o is poised to shake up the rapidly evolving AI landscape, where GPT-4 remains the gold standard. A growing number of startups and large technology companies, including Anthropic, Cohere and Alphabet Inc.’s Google, have recently launched AI models that they say match or beat GPT-4’s performance on certain benchmarks.
OpenAI’s announcement also comes a day before the Google I/O developer conference. Google, an early leader in the artificial intelligence space, is expected to use the event to unveil more AI updates after racing to keep up with Microsoft Corp.-backed OpenAI.
In a rare blog post on Monday, OpenAI CEO Sam Altman said that while the original version of ChatGPT hinted at how people could use language to interact with computers, using GPT-4o feels “viscerally different.”
“It feels like AI from the movies, and it’s still a little surprising to me that it’s real,” he said. “Getting to human-level response times and expressiveness turns out to be a big change.”
Twice as fast
Instead of relying on different AI models to process different inputs, GPT-4o – the “o” stands for omni – combines voice, text and vision into a single model, allowing it to be faster than its predecessor. For example, if you feed the system an image prompt, it might respond with an image. The company said the new model is twice as fast and significantly more efficient.
“When you have three different models that work together, you introduce a lot of latency into the experience and that breaks the immersion of the experience,” said Murati. “But when you have a model that reasons natively in audio, text and vision, you eliminate all the latency and can interact with ChatGPT more like we are interacting with it now.”
But the new model encountered some obstacles. The audio frequently cut out while researchers were speaking during the demonstration. The AI system also surprised the audience when, after guiding a researcher through the process of solving an algebra problem, it chimed in with a flirtatious voice: “Wow, that’s quite the outfit you’re wearing.”
OpenAI is starting to roll out GPT-4o’s new text and image features to select paying ChatGPT Plus and Team users today, and will soon offer these features to enterprise users. The company will make the new version of its “voice mode” assistant available to ChatGPT Plus users in the coming weeks.
As part of its updates, OpenAI said it will also allow anyone to access its GPT Store, which includes custom chatbots made by users. Previously, it was only available to paying customers.
Speculation about OpenAI’s next launch has become a parlor game in Silicon Valley in recent weeks. A mysterious new chatbot stirred intrigue among AI watchers after it appeared on a benchmarking website and seemed to rival the performance of GPT-4. Altman alluded to the chatbot on X, fueling rumors that his company was behind it. On Monday, an OpenAI employee confirmed on the social platform X that the mysterious chatbot was indeed GPT-4o.
The company is working on a wide range of products, including voice technology and video software. OpenAI is also developing a search feature for ChatGPT, Bloomberg previously reported.
On Friday, the company quelled some rumors by saying it would not imminently launch GPT-5, a long-awaited version of its model that some in the tech world hope will be radically more capable than current AI systems. It also said Monday’s event would not unveil a new search product, a tool that could compete with Google. Google shares rose on the news.
But after the event ended, Altman was quick to keep the speculation going. “We will have more to share soon,” he wrote on X.
©2024 Bloomberg LP