Last week, news filtered out that GPT-4, the latest iteration of the model behind OpenAI’s chatbot, would be announced, and indeed today is that day. In a tweet and a new web page, OpenAI has pulled back the covers on its “most capable and aligned model yet”:
here is GPT-4, our most capable and aligned model yet. it is available today in our API (with a waitlist) and in ChatGPT+.https://t.co/2ZFC36xqAJ
it is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.
— Sam Altman (@sama) March 14, 2023
The webpage links to a fairly technical paper outlining GPT-4, saying:
“…GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers.”
This new GPT-4 model is “multimodal,” meaning it can accept both text and image inputs, opening up lots of new possibilities. Microsoft’s new Bing uses OpenAI technology through a modified model Microsoft calls “Prometheus,” which wasn’t thought to be GPT-4-based, so we’ll have to see how rapidly Bing iterates on this new model.
Update: Actually, Bing Chat has been using an early version of GPT-4, as confirmed today on the Bing Blog:
We are happy to confirm that the new Bing is running on GPT-4, customized for search. If you’ve used the new Bing in preview at any time in the last six weeks, you’ve already had an early look at the power of OpenAI’s latest model. As OpenAI makes updates to GPT-4 and beyond, Bing benefits from those improvements to ensure our users have the most comprehensive copilot features available.
There will be a webcast at 1pm PDT today on YouTube with more information, and we’ll keep you up to date on all the latest.