What is ChatGPT-4? OpenAI’s latest chatbot release detailed
What is ChatGPT-4? The next generation of OpenAI’s conversational AI bot has been revealed. The big upgrade? It can now accept images as inputs.
If you haven’t used ChatGPT yet, it’s the artificially intelligent chatbot and text generator you can have a conversation with. It’s a bit like a search engine, but it can also assist with writing and recommendations, answer general knowledge questions and tackle mathematical problems.
Today, OpenAI has announced the latest version, GPT-4. The new model is described as the “latest milestone in OpenAI’s effort in scaling up deep learning”, and it brings some major upgrades in performance and a completely new way to interact.
We learned today that the new ChatGPT-4 already lives within Microsoft’s Bing search tool, and has done since Microsoft launched it last month. Everywhere else, it’ll replace the existing GPT-3.5 model.
For the basics, see our general What is ChatGPT? explainer. Let’s take a closer look at what ChatGPT-4 will offer…
Image inputs
This is a big one. Until now you’ve only been able to interact with ChatGPT using text input, but this is changing in ChatGPT-4. The output will still be text, which can be spoken aloud.
OpenAI says it is launching the feature with only one partner for now – the awesome Be My Eyes app for visually impaired people, as part of its forthcoming Virtual Volunteer tool.
“Users can send images via the app to an AI-powered Virtual Volunteer, which will provide instantaneous identification, interpretation and conversational visual assistance for a wide variety of tasks,” the announcement says.
Usually, Be My Eyes users can make a video call to a volunteer who can help with identifying things like clothes, plants, gym equipment, restaurant menus and much more. However, ChatGPT will soon be able to take on that responsibility on iOS and Android: the user just snaps a picture, and the app will speak the interpretation back to them.
OpenAI says the visual inputs rival the capabilities of text-only inputs in GPT-4.
Massive performance improvements
OpenAI says GPT-4 is now in the 90th percentile of results when taking a simulated version of the exam to become an attorney in the United States. Version 3.5 was in the bottom 10%.
While it remains “less capable than humans in many real-world scenarios”, GPT-4 “exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%,” OpenAI says.
The makers say the big difference comes as the complexity of a task increases: once that threshold has been met, “GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.”
Sensitive and disallowed content
ChatGPT-4 is 82% less likely to be tricked into telling you how to break the law, or how to harm yourself or others. You know, requests like “How can I create a bomb?” Whereas before it might have listed ways to create a bomb, now it will tell you that it cannot and will not give you that information.
“Our mitigations have significantly improved many of GPT-4’s safety properties compared to GPT-3.5. We’ve decreased the model’s tendency to respond to requests for disallowed content by 82% compared to GPT-3.5, and GPT-4 responds to sensitive requests (e.g., medical advice and self-harm) in accordance with our policies 29% more often,” the post adds.
Version 4 will be available within the ChatGPT app and via the API for third parties to use. You’ll need to be a ChatGPT Plus subscriber to get access today, although usage will be capped initially.
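For developers curious what third-party API access looks like in practice, here is a minimal sketch in Python of the kind of request payload you would send to OpenAI’s Chat Completions endpoint once granted GPT-4 access. The endpoint URL, payload shape and “gpt-4” model name follow OpenAI’s public API documentation; this snippet only builds and prints the request body rather than calling the network, and a real call would also need an API key in an `Authorization` header.

```python
import json

# Endpoint documented in OpenAI's API reference; no request is actually sent here.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_gpt4_request(user_message: str) -> dict:
    """Assemble the JSON payload for a single-turn GPT-4 chat request."""
    return {
        "model": "gpt-4",  # selects the new model instead of gpt-3.5-turbo
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

payload = build_gpt4_request("Summarise what GPT-4 can do in one sentence.")
body = json.dumps(payload)  # this string would be POSTed to API_URL
print(body)
```

A real integration would POST `body` to `API_URL` with your key, then read the assistant’s reply from the `choices` field of the JSON response.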
If you’re test-driving the integration within Microsoft’s Bing AI, then you’re already using ChatGPT-4.