GPT-4o is Free and for Everyone.
Released just one day before Google's I/O 2024 event, here is OpenAI's newest model.
Happy Friday from the Building Startups Newsletter! :)
I hope our lives aren't rushing by at the same speed AI is coming into play.
The biggest announcement of the week was OpenAI's release of GPT-4o, the latest iteration of its flagship GPT model, at its Spring Update event, along with the news that the model is free for everyone, no subscription required!
Interestingly, this announcement came one day before Google's I/O 2024 event, where the giant debuted its Gemini 1.5 Pro model, available to consumers via Gemini Advanced…
The race is on, but the capabilities of the GPT-4o model are definitely worth exploring.
OpenAI also pitches it as a model for more "natural human-computer interaction". The "o" in GPT-4o stands for "omni", and the model is designed to make its interactions with humans more efficient.
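If you'd rather poke at the model from code than from the ChatGPT app, GPT-4o is also exposed through OpenAI's API. Here's a minimal sketch using the official `openai` Python package; it assumes you have an `OPENAI_API_KEY` environment variable set, and note that API usage is billed separately (the free access is for the ChatGPT app):

```python
# Minimal sketch: a text chat with GPT-4o via OpenAI's Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "In one sentence, what does the 'o' in GPT-4o stand for?"},
    ],
)
print(response.choices[0].message.content)
```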
Here’s what I found interesting:
There were two major points of focus in OpenAI's live launch event, as well as in Sam Altman's blog post about the announcement.
The first was the emphasis on making AI more accessible, which is clearly reflected in the free access to powerful capabilities that were previously limited to ChatGPT Plus subscribers:
First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that.
Along with text, free users will also eventually get access to:
Visual and audio capabilities
GPT Store, which allows anyone to create a version of ChatGPT with personalised instructions
ChatGPT's web browsing and memory features, as well as the ability to upload photographs and files for the chatbot to analyse (see the API sketch after this list).
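For the curious, here's a rough sketch of what that photo-analysis flow looks like through the API today. The image URL below is a placeholder of my own; swap in any publicly reachable image:

```python
# Rough sketch: asking GPT-4o about an image via OpenAI's API.
# The image URL is a placeholder; replace it with a real, publicly
# reachable image. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's happening in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```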
The second focus point was the model's impressive voice and video capabilities. A major chunk of OpenAI's launch livestream was spent showcasing live use cases of GPT-4o's video and voice features.
It looks exciting, y’all.
If you didn't watch the livestream, you can watch it here. Additionally, the OpenAI blog post introducing the new model features several clips of people testing the model's capabilities, which you absolutely shouldn't miss.
My personal favourite has to be the one with two ChatGPTs harmonising together:
A few examples include real-time translation, adaptive sarcasm, the camera recognising dogs, and more.
Although the video capabilities are not yet available in ChatGPT, Sam Altman has reassured everyone that the wait will be worth it:
Are you looking forward to trying out GPT-4o? Did you check out the use case videos? Which one is your favourite?
Talk to me in the comments below!