GPT-4 is the most anticipated launch in the entire AI industry. Ever since ChatGPT set new benchmarks for what AI can do, expectations for the next version of GPT from OpenAI have been sky high. So it's no surprise that the rumor mill is working overtime to decode everything that can be known about GPT-4 before its launch.
We've compiled a list of everything that's known about this next big thing in the world of AI. All of it is based on rumors, though, so we'd suggest you take it with a pinch of salt. With that said, let's see how different it's going to be from GPT-3!
Key Takeaways
There's a lot to expect from GPT-4, but all of it can be broken down into six categories. And what can you expect from those categories? Well, here it is at a glance:
| Feature | Expectations |
| --- | --- |
| Multimodality | Highly expected, with support for image and sound inputs in addition to text |
| Number of parameters | ~1 trillion (rumored) |
| Alignment | Greater alignment with human values and expectations |
| Density vs. sparsity | Sparsity: only part of the model is active for any given input |
| Compute power and optimization | Higher compute requirements, but a reportedly lower training cost of ~$1-10M |
| Release | Q1 2023 |
Now let's take a look at each of these in detail to understand what they mean for the future of AI.
Multimodality
One of the most awaited and rumored AI capabilities is multimodality: the ability of a model to work across multiple modes of input and output, such as text, images, and sound. Most models today operate in a single mode only. ChatGPT, for instance, deals strictly with text. A multimodal model, by contrast, can handle text, images, speech, and whatever else is thrown at it.
It's widely expected that GPT-4 will be multimodal, which would put it in a different league from GPT-3 and GPT-3.5 (the model family behind ChatGPT). How extensive its multimodal capabilities will be, we don't know yet, but it's safe to assume it may accept image and sound inputs in addition to text.
There are also rumors that it may accept video input as well, largely because generative audiovisual AI is a field OpenAI has been exploring for quite some time. Few sources have made this claim so far, though, so take it with an especially big pinch of salt.
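To make the idea concrete, here's a minimal sketch of the design most multimodal models share: every modality is encoded into one shared sequence of embeddings that a single transformer can attend over. Everything below (dimensions, projections, token IDs) is a made-up placeholder for illustration, not anything known about GPT-4's architecture.

```python
# Toy illustration (not OpenAI's actual design) of how multimodal models
# commonly work: every modality is mapped into one shared sequence of
# embedding vectors that a single transformer processes.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64                                  # shared embedding dim (made up)

# Text: token IDs looked up in an embedding table.
vocab = rng.normal(size=(1000, d_model))
text_ids = np.array([12, 407, 9])             # e.g. "describe this image"
text_emb = vocab[text_ids]                    # shape (3, d_model)

# Image: 16x16 RGB pixel patches flattened and linearly projected.
patches = rng.normal(size=(4, 16 * 16 * 3))   # 4 image patches
proj = rng.normal(size=(16 * 16 * 3, d_model))
image_emb = patches @ proj                    # shape (4, d_model)

# One interleaved sequence: the same model attends over both modalities.
sequence = np.concatenate([image_emb, text_emb], axis=0)
print(sequence.shape)  # (7, 64): 4 image "tokens" + 3 text tokens
```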
Number Of Parameters
It's almost guaranteed that GPT-4 will use a larger number of parameters. More parameters generally mean a more capable model, but they also mean higher compute requirements for both training and inference.
Some reports claimed it may use as many as 100 trillion parameters, compared to GPT-3's 175 billion, but OpenAI CEO Sam Altman recently dismissed that figure in an interview with StrictlyVC.
He did confirm, however, that there will be a jump in the number of parameters, and OpenAI insiders with access to GPT-4 have reportedly told people in the tech community that the difference is significant.
That brings us to a much more realistic speculation: around 1 trillion parameters. That number could deliver significant improvements over the existing GPT-3.5 while keeping compute requirements within manageable limits.
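To see why parameter count drives compute requirements so directly, here's a back-of-envelope calculation of the memory needed just to store the weights. It assumes 16-bit weights (a common choice) and treats the 1-trillion figure as the rumor it is:

```python
# Back-of-envelope: what a rumored 1-trillion-parameter model implies for
# memory, just to show why parameter count drives hardware requirements.
params_gpt3 = 175e9        # GPT-3's published parameter count
params_rumored = 1e12      # the 1T figure discussed above (a rumor)
bytes_per_param = 2        # fp16/bf16 weights, a common choice

for name, n in [("GPT-3", params_gpt3), ("rumored GPT-4", params_rumored)]:
    weights_tb = n * bytes_per_param / 1e12
    print(f"{name}: ~{weights_tb:.2f} TB just to store the weights")
# GPT-3: ~0.35 TB; a 1T-parameter model: ~2 TB. Both are far beyond one
# GPU's memory, which is why such models are sharded across many machines.
```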
Alignment With Human Values And Expectations
The debate over the ethics, values, and use cases of AI has always existed, but it has taken on unprecedented urgency since the emergence of ChatGPT. There are discussions around the world about regulating AI and making it better aligned with human values, and the OpenAI team is well aware of that.
They took their first step in that direction with InstructGPT, a version of GPT-3 fine-tuned with human feedback. The reason ChatGPT behaves so sensibly is that it builds on InstructGPT's techniques, which bring its output more in line with human expectations.
There's no reason OpenAI wouldn't want to improve on this while developing GPT-4, so we can expect GPT-4 to be fine-tuned with feedback from people of a more diverse range of nationalities, ethnicities, geographies, and professional backgrounds, making its responses more broadly acceptable than ever.
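For the technically curious, a core ingredient of InstructGPT-style training is a reward model fit to human pairwise comparisons between responses. Below is a minimal sketch of that pairwise preference loss; the toy linear scorer and random inputs stand in for what is, in practice, a large transformer trained on real comparison data:

```python
# Minimal sketch of the pairwise preference loss used to train a reward
# model in InstructGPT-style human-feedback training (heavily simplified).
import numpy as np

def reward(features, w):
    """Toy linear reward model: score = w . features."""
    return features @ w

def preference_loss(w, chosen, rejected):
    """-log sigmoid(r_chosen - r_rejected): pushes the human-preferred
    response to score higher than the rejected one."""
    diff = reward(chosen, w) - reward(rejected, w)
    return -np.log(1.0 / (1.0 + np.exp(-diff)))

rng = np.random.default_rng(1)
w = rng.normal(size=8)                         # reward model "weights"
chosen, rejected = rng.normal(size=8), rng.normal(size=8)
print(preference_loss(w, chosen, rejected))    # lower is better
```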
Density vs Sparsity
Broadly, there are two types of large AI models: dense models and sparse models.
Dense models use all of their parameters to generate every output, while sparse models use conditional computation to decide which parts of the model are best suited to a given input. This allows for a much larger parameter count without a proportional increase in compute requirements.
Since GPT-4 is expected to use a much larger set of parameters, it's highly likely to be a sparse model rather than the dense ones OpenAI has built so far. That would put it in a different league and could open up a whole new set of possibilities, and the rumor mill, too, has been predicting a sparse model for GPT-4.
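The most common recipe for sparse models is a mixture-of-experts (MoE) layer: a small router picks the top-k experts for each input, so only a fraction of the total parameters does any work per token. The toy NumPy version below illustrates the idea only; nothing about GPT-4's actual design is known.

```python
# Toy mixture-of-experts layer, the standard way to build sparse models:
# a gate routes each input to the top-k experts, so compute scales with k
# while the total parameter count scales with the number of experts.
import numpy as np

rng = np.random.default_rng(2)
d, n_experts, k = 16, 8, 2

experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # expert weights
gate_w = rng.normal(size=(d, n_experts))                       # router weights

def moe_forward(x):
    logits = x @ gate_w
    top = np.argsort(logits)[-k:]                 # indices of the top-k experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                          # softmax over the top-k only
    # Only k of the n_experts matrices are actually multiplied here: that is
    # the "not all parameters work all the time" property in miniature.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

x = rng.normal(size=d)
print(moe_forward(x).shape)  # (16,)
```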
Compute Power And Optimization
With a larger model, it's nearly certain that GPT-4 will require more computing power. However, sparsity can go a long way toward optimizing its performance and compute requirements. While it will certainly use far more compute than GPT-3 (something Altman confirmed in an interview last year, too), the OpenAI team will no doubt try to make it as efficient as possible by optimizing things other than model size.
In the last few weeks, there have also been rumors that GPT-4's training cost is around $1-10 million, which, surprisingly, is less than the training cost of GPT-3. If that's true, then OpenAI has also found a way to reduce costs while making the model better.
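As a rough sanity check on whether a single-digit-millions figure is even plausible, here's a back-of-envelope estimate using the widely cited approximation of about 6 FLOPs per parameter per training token. It plugs in GPT-3's published scale; the hardware throughput, utilization, and price per GPU-hour are assumptions for illustration, not known GPT-4 numbers:

```python
# Back-of-envelope training cost using the common approximation
# FLOPs ~= 6 * parameters * training tokens.
params = 175e9            # GPT-3's published parameter count
tokens = 300e9            # GPT-3's reported training tokens
flops = 6 * params * tokens

gpu_flops = 312e12        # A100 peak bf16 throughput (FLOP/s)
utilization = 0.4         # assumed fraction of peak actually achieved
gpu_hour_cost = 2.0       # assumed $/GPU-hour

gpu_hours = flops / (gpu_flops * utilization) / 3600
print(f"~{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * gpu_hour_cost / 1e6:.1f}M")
# Roughly $1.4M under these assumptions. Hardware efficiency and $/hour
# dominate the result, which is how a newer, better-optimized model could
# plausibly cost less to train than GPT-3 did on older hardware.
```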
Ultimately, it's this combination of better-optimized training with a higher parameter count and a larger model that could lead to unprecedented improvements across all performance benchmarks for GPT-4.
Release Date
Based on the rumor mill, GPT-4 may be released as soon as the end of next month (February 2023). However, in the StrictlyVC interview, Sam Altman declined to commit to any specific launch timeline. He said OpenAI will launch it when they feel they can do so responsibly and safely, an apparent attempt to allay the ethical and other concerns about increasingly capable AI.
Regardless, there are reports that GPT-4 is ready and being tested internally by OpenAI staff. That means it may arrive in Q1 of this year, if not in February itself.
Conclusion: GPT-4
OpenAI has mostly been tight-lipped about GPT-4, and the information above is based on what's circulating in the rumor mill, so take it with a pinch of salt. With the launch seemingly just around the corner, our questions should get official answers soon. Keep your fingers crossed, and keep an eye on our updates: if there's any new information about this widely anticipated leap in the world of AI, we'll definitely let you know through our posts.
What do you think?