While the world is still coming to terms with what OpenAI’s ChatGPT and GPT-4 are capable of, rumours that GPT-5 is in development are doing the rounds.
As netizens eagerly anticipate GPT-5, speculation is rampant about its development, its potential release date and the impact it could have on artificial intelligence (AI) tools.
But first, a quick recap of OpenAI’s GPT (generative pre-trained transformer) models:
According to OpenAI, GPT-4 draws from a larger training dataset and has advanced algorithms enabling it to better understand context, nuance and semantics in human communication.
Some sources claim GPT-5 is under development with a potential 2024 release, but OpenAI CEO Sam Altman recently denied that the model is currently being trained.
However, this doesn’t rule out the possibility that GPT-5 is in its data collection or architecture planning phase.
Since GPT-4 can also process images (a feature not present in any previous version of the model), the next iteration of this LLM may well incorporate sound recognition and video processing abilities.
GPT-5 will undoubtedly mark another milestone in the evolution of AI technology.
It is speculated to offer groundbreaking functionality compared with its predecessors.
Some rumours suggest it might have up to 100 times more parameters than GPT-4 – an estimated 17.5 trillion parameters, which would make it one of the largest neural networks ever created. (That figure appears to extrapolate from GPT-3’s published 175 billion parameters; OpenAI has not disclosed GPT-4’s size.)
GPT-5 might also require 1.26 zettaflops of computing power, surpassing the combined power of all the world’s supercomputers.
A zettaflop[2] is a way to measure how fast a computer can do calculations.
Imagine you have a super-fast calculator that can solve 1,000,000,000,000,000,000,000 (that’s a sextillion; a one with 21 zeros) math problems in just one second!
That’s a zettaflop.
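To put that scale into perspective, here is a minimal back-of-the-envelope sketch in Python. The 1.26-zettaflop figure is the rumoured requirement quoted above; the roughly one-exaflop rating assumed for a leading supercomputer is an illustrative estimate, not an official specification.

```python
# Back-of-the-envelope scale comparison (illustrative figures only).

ZETTAFLOP = 10**21   # floating-point operations per second in one zettaflop
EXAFLOP = 10**18     # one exaflop, roughly the class of today's fastest supercomputers

rumoured_gpt5_compute = 1.26 * ZETTAFLOP   # the 1.26-zettaflop figure from the rumours
top_supercomputer = 1.1 * EXAFLOP          # assumed ~1.1 exaflops for a leading machine

# How many exaflop-class machines would it take to match the rumoured requirement?
machines_needed = rumoured_gpt5_compute / top_supercomputer
print(f"Roughly {machines_needed:,.0f} exaflop-class supercomputers")  # ~1,145
```

On those assumed numbers, the rumoured figure works out to more than a thousand of today’s fastest machines running at once, which is why the claim that it would exceed the combined power of the world’s supercomputers gets repeated.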
This type of computing power raises concerns that GPT-5 could achieve what Hollywood has been fantasising about for decades: the rise of AGI (artificial general intelligence).
Some early estimates put the arrival of AGI at around 2032, and the launch of GPT-5 should at least give us an indication of whether OpenAI has successfully trained its model with AGI capabilities.
While achieving AGI would be a significant milestone for AI systems as far as revolutionising natural language processing is concerned, it’s probably not the best thing for us as a species at this particular moment.
READ: ‘AI a profound risk to humanity’: Elon Musk and others call for halt
Experts are terrified of AGI (as well they should be).
Not only could it be hard to control, but it may also facilitate a power imbalance, create cybersecurity threats, and perhaps even usher in the creation of highly advanced, autonomous weapons.[3]
But all the fearmongering aside, it’s a bad idea for the simple reason that we have no policy in place to govern generative AI, let alone AGI.
We talked all about it on this week’s Tech Check with Kahla and Kruger podcast. (Spoiler alert: the first two minutes are OpenAI propaganda, since ChatGPT wrote that portion of our script…)
READ: When AI takes over: Our ChatGPT podcast disaster
Sources:
[1] ChatGPT Is a Tipping Point for AI; Ethan Mollick; Harvard Business Review; 14 December 2022.
[2] The Technology Lane on the Road to a Zettaflops; Semantic Scholar; 2006.
[3] Autonomous Weapons and the Ethics of Artificial Intelligence; S. Matthew Liao; in Ethics of Artificial Intelligence (New York, 2020); Oxford Academic; 22 October 2020.