Will GPT-5 be released before 2025?
49% chance

Resolves YES if GPT-5 is publicly released before Jan 1, 2025; resolves NO otherwise.

Update from March 18: If the next OpenAI flagship model deviates from the GPT-N naming scheme, and it is clear that no future model named GPT-5 will be released by OpenAI (e.g. they completely changed the trajectory of their naming scheme), then I will count it as "GPT-5" regardless.

However, if the next flagship model is not called GPT-5 but still conforms to the OpenAI GPT-N naming scheme (e.g. GPT-4.5) or it is implied that there will eventually be a model called GPT-5 that will be released (i.e. they are going to continue with the same naming scheme) then I will not count it as "GPT-5".


Kevin Scott, CTO of Microsoft:

"The past handful of years, the breakthroughs have come almost every other year. And the thing to remember is, GPT-4 was about two years ago, and so the thing that I think everyone should be looking forward to this year is, like, the next update will happen."

https://youtu.be/R5VzcPhRVvg?t=119

Sounds like he's talking about GPT-5

"About two years ago" seems like a rounding error; it's been 1 year and 3 months. If we knew it was a 2 year release cycle, we'd expect GPT-5 in March 2025

Sounds about right.

GPT-4 finished training in Summer 2022; Microsoft started testing it in Bing in late 2022. So, for Kevin Scott personally, it probably does feel like roughly two years ago.

That said, you've got one of the most important people in the development of GPT-5 straight up telling you with no uncertainty in his words that the 'next breakthrough' is coming this year, and somehow you manage to conclude from this that it's happening in March 2025? Seems crazy to me. I think Scott saying it's happening in 2024 is very strong evidence.

To clarify, that is not my conclusion. I'm only saying that his numbers don't add up, and that if we accept his premise we would come to a different result. If I take your correction, it only tells me that Kevin Scott can expect access this year. But there is still the real possibility that he knows that it will be public this year and is reverse engineering a reason because he's not allowed to announce that on OpenAI's behalf.

I was referring to David Bolin with 'you'; sorry for the confusion.

"But there is still the real possibility that he knows that it will be public this year and is reverse engineering a reason because he's not allowed to announce that on OpenAI's behalf."

I'm confused by your skepticism here. He very clearly says that it will be released this year. "Everyone should be looking forward to something that only I will have access to" doesn't seem like a reasonable interpretation. He isn't saying "Historically, releases have come every two years, a smart guy (who can't do arithmetic) should therefore expect a release this year." He is saying "Historically, releases have come every two years, thus we will release something this year".

He already had access to early versions of the 'new models' by the end of May, by the way:

"If you think of GPT-4 and that whole generation of models as things that can perform as well as a high school student at things like AP exams, some of the early things that I'm seeing right now with the new models is like you know maybe this could be the thing that could pass your qualifying exams when you're a PhD student"

https://youtu.be/b_Xi_zMhvxo?t=98

bought Ṁ1,000 NO from 49% to 45%

Obviously not. They just delayed the voice mode for GPT-4o.

opened a Ṁ68 YES at 36% order

GPT-4 finished training in August 2022 but was only released on March 14, 2023, a gap of 7 months during which they had the model internally and were safety testing it. Training GPT-4 took around 5 months, which puts the full development process at about one year. Assuming GPT-5 began training the same day the screenshot was posted (probably not the case, but close enough for an estimate), that places the public release of GPT-5 later than May 28, 2025. I say later than because GPT-5 would probably take longer to train: there haven't been any major processor advancements, and GPT-5 will obviously be a larger model than GPT-4.
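The timeline estimate above can be sketched out like this (the August 1 training-completion date and the May 28 screenshot date are assumptions for concreteness; only months were given):

```python
from datetime import date, timedelta

# Rough GPT-4 timeline, using the figures from the comment above:
training_done = date(2022, 8, 1)                 # training finished ~August 2022
public_release = date(2023, 3, 14)               # GPT-4 public release
safety_testing = public_release - training_done  # ~7 months of internal testing
training_time = timedelta(days=5 * 30)           # ~5 months of training

dev_cycle = training_time + safety_testing       # ~1 year end to end

# Assume GPT-5 training started the day the screenshot was posted:
gpt5_training_start = date(2024, 5, 28)
gpt5_estimate = gpt5_training_start + dev_cycle
print(gpt5_estimate)  # 2025-06-07, well past the market's Jan 1, 2025 deadline
```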

bought Ṁ50 NO at 53%

there haven't been any major processor advancements

This doesn’t seem true

Not much changes in the world of processors in a year anymore, since we're already at 3 nanometers and quantum tunneling prohibits making a transistor much smaller. I'm not saying that processors have stopped getting better; a manufacturer can still introduce architecture improvements and other tricks, but performance won't improve by an order of magnitude. OpenAI can always allocate more compute to training GPT-5, but there isn't much reason to do that, as there is already a lot.

What’s your alternative explanation for why most LLMs are many times faster than they were a year or 2 ago?

More compute

do you consider processors being cheaper an advancement?

Yes, for the most part, though not significantly. If they got much more efficient it would be great, because I assume that energy costs are high for LLMs.

I would bet against “performance won’t increase by an order of magnitude.” We are not that close to what’s physically optimal - nowhere close to as far as computronium can go.

The current smallest widely adopted transistors are on a 3-nanometer process, and it is possible to go down to around 1 nanometer; quantum tunneling becomes a significant issue below that. A 3x linear shrink implies roughly a 9x increase in transistor density for the same area. Transistor size is not a definitive indicator of potential performance, but it can provide a general insight. All of this assumes that a major scientific breakthrough doesn't occur.
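Taking the node labels at face value for a moment, the scaling math is worth spelling out, since density grows with the square of the linear shrink:

```python
# Back-of-envelope scaling, taking the "3 nm" label at face value:
current_nm = 3.0
floor_nm = 1.0   # rough limit before quantum tunneling dominates

linear_shrink = current_nm / floor_nm  # 3x shrink in each dimension
density_gain = linear_shrink ** 2      # transistors per area scale with the square
print(density_gain)                    # 9.0, i.e. just under an order of magnitude
```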

process names are almost all marketing and detached from actual gate size, gate size is 20-30nm

wait whaaat

So why do they market it as 3nm?

Because that's the next size category in the marketing hype machine.

The reason LLMs are faster is that changes have been made to the algorithms: the attention mechanism is used to skip or simplify calculations that are considered less important.

Which does mean that making them faster makes them worse; they just hope the benefit of the speed increase outweighs the loss due to simplifying the computations.

It has little or nothing to do with processor speeds.

Do you think it’ll be released after 2024?

Either that or it won't be much better than current ones. So probably after 2024.

"So why do they market it as 3nm?" Because it's "equivalent" to the size that would be required with 2D transistors to achieve the same performance. At least, that was the reason for the names in the first place; now it's marketing, a way of identifying each new generation.

I opened this market to bet on future chip advances: https://manifold.markets/Paul/will-ai-accelerators-improve-in-flo?r=UGF1bA
