Will AGI undergo a hard take-off?
19% chance

Hard take-off is defined as rapid self-improvement, in a matter of hours or days, such that it quickly becomes superhuman at every cognitive task. I would consider it a soft takeoff if it takes more than 2 months.

bought Ṁ30 NO

As soon as we reach the AGI threshold? Nah

@MalachiteEagle Within 2 months of reaching AGI.

If in that time it improves to the level of "ASI", defined as:
at least human-level at any cognitive task an average human can do, and, per single instance, a higher IQ than any human alive, which at the moment seems to be around 276.

If anyone has a better metric, feel free to share it.
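A rough sanity check on that number: the "higher IQ than any human alive" bar can be estimated from order statistics. This is a minimal sketch, assuming the standard deviation-IQ scale (mean 100, SD 15) and roughly 8 billion people; the 276 figure presumably comes from a wider SD or a ratio-based scale.

```python
from statistics import NormalDist

# Toy estimate of the highest IQ in a population of ~8 billion,
# assuming scores are i.i.d. Normal(mean=100, sd=15).
population = 8_000_000_000
mean, sd = 100, 15

# The expected maximum of n i.i.d. samples sits near the quantile
# with tail probability 1/n.
z = NormalDist().inv_cdf(1 - 1 / population)
print(f"expected max IQ ~ {mean + sd * z:.0f}")  # roughly 195
```

On that scale the world maximum lands near 195, so a 276 threshold only makes sense under a different scoring convention (e.g., an SD closer to 28).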

@MarioCannistra I think for superintelligence it should demonstrate a capability that no human possesses, which delivers significant economic value, in addition to being able to do all economically valuable digital work that humans can perform.

@MarioCannistra like coming up with multiple transformative algorithms, new physics and engineering, pharmaceutical designs...

@MarioCannistra I don't think those will occur within 2 months of getting AGI

@MalachiteEagle That's one potential definition. I think they're pretty close, once you get an AGI that has a greater IQ than the smartest human, and is "controllable", meaning it does what we ask it (as opposed to the smartest human), and we can instantiate it multiple times, I think it will deliver significant economic value soon after, and will demonstrate capabilities that no human possesses (which current models already do).

@MarioCannistra Yes, that sounds correct. However, I think as we get closer to AGI, the common definition of ASI will become a bar that gets raised quite high. Right now the term is quite nebulous. Twelve months vs. two months of progress post-AGI is likely to produce altogether different outputs.

I think the hardware bottleneck is too big for a real hard takeoff.

predicts YES

@SophusCorry I think there is no bottleneck at all, because current methods are extremely inefficient. Once we get the first AGI that figures out a better way to rewrite itself and its subsequent iterations, we get a hard takeoff.

predicts NO

@MarioCannistra within hours?

predicts YES

@SophusCorry Hours or days, with an upper limit of 2 months. After that, I consider it a soft takeoff.

We're talking about AGI already, which by definition would be at least as capable as any human, but with the advantages of not needing rest and being parallelizable, since you can run multiple instances of it. Presumably it wouldn't be exactly at the "human level" threshold for every task, but at least at it, and for some tasks it would be better.

Given all that, I think there is a good chance that it would quickly be able to improve its own code iteratively, or even rewrite it completely, as soon as it is run for the first time.
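For intuition on how fast that kind of iterative self-improvement has to compound to count as a hard takeoff under the 2-month limit, here's a toy model (all numbers made up for illustration, not a claim about real systems): capability grows at a rate proportional to capability**p, and whether returns accelerate (p > 1) or diminish (p <= 1) is what separates a blow-up within days or weeks from gradual improvement.

```python
# Toy recursive self-improvement model (purely illustrative).
# dC/dt = rate * C**p:
#   p > 1  -> accelerating returns, capability blows up in finite time ("hard takeoff")
#   p <= 1 -> diminishing or constant returns, growth stays gradual ("soft takeoff")
def simulate(p, capability=1.0, rate=0.05, dt=0.01, horizon_days=60):
    t = 0.0
    while t < horizon_days:
        capability += rate * capability**p * dt
        t += dt
        if capability > 1e6:  # arbitrary "vastly superhuman" cutoff
            return f"p={p}: cutoff reached after {t:.1f} days"
    return f"p={p}: capability {capability:.1f} after {horizon_days} days"

for p in (0.8, 1.0, 1.6):
    print(simulate(p))
```

Under these made-up parameters only the p = 1.6 run crosses the cutoff inside the 60-day window; the point is just that the hard/soft distinction hinges on whether each improvement makes the next one easier, which is where the hardware-bottleneck disagreement above lands.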

predicts NO

@MarioCannistra after 2 days I don't consider it a hard takeoff anymore, but I see now you put that limit in the description