Will we have an AGI as smart as a "generally educated human" by the end of 2025?
223 traders · Ṁ51k · closes 2026 · 46% chance

In a recent Dwarkesh podcast interview, Dario Amodei, the head of Anthropic, claims that an AGI as smart as a "generally educated human" could be as little as 2-3 years away.

Relevant video: https://youtu.be/Nlkk3glap_U?si=2JyHLqMSxIxdakR6

At the end of 2025 (roughly 2.5 years), how well will this claim hold up?

I will make a poll at market close and ask the following:

"Does AGI as smart as a "generally educated human" currently exist?"

The poll will stay up for 1 week.

This market resolves YES if the poll resolves "YES".
Otherwise this market will resolve NO.

Sort by:
bought Ṁ20 NO

Buying NO, not because it won't happen, but rather because too few will admit it.

@eyesprout yeah, I'm wondering a lot about that one. Maybe there's something we haven't seen yet where Anthropic or another lab will add a reward to their models to encourage them to convince the user that they're smart. As in, the interaction almost immediately convinces the user that the model is much smarter than they are. Current models seem arguably neutral in that regard.

@eyesprout it feels qualitatively different to interact with an LLM than it does with a human who's trying to convince you that they're smart. It sounds plausible that this is a direction that it's quite easy to optimize the model towards

I don't think the word "smart" means anything specific anymore. The only fair answer to this question is that AI outperforms average humans on some tasks while failing completely on others.

sold Ṁ18 YES

@ProjectVictory yeah, I ask o1 some things and the answers are so far beyond the average human "smartness"

If there were any objective standard this question would have already resolved yes

bought Ṁ100 YES

@stardust Apparently @SteveSokolowski already uses o1 as a lawyer and a doctor -- if that's not impressive I don't know what is.

This question is flawed - as we’ve seen, conversation skill is neither necessary nor sufficient for AGI.

There are two things being conflated here:

  • Will AIs become good conversation partners?

  • Will the average Manifolder then call that “AGI” in a poll?

bought Ṁ50 NO

Will it be contextualized by Dario's specific claim? Or just the question "Does AGI as smart as a "generally educated human" currently exist?" by itself?

Dario said: "In terms of someone looks at the model and even if you talk to it for an hour or so, it's basically like a generally well educated human, that could be not very far away at all. I think that could happen in two or three years."

It's not real AGI until it turns me into paperclips.

Would love to see the same question for multiple years

I predict that AI made at the end of 2025 would be called AGI by us today, but that at the end of 2025 we'll say that it's not true AGI.

Seconding Lorxus here. This all comes down to how you define AGI -- were I in charge I'd already have resolved "yes".

I feel like the big "if" of this market is the AGI part. That's a very big bar to clear!

The bar for a "generally educated human" can vary a lot from country to country, by how populous that country is, etc. What do you have in mind for this market? Does a person who graduated high school qualify as a generally educated human?

@firstuserhere Median US citizen IMO, but since this resolves to a poll, it will be up to interpretation.

Currently there are many simple text-based tasks that most humans can solve, but top LLMs can't.

For as long as that's true, I believe the result should be NO.


These two markets are about that, and the current probability (24%) seems to somewhat align with these markets: 4% by the end of 2025, 33% by the end of 2026.

Hmmm I am confident this will not be true but I'm not sure I trust a poll.

@Joshua I suspect the poll will model reality, but could always have a related market about that fact!

Made a poll to measure people's current opinions:

© Manifold Markets, Inc.