Are LLMs capable of reaching AGI?
72% chance

This resolves YES if there exists an architecture that would unambiguously count as both an LLM and AGI, and could be trained and run on all the world's computing power combined as of market creation.

This market resolves after there's a broad consensus as to the correct answer, which likely won't be until after AGI has been reached and humanity has a much better conceptual understanding of what intelligence is and how it works. In the event of disagreements over what constitutes an LLM or AGI, I'll defer to a vote among Manifold users.

(In order to count as an AGI, it needs to be usefully intelligent. If it would take 1000 years to answer a question, that doesn't count.)

(Note that there are two forms of non-predictive bias at play here. If your P(doom) is high, you'll value mana lower in worlds where LLMs can reach AGI, since we're more likely to die in those worlds than if we don't obtain AGI until much later. But if your P(doom) is low, this market probably resolves sooner if the answer is YES, so due to your discount rate there's a bias towards betting on YES.)


could be trained and run on all the world's computing power combined as of market creation

Given arbitrary training data?

@MartinRandall imo giving it training data like "these are the thousand shortest ways to create an AGI" would not make the LLM itself an AGI.

What hypothetical data do you have in mind?

bought Ṁ100 YES

does this count LMMs (large multimodal models) like GPT-4o as LLMs?

i.e. is the question more "are autoregressive transformers capable of reaching AGI?" or "is the transformer architecture capable of reaching AGI?" (including things like Sora)

This would probably never resolve NO.

How does this question resolve if the architecture uses LLMs as the crucial subcomponent behind its intelligence, but the overall architecture nonetheless isn't an LLM? Specifically I'm thinking of agentic systems like AutoGPT, which have a state-machine architecture with explicitly coded elements like short-term and long-term memory, but use LLMs to form (natural language) plans and decide which state transitions should be made. If these systems become AGI when LLMs are scaled up, how does the question resolve?

What counts as AGI here? Is it sufficient for it to do all text-based tasks as well as the average human?

hmm, what if I implement a dovetail by tweaking the weights of a transformer architecture and clocking it with a loop? then it implements all programs simultaneously, including AGIs.
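The dovetail idea above can be sketched in the abstract: interleave the execution of every program so that each one eventually gets unboundedly many steps. This is a toy Python model of that scheduling trick, not of a transformer; the generator-based `programs` and the round counts are illustrative assumptions only.

```python
def dovetail(programs, rounds):
    """Toy universal dovetailer: in round k, admit program k and give one
    step to every program admitted so far. Each program eventually receives
    unboundedly many steps, so any program that halts with an answer will
    eventually produce it. Programs are modeled as generator factories that
    yield None per step and yield a value when they finish."""
    running = []   # generators currently being interleaved
    outputs = []   # values produced by programs that finished
    for k in range(1, rounds + 1):
        if k <= len(programs):
            running.append(programs[k - 1]())  # admit program k
        for gen in running:
            try:
                result = next(gen)             # one step for this program
                if result is not None:
                    outputs.append(result)
            except StopIteration:
                pass                           # this program already halted
    return outputs
```

The point of the scheduling order (program k joins at round k, then runs one step per round) is that no single slow program can starve the others, which is why "it implements all programs simultaneously" is coherent, just astronomically slow, as the replies below note.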

@Mira Each sub-program may be an LLM, but I think you'd be hard-pressed to say that the overarching one is. Also, it would be too slow to qualify as an AGI. Same problem faced by the computable variations of AIXI.

@IsaacKing Oh no, I meant a single model frozen and unchanging during the whole process, which when clocked implements a universal dovetail. So there would be only one program.

But it would take more than 1000 years to destroy humanity, so your update wouldn't count it...

predicts YES

@Mira Oh, I see. Yeah that's not what I had in mind, so I've edited the description to fix that.

@IsaacKing Also Mira's proposal would not work in the real world, not even after 1000 years. The machinery / memory / whatever would fail long before anything intelligent happened.

© Manifold Markets, Inc.