Will an LLM Built on a State Space Model Architecture Have Been SOTA at any Point before EOY 2027? [READ DESCRIPTION]
43% chance
I don't mean "achieves SOTA on one benchmark" or "is the best FOSS model"; I mean "is the equivalent of what GPT-4 is right now".
The SSM must be in contention for the position of most generally capable LLM. I will not trade in this market because the resolution condition isn't entirely objective.
This question is managed and resolved by Manifold.
@HanchiSun I think he means something like Mamba: https://arxiv.org/pdf/2312.00752.pdf
They are vaguely related to RNNs, though.
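For intuition on the RNN comparison, here is a minimal sketch of the discretized linear state space recurrence (h_t = A h_{t-1} + B x_t, y_t = C h_t) that models like Mamba build on. This is not Mamba itself: Mamba adds input-dependent (selective) parameters and a hardware-aware parallel scan; the matrices and toy values below are illustrative assumptions, only meant to show the RNN-like state update.

```python
import numpy as np

def ssm_recurrence(x, A, B, C):
    """Run a single-channel linear state space model over a sequence.

    x: (T,) input sequence; A: (N, N) state matrix; B: (N,) input map; C: (N,) output map.
    Returns y: (T,) outputs.
    """
    h = np.zeros(A.shape[0])
    y = np.empty(len(x), dtype=float)
    for t, x_t in enumerate(x):   # sequential form; linear SSMs also admit a convolution/parallel-scan view
        h = A @ h + B * x_t       # state update, structurally like an RNN hidden state (but linear)
        y[t] = C @ h              # readout
    return y

# toy usage with made-up parameters
rng = np.random.default_rng(0)
N = 4
A = 0.9 * np.eye(N)               # stable toy state matrix
B, C = rng.normal(size=N), rng.normal(size=N)
print(ssm_recurrence(rng.normal(size=8), A, B, C))
```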
Related questions
Will we have any progress on the interpretability of State Space Model LLMs in 2024?
71% chance
Will an LLM be able to solve confusing but elementary geometric reasoning problems in 2024? (strict LLM version)
25% chance
Will any LLM released by EOY 2025 be dangerously ASL-3 as defined by Anthropic?
48% chance
Will there be an LLM which can do fluent conlang translations by EOY 2024?
72% chance
Will a lab train a >=1e26 FLOP state space model before the end of 2025?
22% chance
Will any foundation models/LLMs be able to reliably come up with novel unparalleled misalignments before EOY 2024?
90% chance
Will MCTS methods be used by any frontier LLM by EOY 2024?
25% chance
Will any LLM released by EOY 2024 be dangerously ASL-3 as defined by Anthropic?
7% chance
What will be true of Anthropic's best LLM by EOY 2025?
What will be true of OpenAI's best LLM by EOY 2025?