Will we have any progress on the interpretability of State Space Model LLMs in 2024?
Ṁ312 · Jan 1
71% chance
State Space Models like Mamba introduce new possibilities for interpretability: the State is a new object type, a compressed snapshot of a mind at a point in time that can be saved, restored, and interpreted. But a cursory search didn't turn up any work on interpreting either States or State Space Models.
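The save/restore property described above can be illustrated with a toy linear state-space recurrence. This is a minimal sketch, not Mamba's actual selective-scan architecture; all matrices and dimensions here are illustrative assumptions. The point is that the hidden state fully determines future behavior, so snapshotting it mid-sequence and resuming from the copy reproduces the original run exactly:

```python
import numpy as np

# Toy linear SSM step: h' = A @ h + B @ x, output y = C @ h'.
# The hidden state h is the "compressed snapshot" of everything
# the model has seen so far.
def ssm_step(h, x, A, B, C):
    h = A @ h + B @ x
    return h, C @ h

rng = np.random.default_rng(0)
d_state, d_in = 4, 2
A = 0.9 * np.eye(d_state)             # stable toy dynamics (illustrative)
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(1, d_state))
xs = rng.normal(size=(6, d_in))       # a short input sequence

# Run the whole sequence in one pass.
h = np.zeros(d_state)
full_outputs = []
for x in xs:
    h, y = ssm_step(h, x, A, B, C)
    full_outputs.append(y)

# Run again, but save the state halfway through...
h = np.zeros(d_state)
for x in xs[:3]:
    h, _ = ssm_step(h, x, A, B, C)
snapshot = h.copy()                   # the saved "mind at a point in time"

# ...then restore it and resume.
h2 = snapshot
resumed_outputs = []
for x in xs[3:]:
    h2, y = ssm_step(h2, x, A, B, C)
    resumed_outputs.append(y)

# Resuming from the snapshot reproduces the original run exactly.
assert np.allclose(full_outputs[3:], resumed_outputs)
```

Because the state is a single fixed-size vector rather than a growing attention cache, it is a natural target for interpretability work: probes or feature analyses can be applied to `snapshot` directly.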
This resolves Yes if research is published that makes significant interpretability progress on a state space large language model. I will not bet on this market.
Related questions
By the end of 2026, will we have transparency into any useful internal pattern within a Large Language Model whose semantics would have been unfamiliar to AI and cognitive science in 2006?
30% chance
Will an LLM Built on a State Space Model Architecture Have Been SOTA at any Point before EOY 2027? [READ DESCRIPTION]
43% chance
Will mechanistic interpretability be essentially solved for GPT-2 before 2030?
29% chance
Will there be an open source LLM as good as GPT4 by the end of 2024?
68% chance
By 2028 will we be able to identify distinct submodules/algorithms within LLMs?
75% chance
[Situational awareness] Will pre-2026 LLMs achieve token-output control?
30% chance
Will there be a gpt-4 quality LLM with distributed inference by the end of 2024?
28% chance
Will the most interesting AI in 2027 be a LLM?
40% chance
[Situational awareness] Will pre-2028 LLMs achieve token-output control?
38% chance
Will a lab train a >=1e26 FLOP state space model before the end of 2025?
25% chance