Will Aidan McLau's claim that very large models are "refusing instruction tuning" be validated by 2030?
59% chance
https://x.com/aidan_mclau/status/1859444783850156258

According to Aidan McLau, the reason very large models are not being released is that these models are resisting instruction tuning. Resolves YES if a current or former AI researcher at Google, OpenAI, Anthropic, or Meta validates this claim, or if it is independently confirmed by research.
This question is managed and resolved by Manifold.
Related questions
Will any 10 trillion+ parameter language model that follows instructions be released to the public before 2026?
54% chance
Will a model costing >$30M be intentionally trained to be more mechanistically interpretable by end of 2027? (see desc)
57% chance
Will models be able to do the work of an AI researcher/engineer before 2027?
40% chance
AI: Will someone train a $1B model by 2025?
67% chance
AI: Will someone train a $10T model by 2100?
57% chance
AI: Will someone open-source a $100M model by 2025?
60% chance
AI: Will someone train a $1T model by 2080?
62% chance
100GW AI training run before 2031?
46% chance
By March 14, 2025, will there be an AI model with over 10 trillion parameters?
62% chance
Will an AI model outperform 95% of Manifold users on accuracy before 2026?
56% chance