Will LLMs be worse than human level at forecasting when they are superhuman at most things?
Ṁ3,712,030 · 45% chance
Resolves to my personal opinion, unless @SemioticRivalry disagrees.
Feel free to ask in the comments for clarification about what is meant, but I'll likely keep the definitions vague so the market resolves to the spirit of the question, since overly precise definitions might take away from that.
@Joshua We die in most timelines ~~where this resolves no~~
Related questions
Will LLMs be better than typical white-collar workers on all computer tasks before 2026?
27% chance
At the beginning of 2028, will LLMs still make egregious common-sensical errors?
56% chance
Will the most interesting AI in 2027 be a LLM?
40% chance
Will Google have a better LLM than OpenAI by 2025?
35% chance
Will LLMs' loss function achieve the level of entropy of human text by the end of 2030?
61% chance
Will an open-source LLM on Hugging Face beat an average human at the most common LLM benchmarks by July 1, 2024?
74% chance
Will LLMs mostly overcome the Reversal Curse by the end of 2025?
64% chance
Are LLMs capable of reaching AGI?
79% chance
Will an LLM improve its own ability along some important metric well beyond the best trained LLMs before 2026?
58% chance
Will there be any simple text-based task that most humans can solve, but top LLMs can't? By the end of 2026
64% chance