By EOY 2025, will the model with the lowest perplexity on Common Crawl not be based on transformers?
27% chance
If perplexity on Common Crawl is not available for models, I will use other benchmarks as a surrogate. This will inherently be a judgement process. If a model has not been announced by EOY 2025 and no benchmarks have been posted publicly, it will not be counted for the purpose of this market.
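For readers unfamiliar with the headline metric: perplexity is the exponential of the mean negative log-likelihood a model assigns to held-out tokens, so lower is better. A minimal sketch, using made-up token probabilities rather than any real Common Crawl measurement:

```python
import math

def perplexity(log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token.

    `log_probs` are natural-log probabilities the model assigned to each
    token of a held-out corpus (illustrative values only).
    """
    nll = -sum(log_probs) / len(log_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token has perplexity ~4:
# it is, on average, as uncertain as a uniform choice among 4 tokens.
print(perplexity([math.log(0.25)] * 10))
```

Note that published perplexities are only comparable under the same tokenizer and evaluation corpus, which is part of why resolution may need surrogate benchmarks.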
"Based on transformers" for the purpose of this question will be anything with multi-headed self-attention that feeds into an MLP.
This question is managed and resolved by Manifold.
@ConnorMcCormick oh yeah, that's definitely confusing people. Well, better for us who do understand it :)
@jacksonpolack The API only refreshes the data every 15 seconds, so if you're quick on the draw, it's totally doable.
Related questions
When will a non-Transformer model become the top open source LLM?
Who will have the best Text-to-Image Model at the end of 2024 (as decided by the Artificial Analysis Leaderboard)?
Will openAI have the most accurate LLM across most benchmarks by EOY 2024?
37% chance
By EOY 2026, will it seem as if deep learning hit a wall by EOY 2025?
25% chance
Will Transformer based architectures still be SOTA for language modelling by 2026?
70% chance
Will any image model available to the public be able to generate arbitrary non-adversarial text before 2025?
59% chance
Which of these companies will release a model that thinks before it responds like O1 from OpenAI by EOY 2024?
On January 1, 2027, a Transformer-like model will continue to hold the state-of-the-art position in most benchmarks
54% chance
Will the Jan 2024 version of the LLM detector "Binoculars" be effective against OpenAI's best model at end 2024?
59% chance
Will a new deep learning paradigm replace the transformer by the end of 2024?
8% chance