Will a model be trained using at least as much compute as GPT-3 using AMD GPUs before Jan 1 2026?
84% chance
This question resolves YES if it is public knowledge that an ML model has been trained using more than 3.14E+23 FLOPs entirely on AMD GPUs (or, hypothetically, other ML accelerators produced by AMD in the future). Resolution is based on announcement time; if the model is trained before the deadline but only announced later, this resolves NO.
If the trained model is substantially worse than a model trained with that much compute should be, it does not count toward resolution (e.g., a language model whose performance on standard benchmarks is only comparable to that of a model trained with 1/10th the compute). This is mostly intended to exclude failed attempts.
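For context on where the 3.14E+23 figure comes from: a common rule of thumb (not part of the question text, so treat it as an assumption here) is that dense transformer training costs roughly 6 FLOPs per parameter per token. The sketch below applies that heuristic and shows that GPT-3's reported ~175B parameters and ~300B training tokens land almost exactly on the threshold.

```python
# Minimal sketch, assuming the standard 6 * N * D approximation for
# dense transformer training compute. The threshold value itself
# (3.14e23 FLOPs) is taken from the question above.

GPT3_FLOPS_THRESHOLD = 3.14e23  # resolution threshold from the question


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens


def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """Would a run of this size satisfy the compute criterion?"""
    return training_flops(n_params, n_tokens) > GPT3_FLOPS_THRESHOLD


if __name__ == "__main__":
    # GPT-3: ~175B parameters trained on ~300B tokens
    flops = training_flops(175e9, 300e9)
    print(f"GPT-3 estimate: {flops:.2e} FLOPs")  # ~3.15e23
    print("Crosses threshold:", crosses_threshold(175e9, 300e9))
```

Under this approximation, any AMD-only run of comparable scale (say, a smaller model trained on proportionally more tokens) would qualify, subject to the question's quality caveat above.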
This question is managed and resolved by Manifold.
GPT-3 isn't that big, the tinygrad work seems promising, and we have all of 2025.
Related questions
Will there be a model that has a 75% win rate against the latest iteration of GPT-4 as of January 1st, 2025?
62% chance
Will there be an LLM (as good as GPT-4) that was trained with 1/100th the energy consumed to train GPT-4, by 2026?
53% chance
Will $10,000 worth of AI hardware be able to train a GPT-3 equivalent model in under 1 hour, by EOY 2027?
18% chance
Will any open-source model achieve GPT-4 level performance on MMLU through 2024?
83% chance
How much compute will be used to train GPT-5?
Will a GPT-3 quality model be trained for under $10,000 by 2030?
82% chance
Will it cost less than 100k USD to train and run a language model that outperforms GPT-3 175B on all benchmarks by the end of 2024?
85% chance
Will it be possible to disentangle most of the features learned by a model comparable to GPT-3 this decade? (1k subsidy)
56% chance
Will a GPT-3 quality model be trained for under $1,000 by 2030?
76% chance
Before 2028, will anyone train a GPT-4-level model in a minute?
15% chance