Will async SGD become the main large-scale NN optimization method before 2031?
72% chance
Resolves as YES if asynchronous stochastic gradient descent becomes the dominant method for large-scale neural network optimization before January 1st, 2031.
Related questions
By EOY 2026, will it seem as if deep learning hit a wall by EOY 2025? (24% chance)
10GW AI training run before 2029? (50% chance)
By 2027 will Adam be replaced by a novel optimization algorithm? (60% chance)
Will there be another major public-facing breakthrough in AI before December 31, 2024? [subjective - 1000M boost added] (67% chance)
Will Adam optimizer no longer be the default optimizer for training the best open source models by the end of 2026? (61% chance)
1GW AI training run before 2027? (40% chance)
Will software-side AI scaling appear to be suddenly discontinuous before 2025? (18% chance)
Will we see the emergence of a 'super AI network' before 2035? (72% chance)
100GW AI training run before 2031? (17% chance)
At least one of the most powerful neural nets at end of 2026 will be trained using 10^27 FLOPs (77% chance)