If AGI has an okay outcome, will there be an AGI singleton?
45% chance · Ṁ5,882,101
An okay outcome is defined in Eliezer Yudkowsky's market as:
An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This resolves YES if I can easily point to the single AGI that has an okay outcome, and NO otherwise.
Related questions
Will we get AGI before 2026? (23% chance)
If Artificial General Intelligence has an okay outcome, what will be the reason?
Will we get AGI before 2030? (53% chance)
If Artificial General Intelligence has an okay outcome, what will be the reason?
Will we get AGI before 2029? (44% chance)
Will we get AGI before 2032? (66% chance)
Which company will achieve the "weak AGI"?
Will we get AGI before 2031? (62% chance)
Will we reach "weak AGI" by the end of 2025? (28% chance)
Will we get AGI before 2035? (69% chance)