What will be the median p(doom) of AI researchers after AGI is reached?
Above 5% — 83%
Above 10% — 73%
Above 20% — 62%
Above 50% — 19%
Above 80% — 10%
AGI defined as an AI that is better at AI research than the average human AI researcher not using AI.
p(doom) defined as human extinction or outcomes that are similarly bad.
In Katja Grace's 2022 survey, median values were 5% for "extremely bad outcome (e.g., human extinction)" and 5–10% for human extinction.
All answers that are true resolve Yes.
This question is managed and resolved by Manifold.
Related questions
What will be the average P(doom) of AI researchers in 2025?
20% chance
Will we reach "weak AGI" by the end of 2025?
23% chance
Will OpenAI be in the lead in the AGI race end of 2026?
43% chance
Will we have at least one more AI winter before AGI is realized?
38% chance
If a large, good survey of AI engineers in the United States is run, what will be the average p(doom) within 10 years?
14% chance
Doom if AGI by 2040?
58% chance
Will Paul Christiano publicly announce a greater than 10% increase in his p(doom | AGI before 2100) within the next 5 years?
44% chance
In what year will we achieve AGI?
The probability of "extremely bad outcomes e.g., human extinction" from AGI will be >5% in next survey of AI experts
73% chance
The probability of extremely good AGI outcomes (e.g., rapid human flourishing) will be >24% in next AI experts survey
54% chance