
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
77% chance
This question is managed and resolved by Manifold.
Comments
@jonsimon Neither matters. What this market cares about is "was the probability they placed on the world being destroyed by AI justified by the evidence they had at the time?"
@IsaacKing Whose probability/concern needs to be justified? Laypeople? Computer scientists? Computer scientists who responded to the AI Impact survey? Existential safety advocates / the AI existential risk community? Eliezer Yudkowsky?
I mainly ask because I think the probabilities of, say, extinction would range from something like 5% (maybe laypeople and computer scientists) to 50% (average existential safety advocate) to >99.9% (Yudkowsky).
Related questions
If humanity survives to 2100, what will experts believe was the correct level of AI risk for us to assess in 2023?
35% chance
Public opinion, late 2025: Out-of-control AI becoming a threat to humanity, a real threat?
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
30% chance
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
66% chance
Will AI be considered safe in 2030? (resolves to poll)
72% chance
Will >90% of Elon re/tweets/replies on 19 December 2025 be about AI risk?
10% chance
Will humanity wipe out AI x-risk before 2030?
10% chance
In 2050, what will be the most accurate statement about the control of AI?
By end of 2028, will AI be considered a bigger x risk than climate change by the general US population?
56% chance
Will there be a massive catastrophe caused by AI before 2030?
31% chance