MANIFOLD
AI Doom
Vox — The case for taking AI seriously as a threat to humanity: Why some people fear AI, explained.
Manifold AI (Mini) — P(Doom) - Dario Amodei
25 | #Technology #AI Doom #AI Alignment | 16 traders | Ṁ1k
Manifold AI (Mini) — P(Doom) - Emmett Shear
36 | #AI Alignment #AI Safety #Technology | 7 traders | Ṁ520
Manifold AI (Mini) — P(Doom) - Yoshua Bengio
35 | #AI #Technology #AI risk | 10 traders | Ṁ1k
Nathan Young — Will an AI related disaster kill a million people or cause $1T of damage before 2070?
32% chance | #AI Doom #AI #AI risk | 51 traders | Ṁ1k
Nathan Nguyen — By end of 2028, will AI be considered a bigger x risk than climate change by the general US population?
41% chance | #AI #Climate #Politics | 227 traders | Ṁ2.1k
Eliezer Yudkowsky (Plus) — By the end of 2026, will we have transparency into any useful internal pattern within a Large Language Model whose semantics would have been unfamiliar to AI and cognitive science in 2006?
43% chance | #Technical AI Timelines #AI Alignment #Mechanistic interpretability | 714 traders | Ṁ20k
Mira — Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
18% chance | #AI #OpenAI #Science | 269 traders | Ṁ2.7k
Zvi Mowshowitz — By EOY 2026, will it seem as if deep learning hit a wall by EOY 2025?
25% chance | #Technology #AI | 158 traders | Ṁ1.7k
Matthew Barnett — Will mechanistic interpretability be essentially solved for GPT-2 before 2030?
28% chance | #AI #Mechanistic interpretability | 92 traders | Ṁ1.3k
Eliezer Yudkowsky — Will I think all AI hell broke loose in 2024?
6% chance | #AI Impacts #AI | 153 traders | Ṁ1.4k
MIT Technology Review — Now we know what OpenAI’s superalignment team has been up to: The firm wants to prevent a superintelligence from going rogue. This is the first step.
Primer — Will OpenAI achieve "very high level of confidence" in their "Superalignment" solutions by 2027-07-06?
5% chance | #AI #OpenAI #AI Alignment | 68 traders | Ṁ1k
jcb — Will Superalignment succeed, according to Eliezer Yudkowsky?
NO | #AI #AI Alignment | 108 traders | Ṁ3.3k
AdamK — Will OpenAI's Superalignment project produce a significant breakthrough in alignment research before 2027?
NO | #OpenAI #AI Alignment #Alignment Research Agendas | 163 traders | Ṁ2.2k