If Artificial General Intelligence has a poor outcome, what will be the reason?
86%: Someone finds a solution to alignment, but fails to communicate it before dangerous AI gains control.
80%: Someone successfully aligns AI to cause a poor outcome.
75%: Something from Eliezer's list of lethalities occurs.
25%: Alignment is impossible.

Inverse of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6.

This market will not resolve.

It exists primarily for users to explore particular lethalities.

Please add responses.

"poor" = human extinction or mass human suffering
