I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
59% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high profile".
This question is managed and resolved by Manifold.
Related questions
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
59% chance
Will I (co)write an AI safety research paper by the end of 2024?
45% chance
Will Dan Hendrycks believe xAI has had a meaningful positive impact on AI alignment at the end of 2024?
23% chance
Will a leading AI organization in the United States be the target of an anti-AI attack or protest by the end of 2024?
30% chance
Will there be a noticeable effort to increase AI transparency by 2025?
50% chance
Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
5% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will a large-scale, Eliezer-Yudkowsky-approved AI alignment project be funded before 2025?
5% chance
Will a large scale, government-backed AI alignment project be funded before 2025?
9% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?