Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
Resolves yes if, in my judgement, that’s the reason for the award, even if those exact terms don’t appear in the official announcement.
If I’m not around, someone else can resolve it in that spirit.
Past winners and rationales: https://en.wikipedia.org/wiki/Turing_Award
This question is managed and resolved by Manifold.
Related questions
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
59% chance
Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
5% chance
Will OpenAI announce a major breakthrough in AI alignment in 2024?
10% chance
Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
11% chance
Will an AI get a Nobel Prize before 2050?
29% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will a large-scale, Eliezer-Yudkowsky-approved AI alignment project be funded before 2025?
5% chance
Before 2030, will an AI complete the Turing Test in the Kurzweil/Kapor Longbet?
53% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will a very large-scale AI alignment project be funded before 2025?
9% chance