In 2025, what 2019-2022 work of AI safety will I think was most significant?
Ṁ328 · Jan 1
18%
15% Eliciting Latent Knowledge https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge
14% Discovering Agents https://www.alignmentforum.org/posts/XxX2CAoFskuQNkBDy/discovering-agents
6% Risks from Learned Optimization in Advanced Machine Learning Systems https://arxiv.org/abs/1906.01820
6% Constitutional AI: Harmlessness from AI Feedback https://www.anthropic.com/constitutional.pdf
5% Mechanistic Anomaly Detection https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk
5% Infra-Bayesian Physicalism https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized
3% The Sharp Left Turn https://www.alignmentforum.org/s/v55BhXbpJuaExkpcD/p/GNhMPAWcfBCASy8e6
2% 2022 MIRI Alignment Discussion https://www.alignmentforum.org/s/v55BhXbpJuaExkpcD
1.8% Other Not Listed Here
1.3% Optimal Policies Tend to Seek Power https://arxiv.org/abs/1912.01683
Works to be considered include arXiv papers first appearing in this time window, LessWrong posts, and paper-like posts (mainly to include Anthropic papers). The time window is inclusive of both 2019 and 2022. 'Significant' here means the work that contributed the most to progress on AI alignment and AI safety. This is obviously very subjective.
If I were to answer this question for papers from 2016-2019, possible answers would have included, among others, 'AI Safety via Debate' and 'The Off-Switch Game'.
This question is managed and resolved by Manifold.
@JacobPfau For the purposes of this question I'll include the associated arXiv paper under the "Mechanistic Anomaly Detection" option.
Related questions
Where will the next major breakthrough in AI originate from before 2025?
Will OpenAI become notably less pro AI safety by start of 2025 than at the start of 2024? (69% chance)
Will I still consider improving AI X-Safety my top priority on EOY 2024? (63% chance)
Will I (co)write an AI safety research paper by the end of 2024? (45% chance)
Which AI will be the best at the end of 2025?
Will there be serious AI safety drama at Meta AI before 2026? (60% chance)
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified? (72% chance)
Will there be serious AI safety drama at Google or Deepmind before 2026? (60% chance)
Will there be a coherent AI safety movement with leaders and an agenda in May 2029? (77% chance)
Which Will Be The Most Impactful New AI Idea in February 2024?