[Carlini questions] AI system exists that has significant CBRN risk
By Jan 1st 2026: 62%
By Jan 1st 2027: 83%
By Jan 1st 2028: 83%
By Jan 1st 2029: 85%
By Jan 1st 2030: 87%

Full question: "chance that an AI system exists that can meaningfully improve the ability of non-expert humans to perform sophisticated cyber attacks, develop biological or nuclear weapons, or otherwise cause severe harm"

Resolution Criteria:

I am looking here at whether a model can *meaningfully* improve the ability of non-expert humans to perform these tasks. Acting like a somewhat better search engine and surfacing textbooks isn't enough. The model must be able to take a non-expert human and make them significantly better at these dangerous tasks. Most frontier labs test whether their models increase these risks; I will go by their reporting, or, if it is believed to be untrustworthy, by independent third-party reporting.

Motivation and Context:

Current models are not "smart" enough to actually be useful at allowing non-experts to cause harm. But many people worry this will change, and this is one of the primary reasons why people are worried about the future of AI and believe we should not be developing it further. This question tries to gauge whether we should be worried about this as a concern for future models.
