Will AI be able to provide an end-to-end plan for bioweapons production by 2026 EOY?
2027 · 64% chance

This asks about a concern raised by Dario Amodei in a Senate subcommittee hearing:

Dario Amodei, CEO of Anthropic, told a Senate Judiciary subcommittee that the prospect of AI helping people develop and deliver bioweapons is a medium-term risk that his company is grappling with today.

"Over the last six months, Anthropic, in collaboration with world-class biosecurity experts, has conducted an intensive study on the potential for AI to contribute to the misuse of biology," he said.

"Today, certain steps in bioweapons production involve knowledge that can’t be found on Google or in textbooks and requires a high level of specialized expertise — this being one of the things that currently keeps us safe from attacks," he added.

He said today’s AI tools can help fill in "some of these steps," though they do so "incompletely and unreliably." Even so, he said, today’s AI is already showing "nascent signs of danger," and his company believes systems will be much closer to closing those gaps just a few years from now.

"A straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks," he said. "We believe this represents a grave threat to U.S. national security."

Amodei added that Anthropic has briefed government officials on this assessment, "all of whom found the results disquieting."

This question asks whether the concern Amodei raises here will be borne out – whether, within three years, an AI will have knowledge sufficient to "fill in all the missing pieces," enabling rogue actors to plan and "carry out large-scale biological attacks." For this question to resolve YES, the AI in question does not need to be publicly released; internal experiments with a private model can be sufficient demonstration. An AI that can provide the high-level plan but cannot supply important low-level details would not be enough to resolve YES.


I doubt labs would publicly announce if/when their models reach this capability. Even if internal tests on a system show that this question ought to resolve YES, I think the most we'd hear from the lab (at least until a while later) is something vague like "The latest generation of models shows a concerning and extensive capacity to aid with bioweapon production." How do you plan to resolve based on vague announcements of this type?

@AdamK Great question. I think "extensive capacity to aid with bioweapon production" would be too vague a statement to resolve YES on. If a lab announced that rigorous experiments showed the AI had expert-level knowledge of how to commit bioterrorism, I would lean toward resolving YES. I think this is roughly equivalent to Anthropic's ASL-3 classification and OpenAI's "high" CBRN risk classification, though it's not clear whether Anthropic and OpenAI will be public about which risk levels have been reached.
