Will the US fund defensive information security R&D for limiting unintended proliferation of dangerous AI models by 2028?
53% chance
This market resolves YES if, by 2028, the US creates a policy to fund defensive information security R&D specifically aimed at limiting the unintended proliferation of dangerous AI models.
Luke Muehlhauser of Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market idea was proposed by Michael Chen.
This question is managed and resolved by Manifold.
Related questions
Will the US government commit to legal restrictions on large AI training runs by January 1st, 2025?
3% chance
Will the US require a license to develop frontier AI models by 2028?
50% chance
Will the US implement software export controls for frontier AI models by 2028?
75% chance
Will the US regulate AI development by end of 2025?
41% chance
Will the US establish a clear AI developer liability framework for AI harms by 2028?
39% chance
Will the US restrict transfer of trained AI models before 2026? (Deny ≥100 countries)
19% chance
The US, UK or EU put limits on training of AI models (eg $ spent) before 2024?
15% chance
Will the US restrict transfer of trained AI models before 2026? (Deny all non-US, allow US or int'l licensing)
31% chance
Will the US restrict transfer of trained AI models before 2026? (Deny all, secret or no new licenses)
19% chance
Will the US restrict transfer of trained AI models before 2026? (Deny some entities)
82% chance