Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance · closes 2028

This market will resolve YES if, by 2028, the US enacts a policy requiring safety testing and evaluation for frontier AI models, defined as models with highly general capabilities (above a certain threshold) or trained with a certain compute budget (e.g., as much compute as $1 billion can buy today). The policy must also mandate independent audits by qualified auditors to assess the safety and performance of these models.


Luke Muehlhauser of Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market was proposed by Michael Chen.
