Will the US government commit to legal restrictions on large AI training runs by January 1st, 2025?
3% chance · Ṁ22k · closes Jan 1

A recent open letter called for a moratorium on the development of increasingly powerful artificial intelligence (AI) systems, highlighting the potential risks to society and humanity. The letter urged all AI labs to pause the training of AI systems more powerful than GPT-4, including GPT-5, for at least 6 months. It also called for shared safety protocols and improved AI governance. The letter suggested that if the AI labs do not enact a pause voluntarily, governments should step in and institute a moratorium.

Before January 1st, 2025, will the United States government commit to legal restrictions on sufficiently large training runs for AI systems?

Resolution Criteria:

This question will resolve positively if, before January 1st, 2025, credible news sources, official statements, or legal documents confirm that the United States government has committed to legal restrictions on sufficiently large training runs for AI systems, meeting all of the following criteria:

  1. Government Commitment: The commitment must come from the United States government, in the form of one or more of the following:
    a. An executive order signed by the President of the United States.
    b. Legislation passed by both chambers of Congress and signed into law by the President or enacted through a congressional override of a presidential veto.
    c. A legally binding decision or regulation issued by a federal agency with jurisdiction over AI research and development, such as the National Science Foundation (NSF) or the National Institute of Standards and Technology (NIST).

  2. Definition of Sufficiently Large Training Runs: The commitment must clearly define what constitutes a "sufficiently large training run" for AI systems. This definition should provide specific thresholds, such as the number of parameters, the amount of computation, or the level of performance that would trigger the moratorium. For example, the definition could specify that training runs involving models with more than 100 billion parameters are subject to the moratorium (see the illustrative sketch after this list).

  3. Duration of Moratorium: The moratorium must have a specified start date, but no specific end date is required. It may have an indefinite duration or be tied to the achievement of certain milestones, such as the development of comprehensive safety protocols or AI governance frameworks.

  4. Nature of the Moratorium: The moratorium must impose restrictions on sufficiently large training runs for AI systems within the jurisdiction of the United States. These restrictions may take various forms, such as:
    a. Requiring AI labs and researchers to obtain specific permits or approvals before conducting sufficiently large training runs, with exemptions granted for military purposes or when exceptional circumstances are demonstrated.
    b. Imposing penalties or sanctions on entities that engage in sufficiently large training runs without meeting certain safety or governance requirements.
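
As a minimal sketch of how a threshold like the one in criterion 2 might be operationalized, the snippet below checks a planned run against a hypothetical parameter cap and a hypothetical total-compute cap. Neither threshold value is drawn from any actual statute or regulation; the compute estimate uses the common ~6·N·D rule of thumb for dense-transformer training FLOPs, where N is parameter count and D is token count.

```python
# Hypothetical thresholds for illustration only; not taken from any actual law.
PARAM_THRESHOLD = 100e9    # 100 billion parameters
COMPUTE_THRESHOLD = 1e25   # total training FLOPs

def training_flops(num_params: float, num_tokens: float) -> float:
    """Estimate total training compute via the common ~6*N*D heuristic."""
    return 6.0 * num_params * num_tokens

def requires_permit(num_params: float, num_tokens: float) -> bool:
    """Would this run trip either hypothetical threshold?"""
    return (num_params > PARAM_THRESHOLD
            or training_flops(num_params, num_tokens) > COMPUTE_THRESHOLD)

# Example: a 175B-parameter model trained on 300B tokens.
print(f"{training_flops(175e9, 300e9):.2e}")  # ~3.15e+23 FLOPs
print(requires_permit(175e9, 300e9))          # True: parameter cap exceeded
```

Note that the two tests can disagree: the example run exceeds the parameter cap while staying well under the compute cap, which is why real proposals tend to pick one primary metric.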


If credible news sources, official statements, or legal documents confirm that the United States government has committed to a moratorium on sufficiently large training runs for AI systems meeting all of the above criteria before January 1st, 2025, the question will resolve positively. If no such commitment is made by the deadline, the question will resolve negatively.


Note: This question focuses on the commitment to a moratorium by the United States government, not the actual implementation or enforcement of the moratorium. Additionally, the question is not concerned with the potential impact of the moratorium on AI research, technological progress, or society at large. The question does not require the inclusion of specific safety protocols or AI governance systems within the moratorium commitment.


Mine is an anthropic shadow bet

predicted NO

Wordcels only want one thing and it’s no machines that can wordchad them out of their influencer roles.

I put this at 15-20%.

The fail part is that AI has no way to kill us. If you mean death robots, those don't exist. Drones aren't manufactured at any scale to kill anyone, and if they were, it would happen anyway. There is no scenario where AI would matter in libs annihilating themselves.

@MarkKeltoer Also, I like the site redesign; we're ready to be a crappier version of Metaculus (where I am also banned).
