• Data Centre Hosting AI: A facility used for housing computer systems, where at least 50% of computational activities are verifiably related to AI. This includes research, development, deployment, and commercial uses of AI technologies.
• Verification of AI Activities: Public records, corporate statements, or reputable news sources must verify that the facility’s primary function involves AI-related activities.
• Physically Attacked: Any intentional act causing material damage or disruption to the facility’s operations. An event qualifies as an “attack” if it leads to at least one of the following: 1) more than 24 hours of operational downtime, 2) destruction of physical hardware, 3) repair costs exceeding $50,000. This criterion refers specifically to physical attacks and does not include cyberattacks or other non-physical forms of assault.
• Attribution of Attack: The attacking entity does not need to be formally identified. However, there must be a widespread consensus, supported by reputable news reports or expert opinion, that the attack was undertaken with the aim of limiting the risks associated with AI.
• Risks of AI: The term “risks of AI” refers to widely acknowledged concerns about AI, including but not limited to ethical considerations, job displacement, and data privacy.
• Geographical Scope: This prediction market question applies on a global scale, encompassing data centres located in any country or territory.
• Additional Criteria: For the prediction to be considered ‘Yes,’ verifiable evidence (e.g., manifesto, public statement, or digital trail) must clearly indicate that the attack aimed to address the “risks of AI.”
• Before December 31, 2029: The event must occur prior to 11:59 PM UTC on this date.
• Market Resolution:
• In the event of an attack meeting all specified criteria by 11:59 PM UTC on December 31, 2029, this market resolves to ‘Yes.’
• In the event of no such attack by this deadline, this market resolves to ‘No.’
If the attack is motivated by the desire to prevent a certain regime from acquiring AI capabilities, does it still count? Say there is a data centre in Theocratistan where LLMs are trained by the morality police to spot blasphemy online, and someone attacks it, arguing that Theocratistan shouldn't have AI because it is illiberal and tyrannical. This boils down to whether admissible "risks of AI" must be global/general concerns only, or can also be local concerns arising from particular circumstances.
@Gyfer Completely agree. The direct effectiveness of an attack may not be there; however, the implications of the statement made by such an attack could be drastic.