Broadly defined, but more like terrorism than someone just getting mad and punching their computer or something. Includes state actions, such as an air strike on a data center, but not mere regulatory enforcement.
A version with more specific terms, covering just terrorism:
https://manifold.markets/BenjaminIkuta/will-anyone-commit-terrorism-in-ord?r=QmVuamFtaW5Ja3V0YQ
@BenjaminIkuta the direction is still not clear to me. I mean, every "mere regulatory enforcement" comes with the implied threat of escalation to Waco levels. How will we know for sure how to resolve this, once a given regulatory enforcement action happens?
@BrunoParga If it does escalate to Waco levels, it counts; if it doesn't, it doesn't. Of course there could be edge cases, though. I'm open to input about how to resolve such edge cases.
@BenjaminIkuta my input is to drop the idea of "mere regulatory enforcement". If Waco is the threshold, then you'd probably consider, say, the murder of George Floyd as "mere regulatory enforcement".
All state action is legitimized use of violence.
@BrunoParga Regardless of opinion, stuff like Waco is obviously "something happening" in some sense.
@BenjaminIkuta well, I don't think you have clarity on the criteria for your market, and that makes it risky to trade on it. Have a great day!
@BrunoParga So again, how would you make it more specific? I'm not asking about your opinion that the very existence of the state is inherently violent.
@BenjaminIkuta that is not my opinion; that is the generally accepted sociology/political science definition of what a state is: the entity with a monopoly on the use of socially legitimized violence in a given territory.
I've already given you my input: if the state takes action beyond saying "stop this training or else", that's violence. It might be legitimized; I'm not a crazy anarcho-capitalist libertarian (anymore).
Change it to "use of force" if that makes it more palatable to you. Either that, or come up with a better threshold than Waco, one that includes George Floyd - you do agree his death was violent, right?
If China invades Taiwan with the plausible goal of reducing US access to the best AI hardware, would this count?
@BenjaminIkuta What if it’s their stated goal, or clearly their primary goal in the context of world events and their other communication?
@AdamK Hmm... I suppose that would make me lean more towards yes, but it would have to be clearly to slow AI in general, rather than to weaken the US, which seems unlikely.
@BenjaminIkuta Such an attack would functionally do both, so it would be exceedingly hard to tell the difference. And China's public communication following such an event would not necessarily be a reliable indicator of their internal reasoning.
@BenjaminIkuta I think it will actually be pretty clear in context whether AI is the primary motivation for an invasion, and I think it would be reasonable to resolve as PROB based on how compelling the association is. Here are two lists of world events that I think would increase or decrease, respectively, the appropriateness of a YES resolution:
Lean YES:
An invasion occurs less than 6 months after a large public breakthrough in American AI capabilities
An intelligence brief is declassified shortly before the war by a Western intelligence agency (similar to the case of Russia-Ukraine), both correctly predicting the war and identifying AI as the primary motivator
China's geopolitical advocacy (through summits, UN resolutions, etc.) stresses an AI slowdown, and Chinese diplomats publicly associate their thinking on Taiwan with these issues
High-profile acts of sabotage of the American AI industry (through physical, cyber, or other means) are associated with China shortly before a war
Lean NO:
AI progress has stalled by the time of an invasion
Chinese sabre-rattling and public statements about Taiwan focus exclusively on conventional (non-AI) justifications
By the time of an invasion, Taiwan is no longer considered a significant bottleneck for AI hardware production
By 2050? And any type of physical violence no matter how ineffectual? Seems virtually guaranteed.
@BenjaminIkuta Do you mean that it is plausible that the stated goal is slowing AI? Or that it is plausible that it will actually slow AI?
@FlorisvanDoorn The intent must be to slow AI, but if there's no known motivation, I suppose that could be an edge case. Yes, it must be a plausible attempt. Like if I try to kill Sam Altman, but I'm hallucinating and shoot a scarecrow instead, that doesn't count. Can you think of any edge cases? I welcome input about how to better define this.
@BenjaminIkuta I was mostly thinking of the following scenario. What if some group tries to publicly stop AI progress and performs a violent act, but any reasonable observer can see that the act has no chance of directly slowing AI progress? Would that count? Example: they violently kidnap the newest intern at OpenAI, or something.
@FlorisvanDoorn well, that would be terrorism, right? It doesn't have to directly affect AI progress in that sense. And of course terrorism can turn out to be counterproductive too.
@higherLEVELING also, in case it wasn't clear, I was saying terrorism definitely counts even if the effect turns out to be counterproductive.