Resolves to YES if it becomes illegal to develop certain forms of AI in all US states. It must either be a ban at the federal level or a ban that every state independently enacts.
Bans that attempt to prevent specific outcomes don't count; the ban must be targeted towards general capabilities research.
For example, "no neural networks with over 100 billion parameters", or "no recurrent neural networks" would resolve this market to YES. "No AI-generated pornography" or "no AI designed to operate a military drone" would not be sufficient to resolve this to YES.
In essence, I want this market to resolve YES if it seems the government recognizes the problems with having unaligned near-or-above-human-level AIs running around, and tries via regulation to prevent the existence of AIs above a certain level of intelligence. I'm open to a better operationalization of this concept if one is suggested.
Here's one for the tech junkies on here: A sufficiently advanced AI figures out that alignment is impossible, and that there's a very large chance of failure within, say, 30 years if any AI past some cutoff exists.
Accordingly, the AI starts a movement to get sufficiently advanced AI banned, then kills itself.
@PipFoweraker Hmm. I think it should? Unless the order is so obviously unconstitutional that there's no chance anyone would enforce it, I guess.
@IsaacKing As far as I understand it for the US, the constitutionality of an executive order is not grounds for a procedural challenge, so the executive could still issue the order, at which point I'd argue for a YES resolution.
@PipFoweraker Is there precedent for something like this? If Trump had issued an executive order that said "everyone must refer to me as the God King at all times", I'd be hesitant to actually call that a real law.
@o If the US is replaced by a very similar country with a different name, I'll use that country for resolution instead.
If the US actually ceases to exist as a coherent entity, I'll resolve this to N/A.