For example, if they grant SSH access to an environment that provides essentially full access to the weights, this resolves Yes, even if actually downloading all the weights is very difficult in practice (possibly intentionally so).
This market doesn't resolve Yes if they internally have another, significantly more capable model (e.g., they have GPT-5 internally but only allow access to GPT-4's weights).
This question is somewhat subjective and tricky to operationalize and resolve. Please ask clarifying questions in the comments so it can be operationalized better.
I will not trade in this market.
1. OAI seems inclined to have its models evaluated (see ARC Evals).
2. The EU AI Act likely requires audits for deploying models on the EU market.
3. I expect audits will increasingly need to be based on model weights (rather than just behavior), especially if techniques like mechanistic interpretability or DLK (Discovering Latent Knowledge) are used.
4. Apollo Research seems to be trying to position itself to do something like this.
5. Generally, I expect that by 2030 frontier AI models will have caused significant harm, and audits are among the most widely agreed-upon policy proposals.