This article seems to state a really strong case for the resolution being improper: https://frankmuci.substack.com/p/polymarket-settles-bet-against-its
Resolves to PROB based on where, on a scale from "nearly certainly malicious" (0%) to "perfectly correct behaviour" (100%), I judge UMA's resolution at close. At market start I would have resolved around 10%.
I am willing to listen to arguments that this isn't obvious misconduct, and I have a track record of resolving markets like this against my Mana interest.
Deadline is market close.
Nothing involving ChatGPT counts as an argument.
Update: after a discussion with @Bayesian on Discord (https://discord.com/channels/915138780216823849/1286187576465555519/1286211221649100883), I'm somewhere around 80%. I remain open to new arguments, preferably in the context of the previous discussion.