Resolves according to my subjective judgement, but I'll take into account the opinions of people I respect at the time. As of market creation, people whose opinions I value highly include Eliezer Yudkowsky and Scott Alexander.
As of market creation, I consider AI safety important: making progress on it is good, and making progress on AI capabilities is bad. If I change my mind by 2028, I'll resolve according to my beliefs at the time.
I will take into account not only their outputs (e.g. papers, blog posts, people who've trained there) but also their inputs (e.g. money and time). I consider counterfactual arguments valid, like "okay, Anthropic did X, but maybe someone else would have done X anyway"; but currently I think such considerations tend to be weak and hard to evaluate.
If I'm not confident, I may resolve the market PROB.
If Anthropic rebrands, the question will follow the renamed entity. If Anthropic stops existing, I'll leave the market open.
I don't currently intend to bet on this market until at least a week has passed, and I intend to stop betting in 2027.
Resolution criteria are subject to change. Feel free to ask about edge cases, or for details about my opinions. If you think markets like this are a bad idea, feel free to try to convince me to delete it.
Similar markets:
https://manifold.markets/philh/by-2028-will-i-think-miri-has-been
https://manifold.markets/philh/by-2028-will-i-think-deepmind-has-b
https://manifold.markets/philh/by-2028-will-i-think-openai-has-bee
https://manifold.markets/philh/by-2028-will-i-think-conjecture-has
https://manifold.markets/philh/by-2028-will-i-think-redwood-resear