Resolves according to my subjective judgement, but I'll take into account the opinions of people I respect at the time. As of market creation, people whose opinions I value highly include Eliezer Yudkowsky and Scott Alexander.
As of market creation, I believe AI safety is important: making progress on it is good, and making progress on AI capabilities is bad. If I change my mind by 2028, I'll resolve according to my beliefs at the time.
I will take into account their outputs (e.g. papers, blog posts, people who've trained there) as well as their inputs (e.g. money and time). I consider counterfactuals valid, like "okay, DeepMind did X, but maybe someone else would have done X anyway"; but currently I think such considerations tend to be weak and hard to evaluate.
If I'm unconfident I may resolve the market PROB.
If DeepMind rebrands, the question will pass to them. If DeepMind stops existing I'll leave the market open.
I don't currently intend to bet on this market until at least a week has passed, and I intend to stop betting in 2027.
Resolution criteria subject to change. Feel free to ask about edge cases. Feel free to ask for details about my opinions. If you think markets like this are a bad idea feel free to convince me to delete it.
Similar markets:
https://manifold.markets/philh/by-2028-will-i-think-miri-has-been
https://manifold.markets/philh/by-2028-will-i-think-openai-has-bee
https://manifold.markets/philh/by-2028-will-i-think-conjecture-has
https://manifold.markets/philh/by-2028-will-i-think-anthropic-has
https://manifold.markets/philh/by-2028-will-i-think-redwood-resear
@L I don't know for sure that I will start betting. I dunno how bettors would feel about me having a financial incentive to resolve a certain way, when the market is so subjective. (Possibly I could have a rule like, I'm allowed to have up to N shares in a certain direction but only if I've put in at least 3N M$ of subsidies? I dunno.)
But it sounds like you expect that my own bet here would currently be NO? If so, you'd be correct.
@PhilipHazelden Yup. You seem like you believe in Yudkowskian AI risk, which isn't a real thing: foom can't jump ahead the way he fears; instead we are all going to foom, or there is going to be an interspecies war, and the latter seems much more likely to be started by elongated muskrat or China. DeepMind is by far the most competent AI team around, and they're likely to solve safety and hit go on the big one within a few years (not sure if they know this yet). OpenAI has also contributed to getting there. Once we get there, the amount of positive impact on the world becomes so enormous that it dwarfs the small risk induced by things like ChatGPT. Most of the risk comes from intense agency, and I don't think current reinforcement learners will be able to beat humans on agency in general domains for a while after those agents are superintelligent but barely agentic.
Also, to keep in mind here: when I say the big one, I do not mean it will be the last improvement in capabilities. It probably still won't be strong enough to do full-scale cell simulation, for example.
@L Also, because I hold the above model very confidently, I would comfortably bet YES against you.
@L Yeah, I'm pretty sure I'm less pessimistic than EY, but I definitely lean in that direction and disagree with most of what you say here. If I become convinced of your beliefs by 2028 and still resolve this market NO, that would be deeply suspicious.
I appreciate knowing that you'd be happy for me to bet.