/JamesDillard/will-ai-wipe-out-humanity-before-th
/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r
/IsaacKing/if-humanity-survives-to-2100-what-w
/IsaacKing/will-this-market-still-exist-at-the
None of these actually work. The basic mechanics that make (traditional) prediction markets accurate do not apply when traders value mana differently depending on which way the market resolves, or when there's no profit incentive at all.
Moreover, the existence of these sorts of markets, and of people who cite them as though they were credible, discredits both prediction markets and those people in the eyes of economically-savvy outsiders. Figuring out a way to reward honest bets on human extinction would be an extremely useful innovation for anyone concerned about existential risk, as it would allow humanity to get a concrete, empirical estimate of its likelihood with which to convince the general public.
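To make the mana-valuation problem concrete, here's a toy expected-utility calculation (a sketch with made-up numbers; the assumption that mana is worthless to a dead trader is the crux):

```python
def ev_of_yes_bet(p_extinction, payout, stake,
                  mana_value_if_extinct=0.0,
                  mana_value_otherwise=1.0):
    """Expected utility of staking `stake` on YES in an extinction
    market. The YES payout only arrives in worlds where mana is
    worth `mana_value_if_extinct` to the trader."""
    win = p_extinction * payout * mana_value_if_extinct
    lose = (1 - p_extinction) * (-stake) * mana_value_otherwise
    return win + lose

# Even a trader who is 90% sure of extinction expects to lose by
# betting YES, because the winnings arrive in a world where they
# can't be spent. The price therefore can't reflect true credences.
print(ev_of_yes_bet(p_extinction=0.9, payout=100, stake=50))  # -5.0
```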
I've placed a M$50,000 limit order on NO, as a reward for anyone who can figure out how to resolve this to YES.
@IsaacKing what is the fundamental issue with this style of market? I think it could be made a bit better (add more resolution criteria about which entity will be calculating the estimate, so we know it is an intellectually honest superintelligence, etc.), but the idea works in my opinion.
@RobertCousineau Could be wrong, and too long a time frame for people to really care about predicting it accurately now.
I claim it is theoretically impossible to forecast this with only a market. I do believe one can forecast it reasonably by other methods, but not with just a market.
First, I will note that it is possible to make market forecasts that at least correlate with the risk of your own death, or the risk of extinction. That risk is part of the price of all loans, which are constantly being bought and sold on the markets.
The fundamental problem is that there is no way for a market to distinguish one type of extinction risk from another. The other fundamental problem is that these prices reflect discount rates in which x-risk is just one small component; there are other, much bigger components, like the rate of economic growth.
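To illustrate with made-up numbers (none of these are measured values; the point is only the relative magnitudes):

```python
# Illustrative decomposition of a long-term market interest rate.
time_preference   = 0.010  # pure impatience
expected_growth   = 0.020  # opportunity cost of capital
inflation_premium = 0.005
annual_x_risk     = 0.001  # ~0.1%/yr extinction risk

market_rate = (time_preference + expected_growth
               + inflation_premium + annual_x_risk)
print(f"market rate: {market_rate:.3f}")  # 0.036

# Doubling x-risk moves the rate by only 0.1 percentage points -
# well within the ordinary noise in growth expectations, so the
# market price can't be inverted to recover the x-risk component.
```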
Here is a post I made with a more detailed explanation: https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r#8o50dQrbUR5gW9vxOZId
Again, the solution is to make non-market-based predictions. One can simply ask forecasters for their predictions. You don't want to score it on resolution of course.
Here's an example: https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament
Metaculus has a lot of good AI risk questions, but I think the quality of predictions on the specific AI extinction questions I'm aware of is pretty bad. E.g. https://manifold.markets/jack/will-humans-go-extinct-before-2100 - I think the Metaculus predictions here are very low-quality, and as evidence I can point to them being horribly inconsistent with other Metaculus questions. But this is not an issue with all X-risk questions - others have what I believe are pretty good-quality predictions.
I have already made markets on global catastrophic risk from AI, which I and many other forecasters believe is very similar to the risk of AI-caused human extinction. https://manifold.markets/jack/will-a-global-catastrophe-kill-at-l
There's a pretty good strategy for making high-quality predictions on X-risk (a sketch of the weighting step appears after the list):
1. Assess forecasters' prediction accuracy on measurable questions, particularly ones related to the topic of AI and x-risk.
2. Ask them for predictions on the X-risk questions that cannot be directly scored - e.g. "Will AI cause human extinction by 2100?"
3. Also ask them for predictions on related questions that can be measured and scored - e.g. on AI capabilities and safety progress.
4. Aggregate the predictions, weighting each forecaster by their track record on past measurable questions.
5. Assess how well the predictions on the unmeasurable questions fit with the predictions on the measurable questions.
(It's kind of like the strategy for assessing the quality of long-range predictions when your dataset so far only has short-range predictions.)
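As a sketch of the weighting step, here's one simple way to do it (the inverse-Brier weights and log-odds pooling are illustrative choices, not a description of what any forecasting group actually uses):

```python
import math

# Hypothetical track records: mean Brier score on past resolved
# questions (lower is better; 0.25 is what a constant 50% guess scores).
brier = {"alice": 0.12, "bob": 0.18, "carol": 0.30}

# Each forecaster's probability on the unscorable question,
# e.g. "Will AI cause human extinction by 2100?"
preds = {"alice": 0.04, "bob": 0.10, "carol": 0.01}

def weight(brier_score):
    # Skill relative to chance, floored at zero so forecasters
    # worse than a coin flip get ignored.
    return max(0.25 - brier_score, 0.0)

def aggregate(preds, brier):
    # Weighted average in log-odds space, which handles extreme
    # probabilities better than a plain arithmetic mean.
    total_w = sum(weight(brier[f]) for f in preds)
    logit = sum(
        weight(brier[f]) * math.log(p / (1 - p))
        for f, p in preds.items()
    ) / total_w
    return 1 / (1 + math.exp(-logit))

print(f"Aggregated estimate: {aggregate(preds, brier):.3f}")  # ~0.056
```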
Multiple forecasting groups have already been doing this type of work.
@jack Can you refer me to some of the work done by these forecasting groups? I would be very interested to read it.
@NBAP Here are some links that relate - if others have links please add them!
@alextes Traders must have a profit incentive to bet their true beliefs. (Or very close to them.)
@IsaacKing yes, and this belief often depends on the resolution, which in this case depends on you. Are you saying you won’t resolve before close? If you reserve the right to resolve before close based on your subjective idea of “a robust way to run a market on AI-X risk”, I feel I can’t safely bet my beliefs here anymore. If you resolve at close it’s still unsafe without defined criteria, but I can safely exit before then.
@alextes Traders must have a profit incentive to bet their true beliefs in the market about AI risk.
@IsaacKing You have not answered my question about resolving prior to close. Nor acknowledged how your subjective perspective influences my “true beliefs” for this market.
That’s okay, although it also means I can’t trade this. Good luck 😄.
@alextes It's pretty objective whether there's an incentive to bet your true beliefs in any given market structure (or at least something close to them, modified by the Kelly criterion due to non-infinite capital), and it's not worth my time to explain basic economics that you can look up elsewhere. (If you'd like a starting point, look up "incentive compatibility" or "strategyproofness". Or just spend a few minutes thinking creatively about what would make you the most mana, and then see if it involves betting to your credence on the market resolution event occurring.)
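As a concrete illustration of the Kelly point: in a simple binary market with no fees or price impact, the optimal stake is a fraction of bankroll that shrinks as the price approaches your credence, so a finite-capital trader stops short of pushing the price all the way to their belief (`kelly_fraction` below is an illustrative helper, not a Manifold API):

```python
def kelly_fraction(credence, market_prob):
    """Kelly-optimal fraction of bankroll to stake in a binary
    market priced at `market_prob` when your credence is `credence`.
    Positive means bet YES, negative means bet NO."""
    if credence >= market_prob:
        # YES shares cost `market_prob` and pay 1, so net odds are
        # (1 - market_prob) / market_prob per unit staked.
        return (credence - market_prob) / (1 - market_prob)
    # Betting NO is symmetric with probabilities flipped.
    return -(market_prob - credence) / market_prob

# At a price of 0.20 and a credence of 0.30, Kelly says stake only
# 12.5% of your bankroll on YES - close to, but not exactly, a bet
# "to your credence".
print(kelly_fraction(0.30, 0.20))  # 0.125
```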
This market will resolve to YES as soon as such a system is implemented, even if it occurs before 2028.
@IsaacKing Are you requiring that it be 100% incentive compatible? What if it works well enough in practice to give good results 90% of the time, for example? (I would note that Manifold is certainly not 100% incentive compatible for a ton of reasons)
@jack It needs to be good enough that, as someone concerned about existential risk, I'd seriously consider throwing a few thousand dollars into subsidizing it so we can finally have a reliable number that I can show to people and say "this market system proves the risk is about this high, because anyone who thinks it's lower could turn a profit by betting in that direction".
@IsaacKing How about funding a non-market-based superforecasting study with a few thousand dollars? I think that is the better approach, as described in the thread above. (For addressing x-risk, not for resolving this market.)
@IsaacKing Quite possible, but is there actual data on how convincing people find prediction markets vs other forecasts?
@jack Not that I'm aware of. (Seems like an interesting study area.) It seems obvious to me, though, that saying "here's a market; if you think it's wrong you can turn a profit by trading in it" is a lot more convincing than "here's a number that someone you disagree with came up with".