If at least one human is alive on Jan 1, 2030, then this question resolves to NO.
For this question, we'll define humans the same as this Metaculus question:
as biological creatures who have as their ancestors – via a chain of live births from mothers – circa 2000 humans OR who could mate with circa 2000 humans to produce viable offspring. (So AIs, ems, genetically engineered beings of a different species brought up in artificial wombs, etc. would not count). Any living humans living anywhere in the observable universe (or multiverse) (who are known to the entities operating Metaculus) on that date will be sufficient to resolve the question negatively.
@RossTaylor I think there's wide disagreement between forecasts on AI risk, and I actually don't think 14% is absurd, although it's certainly higher than most forecasts.
I'm going to walk through a chain of reasoning built from Metaculus forecasts here to paint a picture of why it's not absurd:
50% chance of weak AGI by beginning of 2027 https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
48% chance of superintelligent AGI within 3 years of weak AGI https://www.metaculus.com/questions/9062/time-from-weak-agi-to-superintelligence/
This is highly correlated with the above question: conditional on faster weak AGI development, we'd also expect faster superintelligent AGI development to be more likely. So we should expect more than the naive 50% × 48% ≈ 24% chance of superintelligent AI this decade.
50% chance of negative transition to a world with superintelligent AGI. A positive transition is defined as one "where the dominant influence over the future course of history takes place under the direction of widely held moral ideals." - this is roughly whether we'll succeed at aligning the superintelligent AI(s) https://www.metaculus.com/questions/4118/will-there-be-a-positive-transition-to-a-world-with-radically-smarter-than-human-artificial-intelligence/
75% chance that if there's an AI catastrophe that reduces the human population by at least 10%, it reduces the human population by at least 95% https://www.metaculus.com/questions/2513/ragnar%25C3%25B6k-question-series-if-an-artificial-intelligence-catastrophe-occurs-will-it-reduce-the-human-population-by-95-or-more/
This definitely isn't an exact fit with the previous point, but it gives a loose sense that negative transitions are probably very bad, not just slightly bad.
40% chance that if the human population declines by 95%, humanity goes extinct https://www.metaculus.com/questions/8103/extinction-if-population-falls-400-million/
Note: this is conditional on any cause of a 95% decline. I think most forecasters believe that the probability of extinction conditional on AI catastrophe is relatively high compared to that of other possible global catastrophic risks (nuclear, biological, etc). I don't have a Metaculus link for that exactly, but see https://www.metaculus.com/questions/2568/ragnar%25C3%25B6k-seriesresults-so-far/ for some related analysis.
Multiplying all that out and assuming everything is uncorrelated, we get 3.6%. In actuality, I think the correlations between these questions make the risk much higher than that. Also, to do the math properly you'd want to combine the full distributions, not just the point estimates used above, which I think would also end up giving a higher number.
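For concreteness, here's a minimal sketch of that multiplication in Python - the variable names are mine, the probabilities are just the Metaculus point estimates quoted above, and the steps are treated as independent, which (as noted) probably understates the risk:

```python
# Minimal sketch of the point-estimate arithmetic above, assuming the quoted
# Metaculus forecasts and treating every step as independent.
p_weak_agi_by_2027 = 0.50              # weak AGI publicly known by start of 2027
p_super_within_3_years = 0.48          # superintelligence within 3 years of weak AGI
p_negative_transition = 0.50           # transition to superintelligent AGI goes badly
p_95pct_drop_given_catastrophe = 0.75  # AI catastrophe cuts population by >=95%
p_extinction_given_95pct_drop = 0.40   # extinction given a >=95% population decline

p_extinction_this_decade = (
    p_weak_agi_by_2027
    * p_super_within_3_years
    * p_negative_transition
    * p_95pct_drop_given_catastrophe
    * p_extinction_given_95pct_drop
)
print(f"{p_extinction_this_decade:.1%}")  # -> 3.6%
```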
Anyway, don't take the specific number too seriously - we can also compare to https://www.metaculus.com/questions/578/human-extinction-by-2100/, which says there's a 1.4% chance of extinction by 2100 and obviously disagrees with the above chain of reasoning (3.6% this decade alone already exceeds 1.4% for the whole century). Or you can look at the Existential Risk Persuasion Tournament, where forecasts range from 1% to 6% probability of extinction by 2100. I just wanted to show that there's a plausible picture for assigning a probability of more than a few percentage points to AI extinction in this decade alone.
@jack I look at it from the view of what plausible mechanism could make humanity extinct.
Is it a nuclear event? Even in a nuclear winter, total extinction seems unlikely - I believe most simulations show this?
Is it AI eliminating humanity? What mechanism is this? Bioweapons? Nuclear? Some unknown consequence of superintelligence?
I would weigh Knightian factors arising from superintelligence higher in a few decades, but not in the next seven years.
TLDR: Catastrophic events are in the low single-figure % range in my mind, but total extinction seems an order of magnitude lower?
FYI I am not at all surprised that this is lower than https://manifold.markets/MartinRandall/will-ai-wipe-out-humanity-before-th-d8733b2114a8. The conjunction fallacy is especially strong in x-risk forecasting, and the usual superpower of prediction markets - the incentive to make accurate predictions - turns into a weakness on x-risk predictions, because the incentives there are misaligned.
I actually think AI x-risk in this decade is not unlikely, and I think it's a much bigger concern than any other x-risk.
@jack Inclusion of the multiverse could theoretically send this to zero, but for the "observable" qualifier.
@ShitakiIntaki Yeah, I'm kind of confused by the multiverse part of that - isn't the observable part of the multiverse just the observable universe?
@jack i suppose it is a place holder for future developments in what the agent resolving this market might consider to be observable. This distinction does not matter if humans are still around in this universe.
In the here and now, I would agree that the observable multiverse is just the observable universe.
@jack Well, that seems misleading - given the title, one might expect it to resolve YES in that case.
@MartinRandall Yes, that is certainly the intent of the question - I just can't promise that we'll be able to go about implementing that resolution.
In all seriousness though, the question certainly should be read as resolving YES if no humans are alive. And there are possible scenarios in which humanity goes extinct but we still manage to collect the mana payout - e.g. if all humans are uploaded, it would be a YES (the resolution criteria specify biological humans) and the YES bettors would collect their payouts (assuming Manifold still exists, of course).
@jack Sure, but you also can't guarantee a NO resolution - many things could happen between now and 2030.
@MartinRandall Yeah, the omission was just for irony. And yes, like most questions there's an implicit conditional on Manifold existing, etc etc.
@MartinRandall I also think that's too low. Metaculus's forecasts on extinction vary widely and are mutually incompatible (just like Manifold's are). For example, https://www.metaculus.com/questions/17735/conditional-human-extinction-by-2100/ has essentially the same resolution criteria but gives a higher probability once a conditional was added. And https://www.metaculus.com/questions/4118/will-there-be-a-positive-transition-to-a-world-with-radically-smarter-than-human-artificial-intelligence/ assigns a very high chance to a negative outcome from AGI (in theory you might allow for negative non-extinction outcomes, but I think there are other Metaculus forecasts that put AI x-risk higher).