Will AI wipe out humanity before the year 2200?
14% chance

There's only a 3% spread between the year-2200 and year-2525 markets, so that's saying that over those 325 years we gain only ~1% per century, whereas we are currently gaining roughly 25% per century between now and 2200. That intuitively does not make sense, as technology improves exponentially. (A quick version of this arithmetic is sketched after the list below.)

Possible explanations for this:

  • AI doomerism is complete bullshit; people fail to consider multiple scenarios and just pile onto whatever the popular opinion is.

  • Markets have low sample sizes and skewed opinions.

  • People believe that we will get over a "hump" after which AI will no longer be what eliminates us; something else will eliminate us instead, which means that from that point on AI must be either neutral or actively helpful in preventing our destruction.

  • [insert other Yudkowskian statement: if A then B, but not C then D, and E, however F is not G, whereas H and J, yet considering Z and Q, don't forget to cross your T's and dot your I's, then evidently you can't think in sufficient resolution to understand the problem]
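A quick sanity check of the per-century arithmetic above, as a minimal sketch in Python: the 14% figure and the 3% spread are taken from this page as shown, the 2024 baseline year is an assumption, and the market values the comment was reacting to may have been different at the time, which would explain its 25% figure.

```python
# Back-of-envelope per-century gains implied by the market prices.
# All inputs are assumptions taken from this page snapshot.
p_2200 = 0.14            # this market: AI wipes out humanity before 2200
p_2525 = p_2200 + 0.03   # implied by the quoted 3% spread to the 2525 market

near_centuries = (2200 - 2024) / 100   # ~1.76 centuries from an assumed "now"
far_centuries = (2525 - 2200) / 100    # 3.25 centuries

print(f"now -> 2200: {p_2200 / near_centuries:.1%} per century")             # ~8.0%
print(f"2200 -> 2525: {(p_2525 - p_2200) / far_centuries:.1%} per century")  # ~0.9%
```

Whatever the exact near-term figure, the contrast with ~1% per century after 2200 is the point.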

@PatrickDelaney There are distortions in long-term markets, and also distortions in existential markets, so I wouldn't read too much into the literal numbers here.

That said, the "hump" theory seems solid to me. See "The Precipice" et al.
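One way to make the "hump" theory concrete, as a hypothetical sketch rather than anything stated in the thread: if the annual hazard of AI-caused extinction drops sharply after some transition, cumulative risk plateaus, which is exactly the shape the market prices imply. The hazard numbers below are invented so that the cumulative risks land near the market's 14% and ~17%.

```python
# Hypothetical hazard-rate model of the "hump": annual extinction risk
# is elevated through a transition window, then drops sharply once we
# are past it. Every number here is invented for illustration.
def cumulative_risk(annual_hazards):
    """P(extinct by the end), given a sequence of annual extinction hazards."""
    survival = 1.0
    for h in annual_hazards:
        survival *= 1.0 - h
    return 1.0 - survival

before_hump = [0.00086] * (2200 - 2024)  # ~0.086%/year until 2200
after_hump = [0.0001] * (2525 - 2200)    # ~0.01%/year once past the hump

print(f"by 2200: {cumulative_risk(before_hump):.1%}")               # ~14%
print(f"by 2525: {cumulative_risk(before_hump + after_hump):.1%}")  # ~17%
```

A near-flat long tail in the market prices is what a post-hump collapse in annual hazard would look like.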

@PatrickDelaney Yep, it's a shame that the people pushing the AI-doom markets up are undermining their own credibility, and the credibility of markets like this in general. Ironically, if they want to send a signal, a market sitting at 10% without manipulation comes off as more frightening than one that shoots up to 40% and gets dismissed because of the manipulation.

AI doomerism isn't necessarily bullshit, but its proponents' overconfidence sure makes it seem like it.

If this market resolves YES and the 2100 market resolves NO, then perhaps we are not currently in "the most important century".