Will artificial superintelligence exist by 2030? [resolves N/A in 2027]
320 traders · Ṁ96k · 2027 · 40% chance

Will continuing progress in AI capabilities result in the existence of unambiguous artificial superintelligence, clearly superior to humans and humanity at virtually every cognitive problem, by Jan 1st 2030? (Every problem on which it's possible to do better than humanity at all; e.g., not Tic-Tac-Toe or inverting hashes.)

An artificial superintelligence like this would, according to some market participants' beliefs, probably kill everyone. This creates a market distortion because M$ is not worth anything if the world ends. Pleading with people to just bet their beliefs on this important question doesn't seem like the best possible solution.

This market resolves N/A on Jan 1st, 2027. All trades on this market will be rolled back on Jan 1st, 2027. However, up until that point, any profit or loss you make on this market will be reflected in your current wealth, which means that purely profit-interested traders can make temporary profits on this market and use them to fund other permanent bets that may be profitable, by correctly anticipating future shifts in prices among people who do bet their beliefs on this important question: buying low from them and selling high to them.

For more on this see the paired question, https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r


o3 has no effect at all? What's it gonna take?

What caused the recent dip? I would have imagined news of 🍓 would have had the opposite effect

The N/A-in-'27 thing biases this market high. Not that it matters much, but I found it interesting.

Because it's a tail event, and you can't really kill the tail until 2030.

So it's not a bet about 2030; it's a bet about how we/EY(?) will feel in '27, and hence still hugely subject to manipulation.

Imagine how the same logic would apply if you rolled the N/A date back a day from '27, or set it one day earlier than 2030.

It didn't bias it high; it just corrected for the overwhelming bias toward it counterfactually being too low. If someone offers to double my money (if I pay them today) by, say, Jan 1 2030 because they think the world ends by then, I am certainly taking that bet, because if I die before then I won't care about the returns anyway.

How can we discuss superintelligence when we still don't have a full grasp of human intelligence? It also seems like there will be lots of hype like this but no stable (falsifiable) definitions of "superintelligence."


@BenjaminJensen it's in the question description: "clearly superior to humans and humanity at virtually every cognitive problem". That's plenty well defined to bet on.

Obviously


I love that Manifold is so biased towards AI progress being insanely fast from here, because there are a lot of free-mana markets (ASI within 7 years, realistic video from a prompt in 2023) where, even in the unlikely scenario that I'm wrong, the world will change so drastically that I won't care about my mana much anymore.

It's not even that I think ASI is impossible; 2030 just seems way too soon. It's about 6 years away, and it's already been 3 years since GPT-3.


@SophusCorry People are biased because they were surprised when ChatGPT came out (the surprise didn't come with GPT-2 or the initial version of GPT-3, nor did it come with GPT-4).

So they expect to continue to be surprised in similar ways, and they cannot imagine such a sequence of surprises that would not lead to ASI in a short period.

(1) There are many such sequences that would not lead to ASI in a short period.
(2) In reality, we should expect reversion to the mean, not such a sequence in the first place.

@DavidBolin We also may slam into temporary logistical limits. The free market right now wants to build a lot more GPUs for AI accelerators, but it will take months to build more using existing capacity and years to expand capacity for the types of memory and process nodes accelerators need.

Similar limits apply to robots. This can add years, even decades, to timelines of exponential growth while capacity is added before true ASI is possible.

@GeraldMonroe Robots aren't needed for ASI, and once you develop ASI, if you don't immediately die, you can use it to massively accelerate robotics development.

@SophusCorry You claim that Manifold is biased towards AI progress being insanely fast, but it seems like you are just saying it's biased because it disagrees with you. You are adding heat, but not light, to the conversation.

(Also ironically your point about how the world will change drastically and you won't care about your mana losses if you are wrong is evidence of bias in the opposite direction -- people such as you are underestimating AI progress in part because of the asymmetry in payoffs you point out.)

@DanielKokotajlo It seems the opposite to me - Manifold frequently, imo, underestimates how capable AI will be - e.g. a full-length movie.

@DanielKokotajlo The simple problem with "every cognitive task" is that there are many tasks where nobody recorded every step, and the experts have 20-40 years of experience. It's also simply not worth it to automate the job of less than 100 people worldwide if the ASI builder has to go to special effort.

"Every" is insanely more difficult than "90 percent of cognitive tasks". Even if the underlying model can online learn the last 10 percent it won't know them in 2030.

This is the issue with most of your predictions: you get excited about the adversarial Turing test and conclude the model will be able to research AI and make progress at hyper speed (most CS PhD professors can't improve AGI; why would the model be able to?), or you have superintelligence starting to work and you infer instant worldwide usage, or robotics can finally be controlled by AI and suddenly millions or billions of robots exist.

It's entirely possible that in 2030 we have general systems that are superintelligent in many domains and can online-learn, but it still takes 10-20 more years to get a Singularity or a future that looks like Deus Ex.

For the simple reason that it takes time (probably 1-3 years) for robots to double infrastructure, and thousands of compute cards to run inference on an ASI.

Exponential growth means eventually you have billions of robots and compute cards, but maybe not by December 2030...

For evidence, assume robots can build robots by 2029. Look up how many robots exist now. Estimate how many factory workers could join the robot assembly plants and how much of the current economy can be redirected. WW2 is a good example for real numbers. Then calculate the curve. Similarly, assume superintelligence needs much more compute than AGI, possibly 1000x more, though I would be willing to accept higher or lower.
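To make "calculate the curve" concrete, here is a minimal Python sketch under purely illustrative assumptions (roughly 3 million robots existing at the start, self-replication beginning in 2029, and a 1-3 year doubling time); none of these numbers come from the market itself:

# Illustrative doubling-curve sketch; all starting numbers are assumptions.
def robots_by_year(start_count: float, start_year: int, end_year: int,
                   doubling_time_years: float) -> float:
    """Robot population at end_year under pure exponential growth."""
    doublings = (end_year - start_year) / doubling_time_years
    return start_count * 2 ** doublings

for doubling_time in (1.0, 2.0, 3.0):
    by_2030 = robots_by_year(3e6, 2029, 2030, doubling_time)
    by_2040 = robots_by_year(3e6, 2029, 2040, doubling_time)
    print(f"doubling every {doubling_time:.0f}y: "
          f"{by_2030:.2e} robots by 2030, {by_2040:.2e} by 2040")

Even with the fastest of these assumed doubling times, the curve only reaches billions of robots years after 2030, which is the logistical-lag point above.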

@GeraldMonroe "It's not worth it to automate the job of less than 100 people..." It will be worth it if the job is high-value enough. And if you have a million superintelligent workers, you'll be able to put in that 'special effort' very quickly for a lot of different things in parallel.

I agree that I might be overestimating the rate at which a million superintelligences with dictatorial powers over the whole planet could scale up robot production. If you would like to do a serious analysis of this question, I might be interested in paying you for it. At the very least I'd be happy to see it and interested to read it.

@DanielKokotajlo I think the main reason people are underestimating progress is that they only see progress in discrete iterations. A lot of people think GPT-4 was a big update, but then we have not seen anything better for pretty much a year. I'm sure you've seen >GPT-4-level stuff, being an OAI employee, but you can't expect everyone to have updated on things they aren't aware of.

@AkramChoudhary Fair point, but still, I think there's enough publicly available info at this point that people should be able to figure out that crazy stuff is coming in the next few years. E.g. look at the scaling trends. You shouldn't have to wait for a >GPT-4-level system to be announced to be fairly confident that one will appear sometime in the next few years.

@DanielKokotajlo There's also plenty of info to suggest the contrary. GPT-4-level systems are a few OOMs below where the economic limits sit before energy limits / GPU acquisition become a severe problem. Also, aren't the scaling curves bending a little already, suggesting future jumps won't be as dramatic? Could have sworn Amodei said something like that on his Dwarkesh podcast. There's also a separate question of how many GPT-3 --> GPT-4 jumps get you to ASI. I personally think it's on the lower end, i.e. 2-3 perhaps. But I can see why others disagree. You have people like Yann saying GPT isn't even dog-level general while Eliezer says it's human-child-level general. Never mind predicting the future; we can't even agree on how smart our current systems are.

@EliezerYudkowsky FYI: I think if you're trying to extract info about whether AI will kill us from prediction markets, a useful tactic would be to ask "When asked at the end of 2023, what credence will Yudkowsky put on AI doom by 2030?" If it's higher than your current value, the market is predicting bad signs will emerge in the future. If it's lower, the market is predicting good signs.

(I guess for a simple binary market you can split payoffs between options?)

A note on the description:

WRONG: Purely profit-interested traders can still make money on trading this market, by correctly anticipating the shift in trades among people who do bet their beliefs, so that you can buy low from them and sell high to them.

CORRECT: All trades on the market will be rolled back on Jan 1st, 2027, apparently! Unless Manifold develops some other way to treat N/A markets before then. Sorry about that, I had not thought that was how it worked!

Trades, profits, and losses here will be rolled back, but the effects on traders' portfolios before this market N/As will stay the same until then, so profits (and loans) can be leveraged elsewhere in the interim if cashed out.

A superintelligence by this definition implies it is better than humans at coming up with cognitive challenges, or specifically at coming up with challenges that it knows humans would be better at - which leads to a contradiction if the set of cognitive challenges is infinite.

Maybe a narrower definition like "Achieves a perfect score on any novel, solvable educational test (AP exams, ACT/SAT, PhD qualification exams)" would encompass enough human cognitive parity that anyone would call it superintelligence.

@whalelang It says "virtually every cognitive problem"; given the "virtually" and a charitable interpretation, I don't think it will cause any controversy as currently defined.

Couldn't you just resolve the market to whatever probability it has when it ends? Pair this with a randomized end time to avoid last-minute sniping.

As is, the N/A resolution gives me zero incentive to bet either way, regardless of what I believe. A market that resolves to ending prob at least creates a self-fulfilling collective delusion.
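To spell out the proposed mechanism, here is a toy Python sketch (the probability_at price-lookup function is hypothetical, not anything Manifold actually exposes):

import random

def resolve_to_prob(probability_at, window_start: float, window_end: float,
                    rng: random.Random) -> float:
    """Pick the resolution moment uniformly at random inside the window,
    so nobody can snipe the close, then resolve to the price at that moment."""
    resolution_time = rng.uniform(window_start, window_end)
    return probability_at(resolution_time)

# Example: a market whose price drifts linearly from 30% to 40% over the window.
def price_curve(t: float) -> float:
    return 0.30 + 0.10 * t  # t in [0, 1]

print(resolve_to_prob(price_curve, 0.0, 1.0, random.Random(42)))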


@VitorBosshard How have PROB markets previously played out in practice, does anyone know? Did they stay semantically anchored?

@EliezerYudkowsky I don't know, it's pure theory crafting from my side.

@EliezerYudkowsky In the past they have definitely not stayed semantically anchored, regardless of whether they resolve to PROB, N/A, or some more complicated self-resolving mechanism. See for example the series of self-resolving "Is Joe Biden president?" questions: https://manifold.markets/MichaelWheatley/weighted-poll-will-biden-be-preside?r=QQ

@A That market is so small, though; there are like 4 people deciding most of the probability.
