If Artificial General Intelligence has an okay outcome, what will be the reason?
21%
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time.
19%
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)
15%
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.
11%
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.
8%
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.)
7%
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first.
5%
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility.
4%
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation.
4%
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.)
2%
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans.

An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
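Stated as a bare threshold, with $V$ the cosmopolitan value actually attained and $V_{\max}$ the maximum attainable under a correctly executed CEV (these symbols are introduced here only for illustration and are not part of the market text):

$$
\text{okay} \;\iff\; \big(V \ge 0.2\,V_{\max}\big) \;\wedge\; \big(\text{no existing human suffers death or another awful fate}\big)
$$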

This market is a duplicate of https://manifold.markets/IsaacKing/if-we-survive-general-artificial-in with different options. https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=RWxpZXplcll1ZGtvd3NreQ is this same question but with user-submitted answers.

(Please note: It's a known cognitive bias that you can make people assign more probability to one bucket over another, by unpacking one bucket into lots of subcategories, but not the other bucket, and asking people to assign probabilities to everything listed. This is the disjunctive dual of the Multiple Stage Fallacy, whereby you can unpack any outcome into a big list of supposedly necessary conjuncts that you ask people to assign probabilities to, and make the final outcome seem very improbable.

So: That famed fiction writer Eliezer Yudkowsky can rationalize at least 15 different stories (options 'A' through 'O') about how things could maybe possibly turn out okay; and that the option texts don't have enough room to list out all the reasons each story is unlikely; and that you get 15 different chances to be mistaken about how plausible each story sounds; does not mean that Reality will be terribly impressed with how disjunctive the okay outcome bucket has been made to sound. Reality need not actually allocate more total probability into all the okayness disjuncts listed, from out of all the disjunctive bad ends and intervening difficulties not detailed here.)
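As a toy illustration of the unpacking effect described above (all numbers below are hypothetical, chosen only to show the shape of the bias):

```python
# Disjunctive unpacking bias, in miniature: a reader who would give the single
# aggregated bucket "things turn out okay" about 10% might still give each of
# 15 separately listed stories a small "sounds vaguely plausible" floor.
# Because the options are presented as disjoint, those floors add up.

aggregate_estimate = 0.10      # hypothetical probability for the one big bucket
per_option_floor = 0.03        # hypothetical floor assigned to each listed story
num_listed_stories = 15        # options A through O

implied_total = per_option_floor * num_listed_stories
print(f"single-bucket estimate:        {aggregate_estimate:.0%}")  # 10%
print(f"implied total after unpacking: {implied_total:.0%}")       # 45%
```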


I bought E from 1% to 8% because maybe the CEV is natural — like, maybe the CEV is roughly hedonium (where "hedonium" is natural/simple and not related to quirks about homo sapiens) and a broad class of superintelligences would prioritize roughly hedonium. Maybe reflective paperclippers actually decide that qualia matter a ton and pursuing-hedonium is convergent. (This is mostly not decision-relevant.)

This seems likely to be a poorer predictor than usual - as you can't collect if you're dead

bought Ṁ50 If you write an argu... NO

Which options would be the most popular among ML researchers who aren't concerned about AI risk? From what I've read, the top choices would be "solving alignment is easy" (C or J) or "alignment isn't even a problem" (E?).

Yes, and G too I think.

@dionisos good point, G is probably the position of the "there's no such thing as intelligence" crowd

bought Ṁ1 Something wonderful ... YES

Two questions:

  1. Why is this market suddenly insanely erratic this past week?

  2. Why are so many semi-plausible-sounding options being repeatedly bought down to ludicrously low odds (<0.5%), when you'd expect almost all of them to land within an order of magnitude of the ~6% a uniform distribution across the options would give?

opened a Ṁ25 E. Whatever strange... NO at 24% order

Some of the options like H seem... logically possible yes, but a bit out there.

As for #2, that's why I bet up E a bit. I find it the most plausible contingent on a slightly superhuman AGI happening within the next 100 years, since it's the only real "it didn't work, but nothing that bad happened" option

The probabilities are currently very easy to move.

K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.

This is way too high, but it's a Keynesian beauty contest.

I thought this market was going to be resolved by Eliezer after AGI happens?

opened a Ṁ10,000 K. Somebody discove... NO at 80% order

@Krantz take my orders on that option

@jacksonpolack Why did this go to 80% lmao

@benshindel krantz has strong beliefs about it

Enough humans survive to rationalize whatever the outcome is as good, actually, and regardless of cost 90%+ of society treats anyone who points to visible alignment failure as a crazy person

If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.

At the time of writing this comment, the interface tells me that if I spend M3 on this answer, it will move it from 0% to 10% (4th place).

I just wanted to check that this is right given that the mana pool is currently around M150k, subsidy pool ~M20k.

(The other version of this question seemed to have a similar issue.)
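For intuition on why a few mana can swing a thin option so far, here is a rough sketch using a logarithmic market scoring rule (LMSR) as a stand-in. This is not Manifold's actual multi-choice mechanism, and the liquidity parameter and share counts below are made up:

```python
import math

# Toy LMSR (logarithmic market scoring rule) market maker, used only to show
# how a tiny bet moves a thinly subsidized option. b (liquidity) and the share
# amounts are assumed numbers, not Manifold's.

def lmsr_cost(q, b):
    """Cost function C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def lmsr_prices(q, b):
    """Implied probabilities p_i = exp(q_i / b) / sum_j exp(q_j / b)."""
    m = max(x / b for x in q)
    exps = [math.exp(x / b - m) for x in q]
    total = sum(exps)
    return [e / total for e in exps]

def buy(q, b, i, shares):
    """Return (mana cost, new price of option i) for buying `shares` of option i."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b), lmsr_prices(q_new, b)[i]

b = 20.0        # small, assumed liquidity parameter
q = [0.0] * 16  # 16 options, no shares outstanding yet
print(f"starting price: {lmsr_prices(q, b)[0]:.4f}")     # 1/16 = 0.0625 each

cost, new_price = buy(q, b, 0, 25.0)                     # buy 25 shares of option 0
print(f"cost: M{cost:.2f}, new price: {new_price:.3f}")  # ~M2.9 moves it to ~0.19
```

With 16 options and a small liquidity parameter, roughly Ṁ3 of spending triples the displayed probability of one option, which matches the spirit of the observation above.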

Why is this market structured with mutually-exclusive options? These don't seem remotely mutually exclusive. Indeed I would be pretty surprised if, conditional on survival, we didn't get some mixture of several of these options.

@MugaSofer Per the inspiration market: 'It resolves to the option that seems closest to the explanation of why we didn't all die. If multiple reasons seem like they all significantly contributed, I may resolve to a mix among them.'

Chunks of several buckets that add up to 'we got lucky and live in a universe where cooperation is mostly convergent under most starting conditions.'

I just upgraded this old market type to the new fixed payout version! Let me know if you see any bugs.

Humans are not the only weaker beings on earth and if ASI decides to protect weaker beings, at least we all get to be vegans in a vastly nicer economy.

Trying a new format for this question where you get all your mana back in a week, but then if we survive your bets actually pay off in 2060. Will it work? Let's find out!

Lots of people talking and arguing about AGI, but nobody's talking about how this market resolves.

One might naively think that resolution requires AGI actually arriving, so the market is unlikely ever to resolve and is more of a discussion stand than a real market. But note what the description states:

you get 15 different chances to be mistaken about how plausible each story sounds; does not mean that Reality will be terribly impressed with how disjunctive the okay outcome bucket has been made to sound. Reality need not actually allocate more total probability into all the okayness disjuncts listed

This doesn't read like someone actually planning to resolve to A-O plus Other, and there happen to be two meta-options. And the probabilities should track whether the market can actually resolve to a given option. So let's look at the three options other than A-O and see whether they can resolve:

Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)

This is the "Other" option, getting 11%. The market can't resolve to this option, or A-O, because it requires getting "at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity" and Manifold will shut down by the time that happens. So we can scratch this one off, leaving 5% and 1% to the two meta-options.

If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.

"Reality" is either the market participants voting percentages of their capital, or Eliezer Yudkowsky choosing how to resolve this market. (Actual reality, not being an agent, doesn't get impressed and allocate probability to prediction markets by being persuaded; but market creators, as judges of observations of reality, can be) And, because these are prediction markets and not polls, it's the resolution probability that matters(for profit).

So, has "reality" been allocating resolution probability to this option, which self-referentially makes it true? Not quite:

It seems like reality has most recently chosen to bid it up to about 1%, and bid 14% of the money they chose to wager at the time. So maybe this option should resolve 1%, representing how impressed reality is with Manifold's reading comprehension.

What is the other meta-option?

You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.

Scrolling through the trade history, that's definitely been happening. It seems like most traders just scroll down the list until their eyes glaze over from boredom and they buy whatever stuck in their mind.

Since the actual AGI disjunctions effectively resolve N/A, the other meta-option probably resolves around 1%, and most traders do seem to be doing this, it seems fine to resolve 99% to "You are fooled".

Reality also seems to be bidding up some of the AGI options, at about 50% of their cost basis. These could be "non-epistemic bets"; or we may need to wait for Manifold to implement "partial resolutions" so that most of the pool can be distributed to the meta-options, leaving the rest to the AGI options (and then canceled). Most of reality's payouts are on the meta-options and option D; and reality seems to be bidding "you are fooled" as high as 7%, with the highest overall cost basis of any option.

What does everyone else think? You've collectively put 91,400 mana into the liquidity pool. As rational profit-motivated bettors, you must expect to profit on resolution when you bet (otherwise you're being irrational), but given prevailing interest rates and anthropic complications, I don't see how that's possible for anything except the meta-options. And those options do seem to be happening.
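To put a rough number on the interest-rate point, here is a back-of-the-envelope present-value sketch; the 10%/year opportunity cost on mana and the ~36-year horizon are assumptions, not figures from the market:

```python
# Present value of a payout deferred to 2060, discounted at an assumed
# alternative return on mana. Both numbers below are assumptions.

payout = 100        # mana received if your option resolves YES in 2060
annual_rate = 0.10  # assumed opportunity cost of locked-up mana, per year
years = 36          # roughly 2024 -> 2060

present_value = payout / (1 + annual_rate) ** years
print(f"M{payout} in {years} years is worth about M{present_value:.2f} today")  # ~M3.2
```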

Nobody here knows enough about neurofeedback to consider it at all, which is the simplest explanation for why it ended up at the literal bottom of the list when in reality it is in the top 5.

@ooe133 Oops, this statement is worded in a non-Bayesian manner. Correction: not enough people here know enough about...