An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This market is a duplicate of https://manifold.markets/IsaacKing/if-we-survive-general-artificial-in with different options. https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=RWxpZXplcll1ZGtvd3NreQ is this same question but with user-submitted answers.
(Please note: It's a known cognitive bias that you can make people assign more probability to one bucket over another, by unpacking one bucket into lots of subcategories, but not the other bucket, and asking people to assign probabilities to everything listed. This is the disjunctive dual of the Multiple Stage Fallacy, whereby you can unpack any outcome into a big list of supposedly necessary conjuncts that you ask people to assign probabilities to, and make the final outcome seem very improbable.
So: That famed fiction writer Eliezer Yudkowsky can rationalize at least 15 different stories (options 'A' through 'O') about how things could maybe possibly turn out okay; and that the option texts don't have enough room to list out all the reasons each story is unlikely; and that you get 15 different chances to be mistaken about how plausible each story sounds; does not mean that Reality will be terribly impressed with how disjunctive the okay outcome bucket has been made to sound. Reality need not actually allocate more total probability into all the okayness disjuncts listed, from out of all the disjunctive bad ends and intervening difficulties not detailed here.)
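A toy sketch of the two dual biases, with made-up numbers (the 70% and 3% figures below are illustrative only, not estimates about this market):

```python
import math

# Multiple Stage Fallacy (conjunctive): unpack an outcome into many
# "necessary" stages, elicit a modest probability for each, and the
# product makes the outcome look vanishingly unlikely.
stages = [0.7] * 10                   # ten stages at 70% each (made up)
print(round(math.prod(stages), 3))    # 0.028

# The disjunctive dual: unpack one bucket into many vivid sub-stories,
# elicit a small probability for each, and the sum makes the bucket
# look likely.
stories = [0.03] * 15                 # fifteen stories at 3% each (made up)
print(round(sum(stories), 2))         # 0.45

# Either way, the decomposition handed to the forecaster, not Reality,
# is doing most of the work.
```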
I bought E from 1% to 8% because maybe the CEV is natural — like, maybe the CEV is roughly hedonium (where "hedonium" is natural/simple and not related to quirks about homo sapiens) and a broad class of superintelligences would prioritize roughly hedonium. Maybe reflective paperclippers actually decide that qualia matter a ton and pursuing-hedonium is convergent. (This is mostly not decision-relevant.)
I am trying to do this differently here: https://manifold.markets/dionisos/if-we-survive-general-artificial-in-z3suausl60
@dionisos good point, G is probably the position of the "there's no such thing as intelligence" crowd
Two questions:
1. Why is this market suddenly insanely erratic this past week?
2. Why are so many semi-plausible-sounding options being repeatedly bought down to ludicrously low odds (<0.5%), when you'd expect almost all of them to be within an order of magnitude of the uniform base rate across the 18 options (~6%)?
Some of the options, like H, seem... logically possible, yes, but a bit out there.
As for #2, that's why I bet up E a bit. I find it the most plausible conditional on a slightly superhuman AGI happening within the next 100 years, since it's the only real "it didn't work, but nothing that bad happened" option.
This is way too high, but it's a Keynesian beauty contest.
At the time of writing this comment, the interface tells me that if I spend M3 on this answer, it will move it from 0% to 10% (4th place).
I just wanted to check that this is right, given that the mana pool is currently around M150k and the subsidy pool is ~M20k.
(The other version of this question seemed to have a similar issue.)
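For intuition, here's a minimal sketch of why a small bet can move a thin answer so far, assuming a toy two-outcome constant-product pool (a simplification, not a claim about Manifold's actual multi-choice mechanism; the pool sizes below are made up):

```python
def buy_yes(yes_pool, no_pool, bet):
    """Buy YES in a toy constant-product market maker (k = yes * no).

    The bet is added to both pools, then YES shares are paid back out
    until the invariant is restored; implied P(YES) = no / (yes + no).
    """
    k = yes_pool * no_pool
    y, n = yes_pool + bet, no_pool + bet
    shares = y - k / n           # YES shares the bettor receives
    y -= shares                  # YES pool after paying out the shares
    return shares, n / (y + n)   # (shares bought, new implied probability)

# Hypothetical thin pool for one answer, implying ~0.7% before the bet:
shares, prob = buy_yes(yes_pool=144, no_pool=1, bet=3)
print(round(prob, 2))            # 0.1 -> an M3 bet moved ~0.7% to 10%
```

If something like this is right, the price impact is governed by the liquidity in that particular answer's pool, not by the M150k sitting in the market overall.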
@MugaSofer Per the inspiration market: 'It resolves to the option that seems closest to the explanation of why we didn't all die. If multiple reasons seem like they all significantly contributed, I may resolve to a mix among them.'
Lots of people talking and arguing about AGI, but nobody's talking about how this market resolves.
One might naively think that resolution requires AGI to actually arrive, so the market is unlikely ever to resolve and is more of a discussion stand than a real market. But note the description states:
you get 15 different chances to be mistaken about how plausible each story sounds; does not mean that Reality will be terribly impressed with how disjunctive the okay outcome bucket has been made to sound. Reality need not actually allocate more total probability into all the okayness disjuncts listed
This doesn't read like someone actually planning to resolve to A-O + Other, and there happen to be two meta-options. And probability should correspond to "this market can possibly resolve to this option." So let's look at the three options other than A-O and see if they can resolve:
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)
This is the "Other" option, getting 11%. The market can't resolve to this option, or A-O, because it requires getting "at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity" and Manifold will shut down by the time that happens. So we can scratch this one off, leaving 5% and 1% to the two meta-options.
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.
"Reality" is either the market participants voting percentages of their capital, or Eliezer Yudkowsky choosing how to resolve this market. (Actual reality, not being an agent, doesn't get impressed and allocate probability to prediction markets by being persuaded; but market creators, as judges of observations of reality, can be) And, because these are prediction markets and not polls, it's the resolution probability that matters(for profit).
So, has "reality" been allocating resolution probability to this option, which self-referentially makes it true? Not quite:
It seems like reality has most recently chosen to bid it up to about 1%, with about 14% of the money it chose to wager at the time. So maybe this option should resolve 1%, representing how impressed reality is with Manifold's reading comprehension.
What is the other meta-option?
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.
Scrolling through the trade history, that's definitely been happening. It seems like most traders just scroll down the list until their eyes glaze over from boredom, then buy whatever stuck in their minds.
Since the actual AGI disjuncts effectively resolve N/A, the other meta-option probably resolves around 1%, and most traders do seem to be doing this, it seems fine to resolve 99% to "You are fooled".
Reality also seems to be bidding up some of the AGI options, at about 50% of their cost basis. These could be "non-epistemic bets"; or we may need to wait for Manifold to implement "partial resolutions" so that most of the pool can be distributed to the meta-options, leaving the rest to the AGI options (and then canceled). Most of reality's payouts are on the meta-options and option D; and reality seems to be bidding "you are fooled" as high as 7%, with the highest overall cost basis of any option.
What does everyone else think? You've collectively put 91,400 mana into the liquidity pool. As rational profit-motivated bettors, you must expect to profit on resolution when you bet (otherwise you're being irrational), but given prevailing interest rates and anthropic complications, I don't see how that's possible for anything except the meta-options. And those options do seem to be happening.
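As a rough sketch of the interest-rate half of that argument (the prices and horizons below are made up):

```python
def annualized_return(buy_price, years):
    """Gross annual return of a share bought at buy_price if it
    eventually resolves YES and pays out 1 mana."""
    return (1.0 / buy_price) ** (1.0 / years) - 1.0

# Made-up numbers: a share bought at 10% that wins decades from now.
print(round(annualized_return(0.10, 30), 3))  # 0.08  -> ~8%/yr over 30 years
print(round(annualized_return(0.10, 50), 3))  # 0.047 -> ~4.7%/yr over 50 years
```

Even the optimistic case is conditional on the market resolving YES at all and on the bettor being around to collect, which is where the anthropic complications bite.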
@ooe133 Oops, this statement is worded in a non-Bayesian manner. Correction: not enough people here know enough about...