Is the answer to the Sleeping Beauty Problem 1/3?
Ṁ19k · 2030 · 51% chance

https://en.m.wikipedia.org/wiki/Sleeping_Beauty_problem

The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.

Resolves based on the consensus position of academic philosophers once a supermajority consensus is established. Close date extends until a consensus is reached.

References

Small print

I will use my best judgement to determine consensus. Therefore I will not bet in this market. I will be looking at published papers, encyclopedias, textbooks, etc, to judge consensus. Consensus does not require unanimity.

If the consensus answer is different for some combination of "credence", "degree of belief", "probability", I will use the answer for "degree of belief", as quoted above.

Similarly if the answer is different for an ideal instrumental agent vs an ideal epistemic agent, I will use the answer for an ideal epistemic agent, as quoted above.

If the answer depends on other factors, such as priors or axioms or definitions, so that it could be 1/3 or it could be something else, I reserve the right to resolve to, eg, 50%, or n/a. I hope to say more after reviewing papers in the comments.


Disclaimer: I haven't spent much time thinking about or researching the problem, but I probably will at some point, and I'm perfectly open to the possibility that my conclusions then will contradict my intuitions now.

That said, here are two intuition pumps for thirders:

  • Imagine that, instead of waking her up only twice if the coin comes up tails, we instead wake her up 999 times. In that case, upon waking, should her credence really be 1/1000 that the coin came up heads?

  • Suppose that, instead of flipping a coin, we instead roll a 20-sided die. A result of 1-19 means we wake her up only once, whereas a result of 20 means she will be woken up n times. On the thirder view, the math shakes out so that P(rolled 20) > P(rolled 1-19) once n exceeds 19. So suppose we set n to 20. Upon awakening, should her credence really be that the die is more likely to have rolled 20 than all the values 1-19 combined?
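The thirder arithmetic behind both pumps can be checked directly. A minimal sketch (the function name and the weighting-by-awakenings rule are my rendering of the thirder position, not from the comment):

```python
# Thirder credence that the die rolled 20 in the variant above:
# faces 1-19 give one awakening each, face 20 gives n awakenings.
# Thirders weight each equally likely outcome by its awakening count.
def thirder_credence_rolled_20(n, sides=20):
    w_20 = n            # awakenings if the die shows 20
    w_rest = sides - 1  # one awakening for each other face
    return w_20 / (w_20 + w_rest)

print(thirder_credence_rolled_20(19))  # 0.5: exactly even at n = 19
print(thirder_credence_rolled_20(20))  # ~0.513: rolled-20 now more likely
```

With `sides=2` the same formula covers the coin pump: `thirder_credence_rolled_20(999, sides=2)` gives 0.999, i.e. credence 1/1000 in heads.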

@MartinRandall If you need proof: https://www.researchgate.net/publication/282216476_The_Halfers_are_right_but_many_were_anyway_wrong_Sleeping_Beauty_Problem

I will attempt to summarize the main arguments of the above article and offer a detailed explanation of why the answer is 1/2 below. Specifically, the double halfer position (you can find that in the Wikipedia article) is correct.

By "below" I meant "in my next comment"


The 2/5ths position is woefully underrepresented

I'm tired of people who aren't smart thinking it's 1/3 because it's the "smart answer" and "1/3 is more complicated than 1/2 so it must be the right answer". This is either going to resolve NO or N/A.

@luvkprovider both answers appear like the simple answer to different people.

If you answer NO to this, you also think Adam can magically manifest a deer in front of himself, simply by resolving to have children with Eve if it doesn't happen.

Nonsense. Beyond agreeing with Lewisian Halfism, there are other reasons to answer NO to this question.

For instance, thinking that there is no one true answer to the SB problem and therefore the confidence level of the market should be 50%. Or agreeing with the correct, Double Halfer model, according to which:

P(Heads|Monday) = P(Heads&Monday) = P(Heads) = 1/2; P(Monday) = 1; P(Tuesday)=1/2
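As a sanity check, the identities quoted above are mutually consistent; a sketch (variable names are mine):

```python
# Double-halfer identities from the comment above.
p_heads = 0.5
p_monday = 1.0                # a Monday awakening happens in every run
p_heads_and_monday = p_heads  # heads implies the only awakening is Monday
p_tuesday = 0.5               # a Tuesday awakening happens iff tails

# Conditioning on Monday carries no information, so credence stays 1/2.
p_heads_given_monday = p_heads_and_monday / p_monday
print(p_heads_given_monday)  # 0.5
```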

I'd say we want her to be well-calibrated in the sense that if we run the experiment many times and she says '1/2', it should come up heads half the time. I'm assuming that the coin actually comes up heads half the time. The tricky part is what we count as one time. If we count each time she answers (wakes up) as one time, the well-calibrated answer would be '1/3'; if we count each experiment as one time, it'd be '1/2'.

Personally I think the first option makes more sense. Example: 100 runs, 50 heads and 50 tails. So she answers 150 times and each time she says '1/3'. She is right in that the coin came up heads 1/3 of the time she answered.
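The two counting conventions can be compared with a quick simulation (my sketch, not from the comment above; heads yields one awakening, tails two, and she answers at every awakening):

```python
import random

# Monte Carlo comparison of the two counting conventions.
random.seed(0)
runs = 100_000
heads_runs = 0     # experiments in which the coin landed heads
heads_answers = 0  # answers given while the coin was heads
total_answers = 0  # answers given overall
for _ in range(runs):
    heads = random.random() < 0.5
    if heads:
        heads_runs += 1
        heads_answers += 1  # one awakening, one answer
        total_answers += 1
    else:
        total_answers += 2  # two awakenings, two answers

print(heads_answers / total_answers)  # per answer: close to 1/3
print(heads_runs / runs)              # per experiment: close to 1/2
```

Both frequencies come out of the same coin flips; only the denominator changes.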

If you understand a probability estimate p for an event E to mean that if we repeated the experiment n times, then E happens approximately p*n times, how can you then say that we should count not by iterations of the experiment, but by awakenings instead?

It would make sense if the awakenings themselves were random from Beauty's perspective: if she assumed that only one awakening happens in any iteration of the experiment and had no information about which is more likely. But Beauty is completely aware that this is not the case.

Is there any example in recent history of a highly discussed philosophical question in which philosophers were divided for years into competing "isms" until a consensus was established which resulted in one of the positions being accepted as correct?

More generally, has philosophy ever provided clarity on any subject?

These are not rhetorical questions.

@MartinRandall This a much more substantive answer than I expected, and I am glad that I asked such a stupid question.
However, it's not clear to me that these examples really satisfy the conditions I asked for - a recent, entrenched dispute in philosophy that no longer exists, having been decisively settled.
To pick on just one example, number 7, this author claims in this essay that Hempel's raven paradox is settled. But the Wikipedia article discusses numerous positions, and it appears there is some ongoing disagreement here after all.

@HarrisonNathan sure, this is just one philosopher's answer and it was more a reply to your second question than your first.

I am expecting available intelligence to increase over the coming years or decades so there may be faster progress in philosophy than has been the case historically.

@MartinRandall I probably shouldn't have phrased the second question the way I did: I meant for the "clarity" to be objective. Many people feel that the answer to the SBP is "clearly" 1/3 and others feel that it's "clearly" 1/2, but this is not what I mean. I would characterize the kind of things this author writes as the latter kind of "clarity" in that he seems to be forcefully arguing for positions about which there actually is some contention. (Though I have only read a fraction of it.)

It appears to me that most of the unclarity in philosophy is not of the sort that more intelligence can fix. Rather, it seems that most of philosophy is about matters of subjective opinion, or problems that simply aren't well-formulated, which cannot be solved because there actually is no objective solution. If it were otherwise, I would expect that philosophers would have resolved quite a lot of previously contentious issues, just as every science has done. So this is why I have pretty strong doubts that there will be a consensus about this particular problem.

@HarrisonNathan I think there are lots of problems that dogs think are insoluble that humans have solved and similarly I don't put much stock in humans thinking that problems are insoluble unless they have a proof.

For this specific problem I notice that we are living in a quantum universe where anthropics are a live issue, and there are likely facts about the most efficient way for an epistemic agent to reason and therefore how it should think about its own degree of belief.

Yes, we need to bind the words to the facts and the maths, but there is typically a best way to do that binding once a problem is resolved.

@MartinRandall Well, consider the question "what's the best color?" This is a question a small child might ask expecting someone wiser would know the right answer, but adults recognize it as a question without a conceivable answer.

You would like someone wiser to answer the question "what's the best way to reason?" It's not obvious to me that this is a different sort of question.

@HarrisonNathan there's surely a most aesthetically pleasing color averaged over all small children currently alive. Probably a pink of some kind.

Not a question that humans could conceivably answer.

@MartinRandall What you just did there is an example of making up a standard, and thereby transmuting the question into a different one.
That's exactly what philosophers do to questions such as "what's the best way to reason?" They formulate multiple entirely different adjacent questions, then call each other crazy. (This is unique to philosophy as far as I know - intelligent, skilled practitioners of the subject thinking not merely that their colleagues are wrong but that they are "crazy" and simply not seeing the obvious.)

@HarrisonNathan when the small child asked "what is the best color" they presumably meant something, and based on my small child experience they probably meant aesthetics-to-children. But of course if they meant something else there can be a different answer, or none.

@HarrisonNathan I see what you're getting at and I think you could make a prediction market on when we (broadly) will get to consensus on this question, if ever.

@MartinRandall They presumably didn't mean aesthetics averaged over all children - that's a bit advanced - and they certainly didn't mean to specify any of the various brain measurements one might do in order to operationalize that; neither did they think of the mathematical weightings that should apply to different levels of pleasantness of the colors in order to take the average. You can concoct all those things, and you can do it in a lot of different ways, but you will be answering questions that are different than the one the child asked.
Most of philosophy looks this way to me.
That's not to say nothing can ever be proven: indeed, some statements are tautologically false. However, I wouldn't have high confidence that any given controversial question is actually formulated well enough that a conceivable answer exists.

@HarrisonNathan I think you are using the wrong reference class here. Sleeping Beauty is a counterintuitive probability-theory problem, not a philosophy one. It has much more in common with the Monty Hall problem than with, say, the problem of the heap.

Proof of 1/2 by reductio ad absurdum:

Assume the thirder position is correct. Let's exaggerate the situation and say that you are woken up a million times if it's tails. Since you are a thirder, when you wake up you will be 1,000,000/1,000,001 confident that tails was flipped. In plain English, we would say that you know that tails was flipped.

You can deduce the above before the experiment is run. That is, you know that you will know that tails was flipped. If you know that you will know X, then you know X. Therefore, you know before the experiment begins that tails will be flipped. This is obviously wrong.
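For concreteness, the exaggerated thirder credence is a one-line computation (a sketch assuming a fair coin, one awakening on heads, and the thirder weighting-by-awakenings rule):

```python
# Thirder credence in tails when tails yields 1,000,000 awakenings
# and heads yields one.
tails_wakes = 1_000_000
heads_wakes = 1
p_tails_thirder = tails_wakes / (tails_wakes + heads_wakes)
print(p_tails_thirder)  # just under 1: near-certainty in tails

# Per experiment, tails is of course still flipped half the time,
# which is the tension the reductio above is pointing at.
```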

@ItsMe This is just the normal "no updating" argument for 1/2 but with different numbers. Read a few of the linked papers.

@ItsMe take it further.

If instead the rules say Sleeping Beauty will not be woken at all if it is heads, and will be woken once if it is tails, then they know before the experiment runs that if they are woken, it was tails: waking them up gives new information, and they should update on that information.

@ItsMe It's a nice thought experiment. I agree with @MartinRandall and @ShitakiIntaki: you will know that tails was flipped on Tuesday. But you won't know that tails was flipped on Wednesday or Monday. You will even know on Monday that you will know on Tuesday that tails was flipped. But you still won't know on Monday that tails was flipped.

@ItsMe I don't see how this is a reductio ad absurdum. It's not obviously wrong to me.

@ShitakiIntaki waking up versus not waking up at all is not an isomorphic analogy. The possible future paths in your example are distinguishable from each other, unlike in the original or the @ItsMe version. So you can't say "take it further". You are not taking it further; you are creating a new problem.

@KongoLandwalker you're not wrong. Any change in the parameters is a new problem. A+

My statement seeks to find utility in looking at a family of related problems that differ in a single variable, and perhaps to prompt the reader to investigate whether there is a generalized property of that set of problem statements such that, if it holds for the set, it holds for each member.
