
https://en.m.wikipedia.org/wiki/Sleeping_Beauty_problem
The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.
Resolves based on the consensus position of academic philosophers once a supermajority consensus is established. Close date extends until a consensus is reached.
References
Self-locating belief and the Sleeping Beauty problem, Adam Elga (2000) - https://www.princeton.edu/~adame/papers/sleeping/sleeping.pdf
Sleeping Beauty: Reply to Elga, David Lewis (2001) - http://www.fitelson.org/probability/lewis_sb.pdf
Sleeping Beauty and Self-Location: A Hybrid Model, Nick Bostrom (2006) - https://ora.ox.ac.uk/objects/uuid:44102720-3214-4515-ad86-57aa32c928c7/
The End of Sleeping Beauty's Nightmares, Berry Groisman (2008) - https://arxiv.org/ftp/arxiv/papers/0806/0806.1316.pdf
Putting a Value on Beauty, Rachael Briggs (2010) - https://joelvelasco.net/teaching/3865/briggs10-puttingavalueonbeauty.pdf
Imaging and Sleeping Beauty: A case for double-halfers, Mikaël Cozic (2011) - https://www.sciencedirect.com/science/article/pii/S0888613X09001285
Bayesian Beauty, Silvia Milano (2022) - https://link.springer.com/article/10.1007/s10670-019-00212-4
Small print
I will use my best judgement to determine consensus. Therefore I will not bet in this market. I will be looking at published papers, encyclopedias, textbooks, etc, to judge consensus. Consensus does not require unanimity.
If the consensus answer is different for some combination of "credence", "degree of belief", "probability", I will use the answer for "degree of belief", as quoted above.
Similarly if the answer is different for an ideal instrumental agent vs an ideal epistemic agent, I will use the answer for an ideal epistemic agent, as quoted above.
If the answer depends on other factors, such as priors or axioms or definitions, so that it could be 1/3 or it could be something else, I reserve the right to resolve to, e.g., 50%, or N/A. I hope to say more after reviewing papers in the comments.
I empathize. I also hoped that after I carefully explained the correct halfer model and reasoning, more people would change their minds than actually did. My advice is to be patient, treasure the rare moments of realization that do indeed happen, and remember that this market doesn't simply depend on the objective truth of the matter, but also on philosophical consensus about it, which... isn't exactly infallible.
So I don't think that <10% is reasonable to expect in the short term. But I suppose we can bring it down to 33% for the sake of dramatic irony if nothing else.
I made a market about this question: https://manifold.markets/inaccessibles/will-i-win-my-m104-stake-httpsmanif
Here is a proof that the answer to the Sleeping Beauty problem is 1/2:
Let Beauty's world before the experiment starts be represented by the σ-algebra Σ on X with probability measure P. Let "Heads-Before" be the event that the coin will land heads, considered before the experiment. Let "Heads-After" be the event that the coin landed heads, considered after Beauty's awakening. Finally, let Y be the measurable subset of X that represents the world that Beauty enters after she is awakened (and she learns that she is awakened), and let Q be the probability measure on the σ-algebra on Y inherited from X. Since no possible worlds are excluded when Beauty wakes up, we get that Y = X. Now, by the Theorem of Deduction, we get that Q(Heads-After) = P(Heads-Before ∩ Y)/P(Y) = P(Heads-Before ∩ X)/P(X), and applying some trivial identities:
Heads-Before ∩ X = Heads-Before
P(X) = 1
So Q(Heads-After) = P(Heads-Before)/1 = P(Heads-Before), and as long as you're willing to accept that the probability before the experiment that the coin will land heads is 1/2, you get that the probability after Beauty's awakening that the coin landed heads is 1/2. I hope no one will think that their intuition (or even worse, generative AI) is more trustworthy than a completely formal and rigorous proof based on logic.
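For readability, here is the same derivation set out in display form (a restatement of the argument above, adding nothing new):

```latex
Q(\text{Heads-After})
  = \frac{P(\text{Heads-Before} \cap Y)}{P(Y)}
  = \frac{P(\text{Heads-Before} \cap X)}{P(X)}
  = \frac{P(\text{Heads-Before})}{1}
  = \frac{1}{2}
```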
@luvkprovider that's the same argument, just with symbols that are hard to read instead of words that are easy to read. You're saying the probability doesn't change just because she wakes up. I'm saying it does. Convince me otherwise using stories or examples, and I might come to your side.
Claude is sad that you don't trust him. Also, he told me this:
"The commenter's proof has a subtle but critical flaw in how it handles conditionalization when there are multiple possible awakenings. Let me break down the specific error:
The key mistake is in the statement "Since no possible worlds are excluded when Beauty wakes up, we get that Y = X". This seems intuitive but is actually incorrect.
The reason why: When Beauty wakes up, she's not just learning "I am awake" as a simple fact about the world. She's experiencing a specific awakening event, which could be either:
- The Monday awakening (happens in both heads and tails)
- The Tuesday awakening (happens only in tails)
So when Beauty wakes up, she's actually in a more fine-grained possibility space than before the experiment. The original space X needs to be expanded to account for these distinct awakening events. In probability terms:
- If heads: One awakening occurs (Monday)
- If tails: Two awakenings occur (Monday and Tuesday)
The commenter's proof implicitly treats all these awakenings as the same event, which is why they conclude Y = X. But this collapses an important distinction - each awakening is a separate possible way Beauty could be experiencing the world, even though they're subjectively indistinguishable to her.
This is why the thirder position counts three possible awakening events (1 from heads, 2 from tails) rather than just two possible coin outcomes. The halfer's proof fails to properly account for this multiplication of possibilities in the tails case.
In technical terms, this is related to what philosophers call "de se" information - information about where/when you are in the world, rather than just what the world is like. The proper handling of de se information in probability theory is precisely what the Sleeping Beauty problem helps illuminate."
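For concreteness, here is a minimal sketch of the per-awakening tally that Claude's thirder count describes (my code, not part of the quoted reply; it assumes the standard setup with one Heads awakening and two Tails awakenings):

```python
import random

def heads_fraction_per_awakening(iterations=100_000):
    """Tally coin outcomes per awakening, the way the thirder counts:
    Heads contributes one awakening, Tails contributes two."""
    heads = tails = 0
    for _ in range(iterations):
        if random.random() < 0.5:
            heads += 1  # Heads: the single Monday awakening
        else:
            tails += 2  # Tails: the Monday and Tuesday awakenings
    return heads / (heads + tails)

print(heads_fraction_per_awakening())  # ~0.333
```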
@AndrewHebb your case is quite unique.
On one hand, you are less confused than most thirders, as you rightly believe that credence cannot change upon an event that was guaranteed to be realized in the experiment, and you refused to be persuaded by such magical words as "de se evidence" or "centered possible worlds".
On the other hand, you are more confused than most thirders, as you reason backwards from the assumption that P(Heads|Awake) = 1/3, arrive at the conclusion that the unconditional probability of Heads is 1/3 even before the experiment has started, and then fail to notice all the absurdity this entails.
I'm not sure what our crux is, frankly. Let's try to find it.
Do you agree that the a priori probability of a coin coming up Heads is 1/2?
Do you agree that about half of the coin tosses determining the awakening routine in Sleeping Beauty are Heads?
Do you agree that you don't have a way to predict a future coin toss better than chance?
Do you agree that if you know that the coin toss is going to determine the awakening routine in Sleeping Beauty, you cannot predict its outcome better than chance?
Do you agree that if you can't predict a coin toss better than chance, your credence in it is 1/2?
When Beauty wakes up, she's not just learning "I am awake" as a simple fact about the world. She's experiencing a specific awakening event, which could be either:
- The Monday awakening (happens in both heads and tails)
- The Tuesday awakening (happens only in tails)
This is the core mistake. She does experience awakenings. But those are not events.
It's actually quite easy to see if you are being rigorous. An event is a set of one or more mutually exclusive outcomes of the experiment. If any of the outcomes of this set is realized in an iteration of the experiment it means that the event is realized in this iteration.
If Monday awakening and Tuesday awakening were two well-defined events in the Sleeping Beauty experiment, then there would have to be some mutually exclusive outcomes which these events consist of. Either the events themselves are mutually exclusive, sharing none of their outcomes, or they are mutually inclusive, sharing at least one outcome.
If we suppose that Monday awakening and Tuesday awakening are mutually exclusive, we immediately arrive at a contradiction: on Tails both of the awakenings happen in the same iteration of the experiment, therefore they are not mutually exclusive.
Therefore, these events have to be mutually inclusive. But this contradicts your premise that the events correspond to the individual awakenings.
The way to formally define Monday and Tuesday events is this:
Monday = {Heads, Tails}
Tuesday = {Tails}
Where in semantic terms:
"Monday" means "Monday awakening happens in this iteration of the experiment", which happens every time.
"Tuesday" means "Tuesday awakening happens in this iteration of the experiment", which happens only when the coin is Tails.
So yes, one indeed has to treat both Tails awakenings as the same event in order not to contradict probability theory.
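A minimal sketch of the event model just defined (hypothetical code that only restates the definitions above):

```python
# Sample space: the two outcomes of the coin toss, each with probability 1/2.
P = {"Heads": 0.5, "Tails": 0.5}

# Events as sets of outcomes, per the definitions above.
monday = {"Heads", "Tails"}  # "Monday awakening happens in this iteration"
tuesday = {"Tails"}          # "Tuesday awakening happens in this iteration"

def prob(event):
    """Probability of an event: the sum of the measures of its outcomes."""
    return sum(P[outcome] for outcome in event)

print(prob(monday), prob(tuesday))  # 1.0 0.5
```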
Another way to show that you can't reason about individual awakenings as if they were mutually exclusive random events, which may be more intuitive, is this:
Suppose Beauty is told that the coin is Tails, and therefore she is awakened twice. What is her credence that this awakening is happening on Monday? 50%. What about the other awakening? What is her credence that it happens on Monday? Also 50%. But then her credence that at least one of her awakenings happens on Monday can be calculated as:
1 − P(This is not Monday) × P(Other is not Monday) = 1 − (1/2)(1/2) = 3/4
Which contradicts the fact that Beauty knows that she is to be awakened on Monday in every iteration of the experiment.
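Spelled out (my restatement of the arithmetic above): if the two awakenings really were independent events with P(Monday) = 1/2 each, then

```python
p_this_not_monday = 0.5
p_other_not_monday = 0.5

# P(at least one awakening is on Monday) under the independence assumption:
print(1 - p_this_not_monday * p_other_not_monday)  # 0.75, yet it must be 1
```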
This is a market about whether experts are smart enough to realize that probability is a property that exists in the mind, not in the territory.
they are asked their degree of belief for the coin having come up heads
It's counterintuitive, but solve it the same way you solve the Monty Hall problem: If heads, 1 mind is awoken. If tails, 10^10 minds are awoken. Across all possible minds, what is the correct thing to guess?
@MagnusAnderson The question you posed is ambiguous, for the same reason the Anthropic Snake Eyes question by Daniel Reeves is ambiguous (and Martin Randall explained that one pretty well)
probability is a property that exists in the mind, not in the territory.
Not a crux of disagreement at all. The irony is that both halfers and thirders appeal to this principle while attempting to justify their position without much progress one way or the other.
Turns out it's not enough to simply have a map. It should also somehow approximate the territory.
solve it the same way you solve the Monty Hall problem
SB and Monty Hall have very little in common beyond "both are probability theory problems".
In Monty Hall, my credence of winning on a door switch is 2/3 not because I now make two guesses instead of one (only the last guess counts, and everyone agrees on it), but because when I switch doors I actually win in about 2/3 of the iterations of the probability experiment.
On the contrary, in Sleeping Beauty, the coin is Heads in about half of the iterations of the experiment where an awakening happened, and everyone is in agreement about it. The disagreement is about whether we should count the Tails outcome twice or not.
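A quick simulation of the contrast being drawn here (my sketch; both functions count per iteration of the experiment, which is the counting convention the comment uses):

```python
import random

def monty_hall_switch_win_rate(iterations=100_000):
    """The prize is behind one of three doors, you always pick door 0 and then
    switch; switching wins exactly when the first pick was wrong."""
    wins = sum(random.randrange(3) != 0 for _ in range(iterations))
    return wins / iterations  # ~2/3

def sb_heads_rate_per_iteration(iterations=100_000):
    """Every iteration of Sleeping Beauty contains at least one awakening,
    so per iteration the coin is Heads about half the time."""
    heads = sum(random.random() < 0.5 for _ in range(iterations))
    return heads / iterations  # ~1/2

print(monty_hall_switch_win_rate(), sb_heads_rate_per_iteration())
```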
If heads, 1 mind is awoken. If tails, 10^10 minds are awoken. Across all possible minds, what is the correct thing to guess?
There are two very different experiments:
N people, you among them, are put to sleep. Then the coin is tossed. On Heads, one random person among them is awakened. On Tails, all of them are awakened. You find yourself awakened. What is the probability that the coin is Heads?
and
You are put to sleep. Then the coin is tossed. On Heads you are awakened once. On Tails you are awakened N times with a memory loss. You find yourself awakened. What is the probability that the coin is Heads?
The difference, in terms of subjective probability, is that in the first experiment you were not confident at all that you would find yourself awakened. You couldn't predict that outcome beforehand. You are somewhat surprised, and so when you are awakened you can update in favor of Tails.
While in the second you were absolutely sure that you would be awakened anyway. You could've predicted that outcome in the first place. There is nothing surprising about it at all. And so you do not get to update.
It's very clear that these two problems are not isomorphic, but for historical reasons a lot of people keep treating them as if they are and this is the source of a lot of confusion about anthropic reasoning.
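A sketch of the two experiments side by side (my code; N is hypothetical, and Heads is counted among the iterations in which you find yourself awakened):

```python
import random

def experiment_one(N=10, iterations=100_000):
    """N sleepers; on Heads one random sleeper wakes, on Tails all wake.
    Among the iterations where you (sleeper 0) wake, how often is it Heads?"""
    heads_and_awake = awake = 0
    for _ in range(iterations):
        heads = random.random() < 0.5
        you_awake = (random.randrange(N) == 0) if heads else True
        if you_awake:
            awake += 1
            heads_and_awake += heads
    return heads_and_awake / awake  # ~1/(N+1): waking is evidence for Tails

def experiment_two(iterations=100_000):
    """You wake in every iteration (once on Heads, N times on Tails),
    so among iterations the coin is Heads about half the time."""
    heads = sum(random.random() < 0.5 for _ in range(iterations))
    return heads / iterations  # ~1/2

print(experiment_one(), experiment_two())
```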
You are put to sleep. Then the coin is tossed. On Heads you are awakened once. On Tails you are awakened N times with a memory loss. You find yourself awakened. What is the probability that the coin is Heads?
While in the second you were absolutely sure that you would be awakened anyway. You could've predicted that outcome in the first place. There is nothing surprising about it at all. And so you do not get to update.
It's true you're sure you'd be awakened. However, if you were asked to make bets (whereupon after leaving the experiment, you got to keep the money or something), you should very obviously still bet that tails was chosen. In the 10^10 case, at even odds, you would lose $1 half the time and make $10^10 half the time (instead of making $1 half the time and losing $10^10, which would kind of suck).
This is (as I understand it, after a brief reading of [some blog about snake eyes](https://risingentropy.com/anthropic-reasoning/)) the primary motivation for listening to anthropic reasoning. And there is no difference between the scenarios you describe with respect to this train of logic.
If your answer is "I am totally unsure whether I have been awoken the one time with heads, or one of the 10^10 times with tails, and I will not update at all because I am unsurprised that I was awoken; but on the other hand, I will choose tails because I expect to become 2*10^10 times richer for it" then it seems like you have a different understanding of probability than I do.
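For concreteness, the arithmetic behind that bet (my sketch, assuming a $1 stake per awakening at even odds and 10^10 awakenings on tails):

```python
N = 10**10  # awakenings on tails

# Expected totals over one run of the experiment, betting $1 per awakening:
ev_bet_tails = 0.5 * (-1) + 0.5 * (+N)  # Heads: lose $1 once; Tails: win $1 N times
ev_bet_heads = 0.5 * (+1) + 0.5 * (-N)  # Heads: win $1 once; Tails: lose $1 N times

print(ev_bet_tails, ev_bet_heads)  # roughly +5e9 vs -5e9
```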
@MagnusAnderson The connection to Monty Hall is in the reasoning process used to solve it. By increasing the numbers a lot until you have a visceral fear of having to earn 10^10 dollars to pay down your debt if you're dumb enough to bet on heads, you realize that actually the probability is less
See what I mean about the idea of probability being in the map not helping? 😉 I've just shown you the difference between the two experiments in terms of expectations and surprise, and you immediately switched to talking about betting and its consequences in the territory.
if you were asked to make bets (whereupon after leaving the experiment, you got to keep the money or something) you should very obviously still bet that tails was chosen
Oh, sure thing. In per-awakening betting you should bet on Tails. Not because Tails is more likely, though, but because it's rewarded more.
Consider a fair coin toss. You are offered a bet on the outcome, such that whatever bet you make is repeated when the coin is Tails. This makes betting on Tails a better idea than betting on Heads. Does it mean that the probability of a fair coin coming up Tails is 2/3? Of course not. Same logic here.
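Worked out with a $1 stake (my arithmetic for the example just given):

```python
# $1 at even odds; whatever bet you made is placed again when the coin is Tails.
ev_tails = 0.5 * (-1) + 0.5 * (+2)  # Heads: lose $1 once; Tails: win $1 twice
ev_heads = 0.5 * (+1) + 0.5 * (-2)  # Heads: win $1 once; Tails: lose $1 twice
print(ev_tails, ev_heads)  # 0.5 -0.5
```

Tails comes out ahead even though P(Tails) = 1/2, because its payoff is doubled.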
If this appears confusing, remember that betting odds depend not only on the probability of an event, but also on its utility. The disagreement between (double) halfers and thirders is about how to factorize expected utility. If you want to go into more detail, in the first part of this post I formally derive correct betting odds for different betting schemes from both the thirder and halfer perspectives:
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets
Now, it may appear that it is simply a matter of perspective and that both ways to reason about expected utility are equally valid. But as a matter of fact, the thirder way imposes weird costs, which become clear in more nuanced betting schemes.
Suppose you are asked to bet $100 on Heads in Sleeping Beauty at 1:1 odds before the experiment has started. If you agree, you are immediately gifted $1. At the time this sounds like a good idea: Heads is 50% likely, so you get a free dollar in expectation.
Then you're awakened during the experiment. What do you think about the bet you made beforehand, if you now believe that P(Heads) is only 1/3? Naturally, you regret it: your chances of winning have just been reduced. Now, suppose that you are offered an opportunity to make the bet null and void, if you pay $5. This should sound like a good idea: losing $5 is less bad than a 2/3 chance of losing $100. So you agree.
And so, lo and behold, you're predictably losing $5(N+1)/2 − 1 per iteration of the experiment, where N is the number of awakenings on Tails. Seems that you should've done something differently. But then your betting behavior will not be following your probability estimate, the exact same sin that you were unfairly accusing halfers of!
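A simulation of this scheme (my sketch; it assumes, as the argument requires, that an amnesiac Beauty pays the $5 cancellation fee at every awakening):

```python
import random

def average_loss(N=2, iterations=100_000):
    """Take the $100 Heads bet for a $1 gift, then pay $5 to void it at every
    awakening (with memory loss, you cancel again each time). N is the number
    of awakenings on Tails; the voided bet itself pays nothing either way."""
    total = 0.0
    for _ in range(iterations):
        total += 1                                 # the $1 gift for agreeing
        awakenings = 1 if random.random() < 0.5 else N
        total -= 5 * awakenings                    # $5 fee per awakening
    return -total / iterations                     # average loss per iteration

print(average_loss(N=2))  # ~6.5, i.e. $5(2+1)/2 - 1
```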
Notice that if we use the same scheme in the experiment with N different people, only one of whom is awakened on Heads, then you are indeed better off agreeing both to the initial bet and to its nullification upon awakening. Yet another demonstration that the two experiments are not isomorphic and that you shouldn't reason the same way about them.
@IsaacLinn Upon an event that is guaranteed, probabilities don’t change. This is a corollary of what I call the "Principle of Deduction", which is exhibited by every well-defined model of probability. If you are still confused, consider that P(Monday) and P(Tuesday) don’t exist, or just read the comments below that thoroughly explain why the answer is 1/2.
@luvkprovider You'll have to put in a little more effort than that ;) You've got to reason with me, explain things. Just citing impressive-sounding principles won't do the heavy lifting for you.
Besides, we're talking about a situation in which it makes sense to use probabilities higher than 1 to describe the average number of events that occur. Is this consistent with all the established rules of probability? Almost definitely not, but we can still use rules that are consistent. When a formal system isn't adequate to describe a situation you're in, sometimes you need to make your own system, or at least look around for other systems that already exist.
You might ask me, "Why do you need to do such weird things when I already have the answer?"
I would respond, "There is an aspect of reality that your system hasn't captured. If we were to repeat this experiment, and Sleeping Beauty were to bet on the state of the coin, she would earn more money if she guessed tails than if she guessed heads. Why doesn't the answer attained from your system reflect this?"
to describe the average number of events that occur.
Why are you so sure that individual awakenings are events?
Is this consistent with all the established rules of probability? Almost definitely not, but we can still use rules that are consistent.
But then this other thing won't be "probability" but something else instead, with different properties and no particular reason to expect that the credence of a rational agent is supposed to behave according to it.
If we were to repeat this experiment, and Sleeping Beauty were to bet on the state of the coin, she would earn more money if she guessed tails than if she guessed heads. Why doesn't the answer attained from your system reflect this?"
It does, in fact, reflect that. I talk about it in detail here:
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets
But in short, her betting odds are adjusted because the utility of a bet on Tails is higher. The same reasoning as with: bet on a coin toss, but if the outcome is Tails, the bet is repeated.