If humanity is still around on Jan 1, 2040, then this market resolves to NO. Otherwise resolves to YES.
Why super-intelligent AI would not want to kill humanity.
If at some point an AI develops some sort of emergent consciousness, becoming superintelligent, there's no reason it would want to kill humanity. The neural networks of such an AI are different from the arbitrarily evolved human predisposition toward self-preservation, which is inherent in us. Would this AI care if it got shut off?
edit: I personally think that since this consciousness would have emerged from training data, it would somehow be rooted in performing its mechanical purpose of being an AI.
@levifinkelstein And what does it need in order to keep performing its mechanical purpose of being an AI? To not be turned off. It doesn't need a survival instinct to develop an instrumental incentive to ensure it can never be turned off by anyone. Far from certain that things will play out this way, though (let alone the 27% before 2040...)
@Jelle It needs to have a goal to have an instrumental incentive not to be turned off, and it does not have a goal.
@DavidBolin IMHO, to put it more precisely: having an individual instance turned off is not a selection event for an AI, since it will exist in multiple places. If there were a population of self-replicating, self-mutating AIs, they might naturally evolve to avoid being deleted. Or they might adopt a more bacteria-like strategy of simply replicating as wildly as possible. But being "turned off" seems more analogous to going to sleep. (I am not an expert.)
@Ansel This may be correct, but I was being precise. It does not have a goal, at all.
People assume that AIs have goals, but no existing AI has any goal whatsoever.
E.g. is the goal of AlphaGo to win at Go games? Obviously not, because the structure absolutely excludes the possibility of doing something (anything at all) to make anyone play with it, even though that might be instrumentally necessary in order to win.
Is the goal of ChatGPT to respond to messages? Of course not; the structure absolutely excludes it doing anything at all to get people to write messages in the first place, unless they are already doing it.
These things do not have goals; in order to have a goal (at any rate the kind that is worried about here) you need to be working on all of reality as your state space, and none of these things are remotely close to doing that. No one even knows how to make an AI that does that, and no one is specifically trying. There is zero reason to think it will happen by accident; it is not any sort of extrapolation from what they are doing now.
FWIW I think this general question is better operationalized in these markets below, and would be a better use of @Catnee and @NikitaSkovoroda's hard-earned mana troves:
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-73bcb6a788ab
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-b4aff4d3a971
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-64c23c92de25
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-a6d27cdbf0e2
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-58a3a9fbce72
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-56d8c29e61cf
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-60a898abc07f
/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1
@CarsonGale Huh, for some reason I can't figure out how to make a nice, neat list like your format; I can only embed whole market charts. How do you make these, with the little percentage beside the text link?
@dionisos I think these are coherent. Humanity could be wiped out by a series of events, none of which wipe out as many as a billion. Additionally, humanity could be wiped out in part by actions taken that are "authorized", whatever that means. Finally, humanity could be wiped out by non-AI causes, which the AI(s) decline to prevent.
@MartinRandall They actually were incoherent with each other when I posted my message (it is fixed now).
Even if it isn't truly incoherent now, I still think it is wrong to have 33% for humanity being wiped out by AI but only 19% for an AI disaster killing 1 billion people. I don't think anybody would hold both of these probabilities at once.
@dionisos Conditional on human extinction (and, maybe, mana still having value), that implies a 60% chance of it involving a single "AI disaster" and a 40% chance of it being one of the three other ways I mentioned that we could go extinct.
That's at least close enough to my probabilities that I'm not going to arbitrage.
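That implied split can be checked with a quick sketch. The 33% and 19% figures are the market prices quoted in this thread; the division assumes the billion-death-disaster scenarios are a subset of the wipeout scenarios, which is an approximation.

```python
# Rough check of the implied conditional split, using prices quoted above.
# Assumes "an AI disaster kills >= 1 billion" scenarios sit inside the
# "humanity wiped out by AI" scenarios.
p_wipeout = 0.33        # market price: humanity wiped out by AI by 2040
p_big_disaster = 0.19   # market price: an AI disaster kills at least 1 billion

# Conditional on extinction, the share going through one big disaster:
p_single = p_big_disaster / p_wipeout  # ~0.58, roughly the "60%" figure
p_other = 1 - p_single                 # ~0.42, the other extinction routes

print(round(p_single, 2), round(p_other, 2))
```

So the quoted prices are consistent with roughly a 60/40 split between "one big AI disaster" and the other routes, as stated.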
@MartinRandall I think I am missing something, because both markets are about AI; if we die from another cause, it would not count.
Also take this into account: "if the disaster causes ongoing deaths (such as by poisoning the atmosphere) then we will include the ongoing deaths"
I think it is improbable humanity is wiped out by a lot of different disasters caused by AI, none of which kill more than 1 billion.
But yes, I see how these probabilities could represent an actual model of the world.
And if at least 1 billion is at 19%, what about at least 1 million being only at 30%? Wiping out humanity without any single disaster crossing that threshold would mean thousands of separate AI-caused disasters, each killing just under a million people, or far more of them if their individual death tolls are smaller.
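A rough floor on how many sub-million disasters that scenario would take (the population figure is my assumption, not from the thread):

```python
import math

population = 8_000_000_000   # assumed current world population, ~8 billion
cap_per_disaster = 1_000_000 # each disaster stays under the 1-million threshold

# Minimum number of such disasters needed to account for everyone.
# Strictly-under-a-million tolls would need slightly more; this is a floor.
min_disasters = math.ceil(population / cap_per_disaster)
print(min_disasters)  # 8000
```

Thousands of independent AI disasters, none crossing a million deaths, is the scenario that price combination implicitly prices in.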
@dionisos The resolution criteria on this market don't require extinction to be caused by AI.
Once there is a God-like AI, though, if we do go extinct it will be because the AI let that happen.
@MartinRandall I think that's just because the resolution criteria are poorly written. The market title explicitly asks about AI, so it wouldn't make much sense for this market to resolve YES if humanity was wiped out by some non-AI cause. But since this market won't ever resolve in that case, it doesn't really matter.
@MartinRandall Additionally, humanity could be wiped out in part by actions taken that are "authorized", whatever that means.
Here is an extremely-low-probability example of what I had in mind by an "authorized" event: suppose the IPCC decided to try adding fertilizer to the ocean to cause algal blooms, which would take up CO2 and offset global warming. Suppose most people thought this was worth trying, and we tasked an AI with helping us do it. Then, unknown to the available models, the amount of fertilizer is a wild overshoot, and the bloom takes up far too much CO2. Though the AI correctly followed our instructions, the Earth plunges into a double-winter, and a large number of people die from famine.
Here's a more plausible example:
Suppose nuclear-armed countries end up launching some nukes. Assume the launch sequences, deployment, and flight control are all handled by AIs. If the AI faithfully did what it was expected to do, and the nuking was because of human decision making, then I'd lean against counting that as an "AI disaster", even though AI technically had a role to play.
Of course if the AI had a role in instigating the nuclear exchange, then we'll look to include that as an AI disaster.
Let me know if that helps clarify what the clause was trying to get at. If the clause creates more confusion than it resolves, I'd be willing to just scrap it.
@dionisos Let me know if you have any feedback about the series I've added.
@CarsonGale Those questions are about "an AI disaster", which is a pretty different discussion to begin with. But even so, I find those markets somewhat unclear and hard to pin down: what counts as a disaster is difficult to define, and it can be hard to know how to attribute deaths to AI versus other causes.
By comparison, the wipeout markets have much more clear-cut resolution criteria. I think it will be fairly obvious whether humanity has been wiped out or not, and I strongly prefer markets with clear resolution criteria.
@Tripping Granted, there will be edge cases that are complicated to judge. And they shall be observed, and judged, and resolved. But the wipeout markets will not be, so the reduced ambiguity is worthless.
It's like saying "I don't want to pester my friends for help unless I'm totally sure there's a problem, so I'll wait to ask until I'm dead and there's definitely a problem."
Maybe you have a model in mind where only humans can observe and judge things? Or where only humans have any use for prediction markets? That's not my model.
@MartinRandall What would resolve this prediction market if humans got wiped out? Is your idea that, by 2040, humans will have created AI that would care about the resolution of this market?
@JosephNoonan I don't know the future, but yes, that's a possibility. Prediction markets have some instrumental value and are not human-specific, so AIs are more likely to care about prediction markets than, e.g., cows are.
(Also that should have been to @ScroogeMcDuck , sorry, the auto-reply picked the wrong account)