Will someone commit violence in the name of AI safety by 2030?
20 traders · Ṁ1,159 volume · closes 2029 · 60% chance

Some are concerned that the development of a very advanced AI (an artificial superintelligence, or ASI) could lead to human extinction, with the ASI regarding humanity as an obstacle, or even an enemy, to its goals. The risk that this could happen is known as existential risk, or X-risk; when the risk comes from AI, it is called AI X-risk.

Since literally everything hangs in the balance if one considers this risk to be real, it would not be entirely surprising if someone radicalized by this perspective were to assassinate an AI researcher or executive, for example.

This market resolves yes if such an assassination happens, but it will also resolve yes if someone else is killed due to fears of AI-driven human extinction (for example, a politician perceived to be increasing AI X-risk), or if there is a mass-casualty event (e.g. OpenAI HQ is blown up, or someone dies as collateral in a datacenter bombing).


The existence of this market is understandable but concerning.

The stakes involved can make violence seem like a reasonable response, even when it is irrational and harmful to the cause it is meant to support.

In case anyone is thinking about violence as part of a solution to the AI Risk crisis:

In every case other than "we get lucky and everything is fine," reaching a solution will require a high degree of coordination. We need coordination between governments, between organizations, between individuals, and between each of those layers. Violence destroys coordination in multiple ways.

  • The occurrence of violence raises tensions between entities, so that they cannot coordinate as well

  • People react to violence by hardening their stances (which means that if you are losing, you will start to lose even harder)

  • The association with violence greatly reduces the ability of related movements to recruit (the appetite for violence in this context, across the global population, is extremely small)

  • Within a movement, the secrecy required to plan acts of violence greatly reduces the bandwidth and kinds of communication possible between its members

Violence is also loudly, and by all appearances sincerely, denounced by every major organization and figurehead in the world of AI Safety and AI Risk advocacy. Enacting violence would bring public denunciation and shaming from everyone in the space (and they would likely refuse to speak your name, to also disincentivize pure fame-seeking).

So deontology, virtue ethics, and utilitarianism all return the same verdict in this case, and acting from most selfish motives would likewise be counterproductive.
