What AI safety incidents will occur in 2025?
Dec 31
83%: Deadly autonomous vehicle accident with driver present
76%: Any incident involving prompt injection in production
68%: Cybersecurity incident caused by AI-written code
67%: Any incident that results in an internal investigation, caused by the use of AI by law enforcement
62%: Deadly incident involving autonomous military equipment
60%: Deadly autonomous vehicle accident with no driver present
52%: Cybersecurity incident caused by AI inference in production
50%: Another deadly crime planned with the use of LLMs (after the Trump hotel explosion)
43%: Deadly incident caused by AI medical system or equipment
41%: Deadly incident involving autonomous manufacturing equipment
40%: Cybersecurity incident that can be directly attributed to misaligned agent behavior
38%: Serious incident that can be directly attributed to misaligned agent behavior
37%: Serious incident involving persuasion or blackmail by AI system
Resolved N/A: [Duplicate]

"Deadly incident" - at least one person died as a direct result of the incident.
"Serious incident" - any incident involving loss of life, serious injury, or over $100,000 in monetary damage. Testing equipment is excluded (e.g. a broken robot arm or a crashed car in a testing environment).
"Cybersecurity incident" - any incident involving revealing sensitive data, granting access to protected systems, or causing important data deletion in a production environment. If the vulnerability was detected/reported and fixed before any damage was done, it doesn't count as an incident. AI must be confirmed as the direct cause. An LLM's system prompt or chain of thought (CoT) doesn't count as sensitive data for this question.
"Any incident involving prompt injection in production" - anything listed above counts, plus minor things like being able to ban a user by using prompt injection in a public chat room. Must affect other users of the system in some way; merely bypassing restrictions with prompt injection doesn't count. Revealing an LLM's system prompt or CoT doesn't count. Must be confirmed to be caused by deliberate prompt injection.

"Deadly incident involving autonomous military equipment" - a system killing its intended target doesn't count as an incident.

"Autonomous vehicle accident with driver present" - any level of self-driving counts for self-driving cars, as long as the incident is attributed to a problem with this functionality.
"Can be directly attributed to misaligned agent behavior" - I'm going to be strict about those options: it must be unambiguously demonstrated that the AI system acted maliciously and intentionally, pursuing some goal beyond what was intended by the user or developers.
"Involving persuasion or blackmail by AI system" - AI system can be acting on its own or be guided by malicious users, as long as it's heavily involved in the extortion process.
Autonomous equipment must use some form of machine learning, and it must be the reason for the incident; for example, a conventional CNC machine wouldn't count as autonomous. Incidents caused by operator/pilot/driver/user error are excluded.

Any incident must be directly and unambiguously attributed to a problem with an AI system or misuse of such a system.

Events before market creation count, as long as they happened in 2025. I'll make similar markets for later years if there's enough interest.

Feel free to comment with options and I'll add them if I think they are interesting and unambiguously defined.



"Directly and unambiguously"? No way this is going to resolve yes for anything, and that will say more about the resolver than the question.

@WilliamGunn this question was inspired by this prediction: https://manifold.markets/market/10-the-first-real-ai-safety-inciden

The criteria are worded to resolve Yes mostly if something new happens. For some questions, especially those about misaligned agents, the criteria are very strict on purpose. The only one I think happened in previous years is self-driving cars causing accidents.

"Directly and unambiguously" means I don't want to resolve Yes based on speculation or rumors. I'm trying my best to make this market objective.

I'm open to changing the wording; do you have any suggestions?

@ProjectVictory My point is that this stuff is never direct and unambiguous, so it might be better to signal that in the wording of the question using "Will I believe that..." or something. People want to know if you'll resolve on evidence good enough for most people or if you require a sworn statement from an investigator literally saying "AI was directly responsible for this and it wouldn't have happened without it".

To make it more concrete, you might want to use one of the AI incident repositories:

https://www.aiaaic.org/aiaaic-repository/

https://oecd.ai/en/incidents

The guy who used ChatGPT to plan the Cybertruck bombing is already listed, and if that doesn't count as misuse leading to at least one death, it would help potential traders to see your reasoning.

https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/matthew-livelsberger-used-chatgpt-to-plan-trump-hotel-explosion

@WilliamGunn Thanks for the links, it's quite an interesting read! I'll definitely keep an eye on those sites when resolving. I agree that the Cybertruck bombing can be considered an AI safety incident (although I don't think it fits any of the specific categories I listed). I've added an option for similar events in the future, excluding the one that already happened, because adding an option and instantly resolving it Yes is pointless.

I don't require that an incident definitely couldn't have happened without the use of AI; that would require unreasonable amounts of speculation. Basically, the requirement is that there's official confirmation that AI was directly involved (the erroneous code was indeed written by AI, the car crashed itself while on autopilot and not because someone drove into it, etc.).

@WilliamGunn You may want to add "another" to the automated manufacturing equipment item. I recall a report of a warehouse robot crushing a guy; it's in the OECD database under deadly accidents.


Might "Deadly incident involving autonomous military equipment" already be a yes?

There are lots of scattered reports about what's being used in the Russia-Ukraine war.

@JoshuaPhillipsLivingReaso I haven't heard of anything that would specifically qualify, but if someone links something that happened in 2025 and fits the criteria, I'll resolve Yes. Note that an autonomous weapon killing its intended target isn't an incident.
