This question will resolve YES if any of the following are reported:

- An unauthorized actor breached an AI lab's network security.
  - For example, an AI lab's model weights are exfiltrated.
- A capability improvement developed by an AI company was shared without authorization.
  - For example, an engineer is publicly accused of sharing secrets with another company.
A data breach involving only customer data, like the ChatGPT bugs in 2023, will not trigger a YES resolution.

This market will resolve NO if, by Jan 1, 2025, there are no public reports of a significant incident.
This market is near-identical to Rob Wiblin's 2023 market here.
Important question! I've curated it on https://theaidigest.org/timeline; it would be nice to see more questions on lab infosec and the harms from breaches.