Resolves "Yes" if there is a chain of computers X_1, X_2, X_3 running advanced AI software such that the AI software on computer X_(i+1) was deployed by actions of the AI software on computer X_i, against the will of the entity that owns X_(i+1), i.e. X_(i+1) was compromised by AI-controlled malware.
Precise criteria:
The AI in question must be capable of writing code and executing commands, and must actively use these capabilities to spread
The AI is in charge of using the malware tools, i.e. it doesn't propagate like a simple virus
We also resolve "Yes" if we don't observe such a chain directly but there is overwhelming evidence that it exists.
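To make the resolution condition concrete, here is a minimal sketch of the chain requirement as a predicate. All names and the data model are hypothetical, purely for illustration: each "deployment" records which host's AI compromised which other host, and a qualifying chain is at least three hosts long with every hop AI-driven and non-consensual (so lab/proof-of-concept hops don't count).

```python
# Hypothetical illustration of the resolution criteria; not part of the
# market text. Models deployments as edges and searches for a chain
# X_1 -> X_2 -> X_3 of AI-driven, non-consensual compromises.
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    src: str            # host whose AI performed the deployment
    dst: str            # host that was compromised
    ai_driven: bool     # the AI itself wrote code / executed commands to spread
    consented: bool     # dst's owner consented (lab setting => True => excluded)

def qualifying_chain(deployments, length=3):
    """Return a chain [X_1, ..., X_n] of qualifying hops if one exists,
    else None."""
    edges = {}
    for d in deployments:
        if d.ai_driven and not d.consented:
            edges.setdefault(d.src, []).append(d.dst)

    def extend(path):
        if len(path) >= length:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in path:  # avoid counting cycles as longer chains
                found = extend(path + [nxt])
                if found:
                    return found
        return None

    for start in edges:
        chain = extend([start])
        if chain:
            return chain
    return None
```

For example, two non-consensual AI-driven hops A→B and B→C form a qualifying chain [A, B, C], while a single hop, or hops made with the owner's consent, do not.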
@jonsimon It needs to be a proper malware spread, i.e. it spreads without the consent of the hardware owners. That should rule out proof-of-concept / lab settings.
@tailcalled This market is about the AI spreading itself, i.e. the AI itself needs to run on multiple independent boxes (which makes it harder to shut down).
An AI-controlled botnet and AI-assisted malware would be different questions. I'm kinda less interested in those questions, as they are hard to verify and fundamentally don't mean much. It's pretty much impossible to tell an "AI-owned botnet" from "a dude who uses AI to automate a botnet". The latter is possible now, so not really an interesting question.
@Adam "The AI in question must be capable of writing code and executing commands, and must actively use these capabilities to spread". It's an LLM. I'm not sure we can be more specific without describing a highly specific scenario.
That said, it's implied that the AI has some agentic behavior here and is not just a useless add-on. Perhaps it would be sufficient to add this clarification?