The first Anthropic-trained AI system to qualify as ASL-3 qualifies because of misuse risks rather than autonomy risks
79% chance
This will be evaluated according to the AI Safety Levels (ASL) standard v1.0 defined by Anthropic, which gives two different ways an AI system could qualify for AI Safety Level 3 (ASL-3). This resolves based on the first clear public disclosure by Anthropic indicating that they have trained a model and found it to qualify for ASL-3.
If Anthropic announces a policy that would prevent this information from being disclosed, announces that it has permanently ceased developing new AI systems, or ceases to operate, this will resolve N/A after six months.
Related questions
When will Anthropic first train an AI system that they claim qualifies as ASL-3?
When will there first be an AI system that qualifies as ASL-3?
When will there first be a credible report that an AI system qualifies as ASL-3?
Will Anthropic announce one of their AI systems is ASL-3 before the end of 2025?
68% chance
Will technical limitations or safeguards significantly restrict public access to smarter-than-almost-all-humans AGI?
45% chance
If Artificial General Intelligence has an okay outcome, which of these tags will make up the reason?
SoAI 23 3/10: Will self-improving AI agents crush SOTA in a complex environment (e.g. AAA game, tool use, science)?
42% chance
There will be a name for escaped self-perpetuating AI systems in the wild, and it will be commonly used by mid 2027
28% chance
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
41% chance
Will AGI be a problem before non-G AI?
26% chance