The prediction market will resolve as "Yes" if, by 2026, the following conditions are met:
Widespread adoption of advanced AI and automation technologies: The majority of internet users, services, and applications are exposed to advanced AI-driven systems (e.g., deepfakes, automated bots, or AI-generated content) that are capable of convincingly mimicking human behavior and communication patterns.
Ineffectiveness of existing verification methods: Traditional human verification methods (e.g., CAPTCHAs, two-factor authentication, or biometric security) are rendered ineffective or insufficient at distinguishing between humans and AI-driven systems in a majority of cases (see the sketch after these conditions).
Lack of reliable new verification techniques: No new, widely adopted, and reliable verification techniques have been developed and implemented to successfully differentiate between humans and AI-driven systems in most cases.
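As a minimal sketch of what one of the "traditional" checks in condition 2 actually verifies, here is a TOTP-based second factor, assuming the pyotp library (the secret handling and function name are illustrative, not part of the market's criteria). The point: it proves possession of a shared secret, not that the party entering the code is human.

```python
# Minimal sketch of a "traditional" second-factor check (TOTP) using pyotp.
# It verifies possession of a shared secret, not humanness.
import pyotp

# Shared secret provisioned at enrollment (e.g., scanned from a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def second_factor_passes(submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# Any client holding the secret -- human or bot -- produces a valid code:
print(second_factor_passes(totp.now()))  # True
```

This is the gap the first comment below points at: an automated agent holding the secret passes the check exactly as a human would.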
In what sense is biometric security a traditional human verification method, and what could AI possibly do to break it? (And likewise with 2FA: what level of AI gets you an endless supply of free SIM cards? That has almost nothing to do with AI.)
I think the inclusion of biometrics in the category makes it a bad question.
@Nostradamnedus if verification is tied to bank account or something similar that most children don't have, would the market resolve YES or NO?
@RaulCavalcante I think the verification needs to be independent of any external mechanism, such as bank account or biometric ID, both of which are not available to the majority of the world's population.
@Nostradamnedus If you take the whole world population probably not. But I would believe it's likely that more than 50% of the fraction of the world population that has access to the internet also has access to a least one of these kind of methods.