6. An AI system will pass the Turing test for speech.
Resolves 2026 · 72% chance
  • All these predictions are taken from Forbes/Rob Toews' "10 AI Predictions For 2025".

  • You can find the 2024 predictions here, and their resolution here.

  • You can find all the markets under the tag [2025 Forbes AI predictions].

  • Note that I will resolve to whatever Forbes/Rob Toews say in their resolution article for 2025's predictions, even if I or others disagree with his decision.

  • I might bet in this market, as I have no power over the resolution.


Description of this prediction from the article:
The Turing test is one of the oldest and most well-known benchmarks for AI performance.

In order to “pass” the Turing test, an AI system must be able to communicate via written text such that the average human is not able to tell whether he or she is interacting with an AI or interacting with another human.

Thanks to dramatic recent advances in large language models, the Turing test has become a solved problem in the 2020s.

But written text is not the only way that humans communicate.

As AI becomes increasingly multimodal, one can imagine a new, more challenging version of the Turing test—a “Turing test for speech”—in which an AI system must be able to interact with humans via voice with a degree of skill and fluidity that make it indistinguishable from a human speaker.

The Turing test for speech remains out of reach for today’s AI systems. Solving it will require meaningful additional technology advances.

Latency (the lag between when a human speaks and when the AI responds) must be reduced to near-zero in order to match the experience of speaking with another human.

Voice AI systems must get better at gracefully handling ambiguous inputs or misunderstandings in real time—for instance, when they get interrupted mid-sentence.

They must be able to engage in long, multiturn, open-ended conversations while holding in memory earlier parts of the discussion.

And crucially, voice AI agents must learn to better understand non-verbal signals in speech—for instance, what it means if a human speaker sounds annoyed versus excited versus sarcastic—and to generate those non-verbal cues in their own speech.
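The requirements above can be sketched as a toy agent loop. This is a minimal illustration, not a real voice-AI API: every class and method name here is hypothetical, and the "reply" is a canned string standing in for a streaming speech-to-speech model.

```python
import time
from collections import deque

class VoiceAgentSketch:
    """Toy model of the listed requirements: low latency,
    barge-in (interruption) handling, and multi-turn memory.
    All names are illustrative assumptions, not a real library."""

    def __init__(self, max_memory_turns=10):
        # Bounded conversation memory: earlier turns stay available
        # for long, open-ended conversations.
        self.memory = deque(maxlen=2 * max_memory_turns)
        self.speaking = False

    def respond(self, user_utterance):
        start = time.monotonic()
        self.memory.append(("user", user_utterance))
        # A real system would stream audio from a speech-to-speech
        # model here; we just echo a placeholder reply.
        reply = f"(reply to: {user_utterance})"
        self.memory.append(("agent", reply))
        self.speaking = True
        # Latency is the figure the article says must approach zero.
        latency_ms = (time.monotonic() - start) * 1000
        return reply, latency_ms

    def barge_in(self):
        # If the human interrupts mid-sentence, stop speaking at once
        # instead of finishing the planned utterance.
        self.speaking = False

agent = VoiceAgentSketch()
reply, latency_ms = agent.respond("hello")
agent.barge_in()
```

In a real system each of these stubs is hard: `respond` must stream partial audio before the full reply is planned, and `barge_in` must fire from a separate listening thread while synthesis is still running.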

Voice AI is at an exciting inflection point as we near the end of 2024, driven by fundamental breakthroughs like the emergence of speech-to-speech models. Few areas of AI are advancing more rapidly today, both technologically and commercially. Expect to see the state of the art in voice AI leap forward in 2025.
