Full question: "chance that the majority of consumer AI applications will operate autonomously on multi-step tasks rather than by just answering user questions as chatbots"
Resolution Criteria:
I will resolve this question to "yes" if it feels like most of the time we interact with AI systems it's as if we're interacting with a co-worker or friend who can actually take actions in the world, and to "no" if it feels like we're still using them as smarter internet search engines, knowledge bases, or question-answering systems. By analogy, in 2024 if we want directions somewhere, we (mostly) have our phone guide us with step-by-step directions. But 30 years ago, we would have (mostly) looked at a physical map and planned the route ourselves. This question asks whether we will (mostly) be using AI systems and expecting them to take the actions we ask of them, or whether we will still be using them as mere tools to help answer our questions.
Motivation and Context:
Currently we use LLMs to answer our questions. We don't expect them to actually do anything on our behalf. We ask them how to do something, they give us an answer, and then we go and do it ourselves. Will this still be the case? Or will we start asking AI systems to do things for us and expecting them to come back with completed tasks?
Question copied from: https://nicholas.carlini.com/writing/2024/forecasting-ai-future.html