This market is inspired by David Shapiro's video on hard takeoff:
https://youtu.be/71bSV-iHGLE?si=TzS38WnzVHma4OWn
Refer to it (2:18-8:34) for a definition of each scenario.
The market resolves according to the outcome of a poll that I'll run on Manifold with this question at the beginning of 2029.
By 2028, I think one of the biggest constraints on AI development will be ethical and regulatory challenges. As AI grows more advanced, especially in sensitive industries like finance, ensuring transparency, fairness, and data privacy will be necessary. For example, mobile banking developers (https://agilie.com/expertise/fintech/mobile-banking) are already integrating AI for fraud detection and personalized financial services, but these systems need to be built under strict guidelines to avoid bias or misuse. Additionally, AI relies on massive datasets, and access to quality, unbiased data could become a bottleneck.
@OlegEterevsky I see your point, and indeed there is some overlap, even if not total. For example, Yann LeCun claims that LLMs will never lead to AGI and that we are still missing the algorithms to achieve it, even though there has been substantial progress in that direction.
@SimoneRomeo Yann LeCun also claims that company law is enough to align human-mind-based systems smarter than any individual human (i.e., corporations) with humanity's values, and extrapolates from that to claim such law could control any AI.
The assertion that company law is able to align even companies is clearly false. Proof by counterexample: LeCun's own employer, Meta.