Will an AI achieve >85% performance on the FrontierMath benchmark before 2028?
62% chance

From a recent arXiv preprint:

We introduce FrontierMath, a benchmark of hundreds of original, exceptionally challenging mathematics problems crafted and vetted by expert mathematicians. The questions cover most major branches of modern mathematics -- from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory. Solving a typical problem requires multiple hours of effort from a researcher in the relevant branch of mathematics, and for the upper end questions, multiple days. FrontierMath uses new, unpublished problems and automated verification to reliably evaluate models while minimizing risk of data contamination. Current state-of-the-art AI models solve under 2% of problems, revealing a vast gap between AI capabilities and the prowess of the mathematical community. As AI systems advance toward expert-level mathematical abilities, FrontierMath offers a rigorous testbed that quantifies their progress.
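
The preprint doesn't spell out the verification mechanics here, but a minimal sketch of how exact-answer checking might work is below, assuming answers are parseable symbolic expressions and the grader simply tests symbolic equality. All names and the problem format are hypothetical, not FrontierMath's actual grading code.

```python
# Hypothetical sketch of automated answer verification: each problem is
# assumed to ship a ground-truth expression, and a submission is graded
# correct iff it is symbolically equal to that expression.
import sympy as sp

def verify_answer(submitted: str, ground_truth: str) -> bool:
    """Return True iff the submitted expression equals the ground truth
    after symbolic simplification."""
    try:
        diff = sp.simplify(sp.sympify(submitted) - sp.sympify(ground_truth))
        return diff == 0
    except (sp.SympifyError, TypeError):
        # Unparseable submissions are graded as incorrect.
        return False

# Syntactically different but equal answers both verify:
print(verify_answer("2**10 - 24", "1000"))    # True
print(verify_answer("sqrt(2)*sqrt(8)", "4"))  # True
```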

This question resolves to YES if the state-of-the-art average accuracy score on the FrontierMath benchmark, as reported prior to midnight, January 1st 2028 Pacific Time, is above 85.0% for any fully-automated computer method. Credible reports include but are not limited to blog posts, arXiv preprints, and papers. Otherwise, this question resolves to NO.

I will use my discretion in determining whether a result should be considered valid. Obvious cheating, such as including the test set in the training data, does not count.

This question is managed and resolved by Manifold.
bought Ṁ100 YES

I would not be surprised if AlphaProof can already achieve 30%+

that's a market

@mathvc Seems literally false as stated, because:
- AlphaProof needs formalized statements in Lean.
- I don't see how you could use the system to generate numeric answers effectively (the modality is somewhat different from the IMO).

I assume that an "AI model" is being interpreted broadly (e.g., it still counts as an "AI model" if it's hooked up to Lean and gets to use library_search; see the sketch below), but it would be good to clarify.
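
For reference, a minimal Lean 3 / mathlib illustration of the kind of tactic usage being described; the statement is just a toy example, not an AlphaProof interface:

```lean
-- library_search looks for an existing mathlib lemma that closes the goal.
import tactic

example (a b : ℕ) : a + b = b + a :=
by library_search  -- succeeds, e.g. via add_comm
```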

Frankly, I think the easiest criterion would be whether any fully automated method achieves 85.0% or above; it's not like non-AI methods are showing any signs of getting there.

@bakkot I clarified the criteria to match your suggestion.

opened a Ṁ25,000 YES at 62% order

bought Ṁ15,000 NO

more at 55%

opened a Ṁ3,000 YES at 60% order

@RyanGreenblatt More at 60%

bought Ṁ10 NO

Would this resolving YES essentially mean that research mathematics can be replaced by AI?

@zsig No, not necessarily. These are problems whose answers are already known and take the form of a single numerical value. Most open math problems don't fit these criteria, or involve coming up with the right mathematical notions and concepts to investigate in the first place.

It might still be that this benchmark is highly correlated with the abilities it doesn't measure, though, so I wouldn't bet strongly on full research capabilities taking much longer.
