Will Gemini-1.5-Pro-Exp-0801 Score Above 90.35 (current #1) in Scale AI's Instruction Following Evaluation
53% chance
Context:
Gemini-1.5-Pro-Exp-0801 is currently the leading model on the LMSYS Chatbot Arena leaderboard (https://arena.lmsys.org/).
This market concerns its potential evaluation by Scale AI (https://scale.com/leaderboard).
Resolution Criteria:
The market resolves as "Yes" if Scale AI evaluates the model and it receives a score strictly greater than 90.35 in the Instruction Following category.
The market resolves as "No" if Scale AI evaluates the model and it receives a score of 90.35 or less in the Instruction Following category.
The market resolves as "N/A" if either:
Scale AI does not evaluate the model and add it to the leaderboard before October 1, 2024, or
the evaluation methodology changes before the model is evaluated.
This question is managed and resolved by Manifold.
Related questions
Will Gemini-1.5-Pro-Exp-0801 Score Above 1165 in Scale AI's Coding Evaluation
28% chance
Will Gemini-1.5-Pro-Exp-0801 Score Above 1165 in Scale AI's Math Evaluation
48% chance
Will Gemini achieve a higher score on the SAT compared to GPT-4?
70% chance
Will Gemini-1.5-Pro-Exp-0801 Score Lower Than 8 (current best) in Scale AI's Adversarial Robustness
56% chance
By when will AI score >= 80% on FrontierMath
Will Gemini exceed the performance of GPT-4 on the 2022 AMC 10 and AMC 12 exams?
72% chance
Will Gemini outperform GPT-4 at mathematical theorem-proving?
62% chance
Will "Gemini [Ultra, 1.0] smash GPT-4 by 5x"?
18% chance
Will Gemini 1.5 Pro seem to be as good as Gemini 1.0 Ultra for common use cases? [Poll]
70% chance
Will Gemini Ultra outperform GPT-4V on visual reasoning by the end of 2024?
59% chance