Did Gemini 1.5 Pro achieve long-context reasoning through retrieval?
50% chance
There is no way an attention network is that good:
- 1-hour video understanding
- 99% accuracy on Needle in a Haystack
- Learning a language that no one speaks by reading a grammar book "in context"
Resolves YES if we later find out that the long-context ability was enhanced by agents/retrieval/search/etc., i.e. it was not achieved merely by extending the attention mechanism.
Resolves N/A if I can't find out by EOY 2024.
Related questions
Will Gemini 1.5 Pro seem to be as good as Gemini 1.0 Ultra for common use cases? [Poll]
70% chance
Will Gemini achieve a higher score on the SAT compared to GPT-4?
58% chance
Will Google Gemini perform better (text) than GPT-4?
35% chance
Will Gemini outperform GPT-4 at mathematical theorem-proving?
56% chance
Will Gemini Ultra outperform GPT-4V on visual reasoning by the end of 2024?
65% chance
Will Google Gemini do as well as GPT-4 on Sparks of AGI tasks?
76% chance
Will Gemini-1.5-Pro-Exp-0801 Score Above 90.35 (current #1) in Scale AI's Instruction Following Evaluation
53% chance
Will Gemini 2 ship before GPT-5?
74% chance
Will Gemini exceed the performance of GPT-4 on the 2022 AMC 10 and AMC 12 exams?
72% chance
Will Gemini-1.5-Pro-Exp-0801 Score Above 1165 in Scale AI's Math Evaluation
48% chance