What made gpt2-chatbot smarter (than GPT-4)?
- More and higher quality data: 64%
- Better model architecture design compared to GPT-4 (e.g., long context, MoD, etc.): 64%
- More parameters: 33%
- Model self-play/RL: 31%
- A new unsupervised training paradigm (not next-token prediction; has to involve more than 200B tokens of pretraining): 23%
- Ilya sitting behind the chat model: 4%
- multimodal training: Resolved YES

We will resolve either when OpenAI provides enough information (e.g., a technical report) or based on public opinion by EOY 2024.

Resolves to any number of choices that made the model stronger.

For example, if the question were about how other models got smarter than their predecessors, the resolutions would be:
- Llama 3: data

- Claude 3: data, parameters (? judging by the fact that Opus is 10x more expensive than Claude 2), RL, multimodal (? the multimodal training may not have improved text ability), architecture (?)

- Gemini 1.5 Pro: multimodal, architecture (long context + MoE), data (?)


John Schulman said most of the progress came from post-training.

But I did not have this option.

Dan bought Ṁ25 multimodal training YES

This can resolve YES:

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
