By 2025 end, will it be generally agreed upon that LLM produced text/code > human text/code for training LLMs?
22% chance

Quality.

predicted NO

https://arxiv.org/abs/2305.15717 perhaps relevant

The False Promise of Imitating Proprietary LLMs
An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model. In this work, we critically analyze this approach. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models -- they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.
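
For concreteness, here is a minimal sketch of the imitation-tuning setup the abstract describes: collect prompt/response pairs sampled from a stronger model and fine-tune a weaker open base LM on them as ordinary causal-LM data. It assumes Hugging Face `transformers` and `datasets` are installed; the base model, the toy dataset, and the hyperparameters are illustrative stand-ins, not values from the paper.

```python
# Sketch of imitation fine-tuning: train a small open base LM on
# prompt/response pairs sampled from a stronger proprietary model.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base_model_name = "gpt2"  # stand-in for a 1.5B-13B open base LM
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Imitation data: outputs collected from the stronger model (toy example here;
# the paper's setups use 0.3M-150M tokens of such pairs).
imitation_pairs = [
    {"prompt": "Explain overfitting in one sentence.",
     "response": "Overfitting is when a model memorises noise in the training data."},
]

def to_text(example):
    # Concatenate prompt and response into one causal-LM training string.
    return {"text": example["prompt"] + "\n" + example["response"]}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

ds = (Dataset.from_list(imitation_pairs)
      .map(to_text)
      .map(tokenize, remove_columns=["prompt", "response", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="imitation-model",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The paper's point is that models trained this way mimic the stronger model's style but close little of the capability gap, which is why it is cited here as evidence against the "LLM output beats human data" framing.
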

I just sold my NO position since I realised I need more clarification. I could imagine two ends of the spectrum of what would make this resolve YES:

  • LLMs are, in future, trained purely on the output of previous generation LLMs, applied recursively. This means that only some distant ancestor saw any raw human data. This approach is found to be superior (via some benchmarks) to training from human data. (I would bet NO on this.)

  • LLMs are used to sanitise/summarise/filter etc. the training data as a kind of preprocessing pipeline before it is used to train the next-generation LLM. (I would bet YES in this case, depending on the exact wording; see the sketch after this comment.)

In fact you could water down the second definition further by sanitising only some of the data, which could even be a minority.

Please can you clarify which of these definitions is what you had in mind, or provide another? Thanks!
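
To make the second interpretation concrete, here is a minimal sketch of an LLM-as-preprocessor pipeline: score each human-written document for quality, keep only the ones that pass, and train the next model on the filtered human text rather than on model-generated text. The names (`Document`, `judge_quality`, `filter_corpus`) are illustrative, and the heuristic scorer is a placeholder; in a real pipeline it would be a call to an LLM judge.

```python
# Sketch of interpretation (2): an LLM used as a data-quality filter over
# human-written training text, not as the source of the training text itself.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def judge_quality(doc: Document) -> float:
    # Placeholder scorer: a real pipeline would prompt an LLM to rate the
    # document (e.g. 0-1 for coherence/factuality) and parse its answer.
    words = doc.text.split()
    if not words:
        return 0.0
    return min(1.0, len(set(words)) / len(words))  # crude lexical-diversity proxy

def filter_corpus(docs, threshold=0.6):
    """Keep only documents the judge scores at or above `threshold`."""
    return [d for d in docs if judge_quality(d) >= threshold]

if __name__ == "__main__":
    corpus = [
        Document("a", "spam spam spam spam spam"),
        Document("b", "A short, coherent explanation of gradient descent."),
    ]
    kept = filter_corpus(corpus)
    print([d.doc_id for d in kept])  # -> ['b']
```

Under this reading, the next-generation model still trains on human text; the LLM only decides (or rewrites) what gets in, which is why the YES/NO call hinges on the exact wording.
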

@Tomoffer The second thing could happen. The first thing would not improve the data, but would eventually make it completely meaningless (by detaching the text from any contact with reality).

@DavidBolin Totally agree - I originally bet NO with the first interpretation in mind, but would bet YES if it's the second.

How do you tell?
