Will OpenAI offer a model that updates its weights while running during 2025?
26% chance

"Test-Time Training" or "TTT" is an approach to running neural networks that updates their weights while they solve problems. This is more similar to how biological brains work than the commercially available models seen so far, which are in a frozen state and don't change while they process information.

This market resolves YES if, sometime during 2025, OpenAI offers public (presumably paid) access to a model that adjusts its parameters at runtime, either through its chat service or through its API.

For an example of TTT, see the paper "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning".
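
To make the contrast with frozen-weight inference concrete, here is a minimal PyTorch sketch of the general TTT recipe: adapt a copy of the model on a self-supervised objective derived from each test input, then predict with the temporarily updated weights. The function names, loss, and hyperparameters are illustrative placeholders, not a description of any OpenAI system.

```python
# Minimal sketch of test-time training (TTT): before answering a given test
# input, take a few gradient steps on a self-supervised loss built from that
# input, then predict with the temporarily updated weights. The model and
# loss function here are hypothetical placeholders supplied by the caller.
import copy
import torch
import torch.nn as nn


def predict_with_ttt(model: nn.Module,
                     test_input: torch.Tensor,
                     self_supervised_loss,
                     steps: int = 5,
                     lr: float = 1e-4) -> torch.Tensor:
    # Work on a copy so the base ("frozen") model is left untouched.
    adapted = copy.deepcopy(model)
    adapted.train()
    optimizer = torch.optim.SGD(adapted.parameters(), lr=lr)

    # Update the weights *at inference time* using a loss that needs no
    # ground-truth label, e.g. masked reconstruction of the input itself.
    for _ in range(steps):
        optimizer.zero_grad()
        loss = self_supervised_loss(adapted, test_input)
        loss.backward()
        optimizer.step()

    # Predict with the per-input adapted weights.
    adapted.eval()
    with torch.no_grad():
        return adapted(test_input)
```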


This question is probably unresolvable, for two reasons.

1: They’re incentivized not to share this information. Why announce a competitive advantage publicly?

2: Depending on how broadly you interpret “runtime parameter adjustments,” existing offerings could already be doing this, depending on how their online RLHF or personalization features work.

Suggest rewording as “OpenAI publicly discloses the use of TTT during inference for one of their models…”

@KimberlyWilberLIgt

> Why announce a competitive advantage publicly?

To attract investment, perhaps.

> depending on how their online RLHF or personalization features work

I hadn't heard about online RLHF being used by OpenAI. I can't find any references to it. Do you have one?

@singer Oh, they don't use RLHF? I thought they might, since you can rate responses as "bad quality."
