The Multi-Layer Perceptron (MLP) is the standard neural network design in modern deep learning. If, by 2029, a state-of-the-art (SOTA) AI model is confirmed to have been created without the use of MLPs, this market resolves YES; otherwise NO.
Qualifying models
No MLPs
To qualify, a model must not use Multi-Layer Perceptrons at all. For the purposes of this market, an MLP is a series of 3 or more layers of nodes, where the nodes in each middle layer take a weighted sum of the outputs of the previous layer, apply a nonlinearity, and output the result. One example of a non-MLP neural network design is Kolmogorov-Arnold Networks (see this market, on which this one is heavily based).
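For concreteness, the definition above can be sketched as a minimal forward pass. This is only an illustration of what counts as an MLP here, not part of the resolution criteria; the layer sizes and the choice of ReLU as the nonlinearity are arbitrary:

```python
import random

def mlp_forward(x, weights, biases):
    """Forward pass through an MLP: each layer takes a weighted sum of the
    previous layer's outputs, adds a bias, and applies a nonlinearity
    (ReLU here, chosen arbitrarily for illustration)."""
    a = x
    for W, b in zip(weights, biases):
        # Weighted sum of the previous layer's outputs, per node.
        z = [sum(w * a_i for w, a_i in zip(row, a)) + b_j
             for row, b_j in zip(W, b)]
        # Nonlinearity.
        a = [max(0.0, v) for v in z]
    return a

# 3 layers of nodes (2 inputs -> 3 hidden -> 1 output): the minimum
# depth that counts as an MLP under the definition above.
random.seed(0)
weights = [
    [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)],  # 2 -> 3
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)],  # 3 -> 1
]
biases = [[0.0, 0.0, 0.0], [0.0]]
print(mlp_forward([1.0, -0.5], weights, biases))
```

A design avoids this definition only if no component stacks weighted sums and nonlinearities in this way, e.g. Kolmogorov-Arnold Networks replace the fixed nonlinearity with learnable functions on the edges.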
If the resolution date arrives and it is unknown whether a SOTA model uses MLPs, the market will resolve NO.
State-of-the-art
If the model is close to the state of the art on a commonly used eval, that's good enough to count. For instance, Gemini Ultra reportedly achieved a state-of-the-art 90.0% score on MMLU, but some argued that the eval was being gamed. Rather than getting into the weeds of deciding whether a model truly counts as SOTA, I'll err on the side of counting it, since the important thing is that a serious attempt was made at achieving SOTA without using MLPs.
If the model targets a new domain that doesn't yet have commonly used evals and it is clearly better than anything that came before, that also counts as state-of-the-art.
Past examples of AI models that qualify as "state-of-the-art" would include AlexNet, AlphaZero, GPT-3, DALL-E 2, Sora, Gemini Ultra, Suno, Udio, and Claude 3 Opus.
Created by humans
The AI model must be created by humans, rather than by an AI. If I believe that less than 50% of the core ideas for the MLP substitute were thought of by a human, it won't count. This lets us ignore the possibility of a "singularity" type of situation where an AI does AI research much faster than a human could.