Major physics discovery thanks to AI by the end of 2025?
87 · Ṁ6570 · 2026 · 17% chance

Will the use of machine learning result in a major breakthrough in physics (including cosmology and astronomy) by Dec 31, 2025?

A major breakthrough is defined as a major change in the consensus or the closing of a long standing open problem. Examples: settling the issue of the existence and nature of dark matter or dark energy, unifying gravity with the other three forces, discovering magnetic monopoles, determining that the correct interpretation of quantum mechanics is superdeterminism, etc.

The breakthrough must be in the science itself, not just a speedup or incremental improvement of existing methods; it should result in us learning something fundamentally new about physical reality. AI must be key for the breakthrough, such that without AI it would conceivably never have occurred on this timescale.


What do you mean "fundamentally"? Would the existence of a room-temperature-and-pressure superconductor not count as "fundamentally new" because it could in principle be deduced from quantum electrodynamics?

@ArmandodiMatteo Right. The existence of superconductors would qualify; having made a specific superconductor that works at T = 300 K would not, as there is nothing special about that temperature except human preferences, unless making it required learning something that upends our understanding of superconductivity.

>such that without AI it would conceivably never have occurred

How should this be understood? It sounds as if the AI would have to be an AGI more powerful than entire departments full of scientists?

@MrLuke255 Not necessarily; it can be an AI tool that enables a group to obtain radically new results that would otherwise have been impossible without the tool. To make this concrete, think of general relativity and tensor calculus. Without the ability to manipulate tensor quantities it would still be possible to develop the relevant intuition (the principle of equivalence and all that), but not to turn it into an actual testable theory, at least not in full generality. Now imagine that humans were unable to calculate with tensors, which is not that far-fetched if you have ever tried to compute, say, the Christoffel symbols of a given metric by hand. But Mathematica can do that easily. You get the idea.
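To make the Christoffel-symbol point concrete, here is a minimal sketch in Python with sympy (my illustrative choice of tool and metric, not anything from this thread; the unit 2-sphere is just a simple worked example):

```python
import sympy as sp

# Coordinates and metric of the unit 2-sphere: ds^2 = dθ^2 + sin^2(θ) dφ^2
theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()

def christoffel(a, b, c):
    """Christoffel symbols of the second kind:
    Γ^a_{bc} = (1/2) g^{ad} (∂_b g_{dc} + ∂_c g_{db} - ∂_d g_{bc})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[a, d] * (
            sp.diff(g[d, c], coords[b])
            + sp.diff(g[d, b], coords[c])
            - sp.diff(g[b, c], coords[d])
        )
        for d in range(len(coords))
    ))

# The two independent nonzero symbols of the 2-sphere:
# Γ^θ_{φφ} = -sin(θ)cos(θ)  and  Γ^φ_{θφ} = cos(θ)/sin(θ)
print(christoffel(0, 1, 1))
print(christoffel(1, 0, 1))
```

This is tedious by hand for any realistic metric, but a few lines for a computer algebra system, which is the point of the analogy.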

@mariopasquato I'm not entirely sure I understand. So at the end of the day you will make a final call, subjectively?

@MrLuke255 To put it bluntly yes. I may consider a poll.

AI must be indispensable for the breakthrough, such that without AI it would conceivably never have occurred.

This seems like a far too extreme requirement. For example: in possible worlds where we stop AI development due to risk and instead spend effort biologically improving ourselves, and manually designing far better non-AI problem-solving tools, we could conceivably get much or all of what AI can get, possibly barring superintelligence levels.

@Aleph Can we biologically improve ourselves that much in two years?

@mariopasquato I edited the description to make this clearer. “Never have occurred” -> “Never have occurred on this timescale”.

@mariopasquato If it is just for the next two years, then yeah we aren't biologically improving ourselves that much so soon.
(I think there's potential big improvements possible in theorem proving, but probably not in two years either)

I was coming from your other market https://manifold.markets/mariopasquato/conditional-on-a-major-breakthrough?r=bWFyaW9wYXNxdWF0bw which I interpreted as being mostly unbounded but now I realize is also 2025. Oops.

The definition of "AI" in use is also going to play a huge role. If curve fitting counts as AI, then it is highly likely AI will play a role. Less likely but still quite plausible: using machine-learning-based tools to semi-automatically annotate large numbers of images, with the annotations forming the basis for new discoveries (e.g. using Galaxy Zoo crowdsourced annotations as a training corpus). Would that count as AI? And would that be considered only a "speedup" (in principle you could just hire thousands of people to do the image analysis manually)? If instead the requirement is that the AI must actually be "thinking" and formulating theories/experiments/..., then IMHO very likely NO.

@MartinModrak Curve fitting with a limited number of parameters on a handful of data points has been around since Gauss. What makes machine learning machine learning is subjective but at a minimum I would require models where the number of parameters makes it unacceptable to judge goodness of fit on the training data itself, requiring a train/validation split.
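The train/validation criterion can be illustrated with a toy sketch (plain numpy, purely illustrative and not from this thread): a high-degree polynomial nearly interpolates its training points, and only held-out data exposes the overfit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.1, 40)

# Hold out part of the data: judge fit quality on points the model never saw
x_train, y_train = x[:30], y[:30]
x_val, y_val = x[30:], y[30:]

def mse(deg):
    """Train and validation mean squared error of a degree-`deg` fit."""
    p = np.poly1d(np.polyfit(x_train, y_train, deg))  # least-squares fit
    return (np.mean((p(x_train) - y_train) ** 2),
            np.mean((p(x_val) - y_val) ** 2))

# A degree-15 polynomial nearly interpolates 30 training points;
# the held-out validation error is what reveals the overfit.
train_err, val_err = mse(15)
print(f"train MSE: {train_err:.4f}, validation MSE: {val_err:.4f}")
```

With many parameters, the training error alone can be driven arbitrarily low and says nothing about generalization, which is why the split matters.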

Consensus in the community on any of these listed major breakthroughs would require experimental results, and AI is not yet close to being able to design and operate an ambitious experiment.

@WesleyJB I would also resolve YES in the following scenario: an anomaly detection algorithm sifts through the ~15 terabytes per night produced by the LSST telescope, finds a handful of sources with unexpected behavior, and this results in a refutation of established theory (say, abandoning Lambda-CDM in favor of modified gravity).
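As a toy sketch of the core step such a pipeline could take (pure numpy on synthetic data; nothing here reflects actual LSST tooling), one can flag sources whose summary features sit far outside the bulk of the population:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for per-source summary statistics from a survey:
# most sources cluster together; a handful behave unexpectedly.
normal_sources = rng.normal(0.0, 1.0, size=(10_000, 3))
odd_sources = rng.normal(8.0, 1.0, size=(5, 3))
features = np.vstack([normal_sources, odd_sources])

# Simplest possible detector: a robust z-score, using the median and the
# median absolute deviation (MAD) so the outliers don't skew the scale.
center = np.median(features, axis=0)
mad = np.median(np.abs(features - center), axis=0)
z = np.abs(features - center) / (1.4826 * mad)  # ~unit-variance scaling

# Flag any source that is a >6-sigma outlier in at least one feature
flagged = np.where(z.max(axis=1) > 6)[0]
print(f"flagged {len(flagged)} of {len(features)} sources")
```

A real survey pipeline would be vastly more sophisticated, but the shape is the same: score every source against the population, then hand the short list of anomalies to humans.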

So this project would qualify, if it pans out and the anomaly they observed genuinely leads to a discovery of a new particle?

https://www.anl.gov/article/machine-learning-could-help-reveal-undiscovered-particles-within-data-from-the-large-hadron-collider

© Manifold Markets, Inc.