Will I solve the alignment problem in 2024?
Dec 31 · 4% chance

I had an idea for how to solve the AI alignment problem, which I've been slowly writing up. At the end of 2024, if I still think there's something to my idea, I will do a poll of LessWrong or alignment researchers or something, and ask them how much they think my idea solves the alignment problem. The market will resolve to the mean answer.

If I don't get the writeup released before the end of June, I may push the resolution date.

If I lose belief in the idea, I may just resolve the market NO.


what threat model are you tackling and uhm

which part of the alignment problem? all of them?

Instead of using LessWrong, why not just submit it to a peer reviewed journal?

@Pykess Because I'm already used to submitting to LessWrong.

Which peer-reviewed journal are you suggesting I should use, and why should I use it?

@tailcalled Solving the alignment problem would be one of the greatest discoveries in at least the past 50 years, but more likely 100 or more. Something like that would belong in Nature or Science.

@Pykess It's also a hard enough problem that the priority should be getting the proposed solution in front of people for review ASAP, rather than dealing with the requirements of a journal. By all means publish it in a journal, conference, etc. after that review and vetting; there will be plenty of opportunity.

A theory for solving the problem that ends up ignored, only to be independently reinvented in a heavily modified form by experts, is not a solution to the problem.

I bet NO, but I sure hope I'm wrong!
