Purely private company endeavours do not count. At least 50% of the funding must come from one or more governments.
The focus must be on alignment over capabilities. A dual focus is acceptable as long as there's a serious commitment to alignment.
The total amount of funding must be at least $3 billion.
In the event of gradual funding over time, this market can resolve YES if the project meets all three criteria at any point in its existence.
For comparison, OpenAI's "Superalignment" probably meets the funding criterion, fails the government criterion, and (debatably) fails the alignment criterion.
@EliezerYudkowsky
So if these markets are somewhat correct (and if they are not, you can make some mana):
There is a substantial chance of billions being invested in an AI alignment project in the near future, but the project will probably not be very good.
So it seems that writing a clear document explaining the criteria this kind of project should meet and how the money should be spent, and then promoting it, could help.
Also, is something like this any good: https://chriscanal.substack.com/p/the-omega-protocol-another-manhattan ?
It seems to me there are some problems with the idea, like how to prevent the researchers from executing their code while they are programming, or how to prevent the code or ideas from leaking (intentionally or not).
But maybe these problems can be fixed?
Also, it seems somewhat orthogonal to more direct ways of making the AI aligned, so it could serve as an added layer of security within a more general project.
@dionisos I haven't read that article in its entirety, but seeing that it quotes a completely meaningless market as its source for the 11% chance of extinction, I'm skeptical the author is worth listening to.
https://manifold.markets/IsaacKing/will-a-largescale-eliezeryudkowskya
So apparently, getting approved by @EliezerYudkowsky is much harder than getting billions from the government for this kind of project (not very surprising).
Do we know what he would do with $3 billion?
@dionisos He said earlier that he would pay the best AI researchers to stop their research, so we would die a little later in expectation.