Will OpenAI publicly state that they DON'T know how to safely align a superintelligence, after 2027?
Standard · 23 traders · Ṁ1462 · closes 2031 · 23% chance

This is intended to capture whether OpenAI will publicly represent itself as having achieved, versus failed to achieve, the ambitious goal of solving superintelligence alignment within 4 years from July 2023 (i.e., by July 2027).

If they acknowledge that they haven't solved the problem yet, but they're optimistic that they will solve it in another X years, that still resolves to "yes".

The statement should be either an official statement from OpenAI (or from the Superalignment team in particular), or an informal statement from a member of OpenAI's senior leadership that isn't contested by other members of senior leadership.

(E.g., if Sam Altman says in an interview "we're not yet confident that our alignment techniques generalize to superintelligence" in 2028, and other OpenAI leaders don't dispute that, that is sufficient for a "yes" resolution.

But if Sam Altman says in an interview "we're not yet confident that our alignment techniques generalize to superintelligence" in 2028, and, e.g., Ilya Sutskever responds on Twitter, "I'm more optimistic than Sam. I think our current techniques will scale just fine to superintelligence," that is insufficient for a "yes" resolution. In that case, I would wait for either an official statement or a clear consensus among senior leadership.)

If, at the resolution date (January 1, 2031), there's no official statement and no clear consensus among the senior leadership of OpenAI / the Superalignment team, this question resolves to "no."
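
To make the resolution procedure concrete, here is a minimal sketch of the decision rule described above, written in Python. The PublicRecord fields and the resolve function are hypothetical names chosen for illustration; they are not part of the market itself.

from dataclasses import dataclass

@dataclass
class PublicRecord:
    # Hypothetical summary of the public record at the resolution date.
    official_statement_says_unsolved: bool  # official OpenAI / Superalignment statement
    leader_says_unsolved: bool              # informal statement from senior leadership
    contested_by_other_leaders: bool        # another senior leader disputes it

def resolve(record: PublicRecord) -> str:
    # An official statement that the problem is unsolved resolves YES,
    # even if it expresses optimism about solving it in another X years.
    if record.official_statement_says_unsolved:
        return "YES"
    # An uncontested informal statement from a senior leader also suffices.
    if record.leader_says_unsolved and not record.contested_by_other_leaders:
        return "YES"
    # No official statement and no clear leadership consensus by
    # January 1, 2031: the question resolves NO.
    return "NO"

In the contested example above, leader_says_unsolved would be true but so would contested_by_other_leaders, so this sketch returns "NO" unless an official statement or a clear consensus emerges later.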


Suggestion: change end of market title to "between 2027 and 2031" or similar

Suppose OpenAI gives a probability that their methods will basically work. How low does that probability need to be for this to resolve Yes?

@NoaNabeshima I think if they give less than 70% to it working, I would call that "yes": they don't have a solution that reliably works.