
For example, if he decides that we should actually try to build ASI even if it means a great risk to the human race, or if he decides that the creation of ASI doesn't actually pose a great risk to the human race.
Update 2025-02-25 (PST) (AI summary of creator comment): Key Resolution Update:
The reversal must include an explicit admission that Yudkowsky was wrong about his previous stance on AI safety.
Merely adjusting his perspective (e.g., claiming he was only slightly off or that we just got lucky) will not meet the criteria.
The explicit admission is the central and decisive component for a valid resolution.
@jim How about if he's still pretty much entirely himself, but a slightly different version of himself that just looooves paperclips? That is to say:
He gazed up at the enormous face. Forty years it had taken him to learn what kind of smile was hidden beneath the dark moustache. O cruel, needless misunderstanding! O stubborn, self-willed exile from the loving breast! Two gin-scented tears trickled down the sides of his nose. But it was all right, everything was all right, the struggle was finished. He had won the victory over himself. He loved Big Brother.
@jim What if he says we just got lucky, or that he was still mostly right but had a slightly too high p(doom)?