If no AI appears to have more control over the world than humans do, will the majority of AI researchers believe that AI alignment is solved?
@YonatanCale maybe some of the really Pollyanna-ish ones like LeCun, I guess? Even Altman says alignment work remains critical, and half of AI capabilities researchers estimate a P(doom) of 10% or higher.
@connorwilliams97
Wait,
If the only things left to do in "alignment" are things like "get cars to be more reliable", then that's a capabilities problem in my terminology, no?
Sam speaks about alignment, but as something needed to get better capabilities. He says they go hand in hand. (no?)
Do you think Sam personally has a P(doom) of around 10%+? I doubt it; I don't hear him worrying about AI killing everyone. He just uses words like "alignment" and "safety" to mean something else. (no?)
@Duncan I think Isaac meant: the majority of AI alignment researchers, or the majority of all AI researchers?
@Duncan Ah, so this is an ambiguity between "AI takeoff in progress" and "AI takeoff completed successfully." I was also confused. I feel like we are currently in the "boulder rolling down the hill" condition, but have not yet reached the "boulder all the way at the bottom of the hill" condition.