I'll resolve this entirely subjectively. (I'll stay out of betting in this market though.)
Resolves NA if AI safety doesn't become politically divided.
@NathanNguyen So if the left is more pro-regulation than the right for reasons that are 99% fairness/diversity/inclusion and 1% x-risk, how would you resolve? What if it’s 90-10, or 50-50?
@mariopasquato I’m not sure it makes sense to quantify things in that way. It’s more of an “I know it when I see it” kind of thing.
@NathanNguyen Any idea how to make this less arbitrary? Where does x-risk end and where do concerns about jobs and discrimination begin? Right now, if you read the famous open letter signed by Bengio, Wozniak, etc., would you conclude that the motive is x-risk or just negative economic and social impact?
@Gigacasting that is why they are the ones trying to outlaw medication, right? oh wait, no
@DerkDicerk I think autonomous weapons aren’t the kind of thing AI safety folks worry about when it comes to ending humanity.