Pope Francis just came out with a call for a binding global treaty to regulate AI:
Francis called for ethical scrutiny of the "aims and interests of (AI's) owners and developers," warning that some applications of AI "may pose a risk to our survival and endanger our common home," a reference to the earth.
Which other famous people will make similar statements? Feel free to add your own people.
Needs to be a public statement picked up by a credible news source, made in an interview, in personal writings, or similar.
I might bet in this market, but will avoid taking positions large enough to impair my judgment.
"Existential risk" broadly defined as on the EA Forum Wiki:
An existential risk is a risk that threatens the destruction of the long-term potential of life.[1] An existential risk could threaten the extinction of humans (and other sentient beings), or it could threaten some other unrecoverable collapse or permanent failure to achieve a potential good state.
Note that this is a broader definition than extinction risk, and could also cover things like totalitarian lock-in. However, smaller negative effects such as discrimination, bias, significant economic damage, election tampering, etc., would not be enough to resolve YES.
Let me know if further clarifications are needed.
Suggest resolving this as YES. From the 2023 von der Leyen State of the Union address:
"The same should be true for artificial intelligence.
It will improve healthcare, boost productivity, address climate change.
But we also should not underestimate the very real threats.
Hundreds of leading AI developers, academics and experts warned us recently with the following words:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
AI is a general technology that is accessible, powerful and adaptable for a vast range of uses - both civilian and military.
And it is moving faster than even its developers anticipated.
So we have a narrowing window of opportunity to guide this technology responsibly."
@GoodGuesser As in, they don't believe it could be an existential threat themselves, but acknowledge that others do?
I think the spirit of the market is that they mention it as a possibility/something worth thinking about, rather than dismissing it. Does that answer your question? Sorry for the late reply.
@HenriThunberg Yes, I think so. I wasn't sure if it was "refer to the concept of AI being an existential threat", but it sounds like it's "endorse the concept".
@dominic Hah, that's interesting, but rereading my original criteria I think they're pretty clear:
> Needs to be a public statement picked up by a credible news source, made in an interview, in personal writings, or similar.
So this would not be enough to resolve YES for Tom Cruise. Doesn't seem impossible he'd talk about it in an interview to hype the movie, though.