Donald Trump has started talking about artificial intelligence, recently saying in an interview with Jake Paul: “You know there are those people that say it takes over the human race. It’s really powerful stuff.”
This question resolves YES if Donald Trump, before the end of 2025, repeatedly raises concerns about x-risk from AI.
“repeatedly” means on at least 3 occasions, each at least one month apart, not counting any statements he has already made (such as the Jake Paul interview comments).
“raise concerns about existential risk from AI” means a public statement to the effect of “AI might cause human extinction or radical disempowerment in the not-too-distant future.”
Uncertainty is Fine: The statement can count even if it expresses uncertainty. For example, “Could AI kill us all? I don’t know, maybe yes, maybe no” would count.
Not Too Distant: Concerns framed as far-future would not count. For example, appending “But listen, these things won’t happen for a long time (such that we don’t need to do anything about it for now)” to a public statement about AI risk would disqualify it.
Third-Person is Fine: Raising concerns in the third person counts. For example, “People are saying AI will end the world” would count, so long as it is not followed by disparaging or dismissive comments (e.g. “... but I don’t believe them”).
Extinction or Disempowerment: “Existential” includes both human extinction (e.g. “We need to watch this, or it’ll kill us all.”) and radical disempowerment (e.g. “Everyone on earth will be left behind by AI” or “It’ll take over humanity.”).
I will use reasonable judgement to interpret statements and decide edge cases.
Given the above criteria, Donald Trump’s statement at the top of this question would have counted since it is sufficiently existential (“takes over the human race” amounts to radical disempowerment).