High Percent answer = Manifold would vote for Trump instead of this person
Low Percent answer = Manifold would vote for this person instead of Trump
Will be resolved by a poll for each option when the market closes.
See the other market for more options: /strutheo/who-would-have-to-be-running-for-ma-174d8cf13790
@strutheo To be clear, all questions resolve YES/NO, not to poll percentage? And if the poll is tied, does it resolve 50%?
I’m surprised no one has voted on Qin Shi Huang yet
Heads up guys, these active options were unironically literal Nazis (or 'just' served in the Nazi Wehrmacht):
Karl Dönitz
Erich von Manstein
Albert Speer
Otto Skorzeny
Hasso von Manteuffel
Erwin Rommel
and these options were suspiciously Nazi-adjacent/enabling:
Paul von Hindenburg
Erich Ludendorff
Jozef Tiso
Philippe Pétain
Miklós Horthy
Franz von Papen
and these options were Confederates:
James Longstreet
You don't get a second chance with AI X-risk, and Yann seems quite likely to consistently and competently sabotage the AI safety efforts of others.
Moreover, even if you think Trump would deal with it even worse, Yann's likely policies could make it much more likely that the take-off happens within his own presidency rather than under whoever gets elected next.
To quote @BrunoParga's comment:
"
Linus Pauling was a Nobel Prize-winning chemist, just like LeCun is a Turing Award winner for AI capabilities research.
Linus Pauling was really, really dumb about a closely related but distinct subject than the one he was an expert in, biochemistry - he believed in consuming massive amounts of vitamin C for your health.
Yann LeCun is really, really dumb about a closely related but distinct subject than the one he's an expert in, AI safety - he believes corporate law could control AGI (among other batshittery).
Both of them persevere because they are not aware of their ignorance in the relevant field.
The difference is that Pauling made expensive pee, and LeCun might get humanity
extinct
"
If Yann causes superintelligence alignment to fail, it will kill us and everyone we love.
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
"We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong." - Eliezer Yudkowsky
Obviously it's not guaranteed, but I personally believe the chance of death is 20-40%.