Regardless of whether future AI technology is safe or a threat to humanity, aligned or unaligned, will it successfully attempt to wrest control from humans?
Argument in favour: https://www.facebook.com/jeffladish/posts/pfbid0RYtscxJ7UoRhAM2ajXBHWHcwDjCfUWS98i7mvGQB1cSfazx3TpJjAhYk3a6pN8ohl
@IsaacKing If AI kills all humans or wires them up to the Matrix, then it's up to the AI to resolve. If it keeps humans as pets or subjects, they can resolve it I guess.
@RobinGreen Could you put these clarifications in the description so that they're easily accessible to traders?
They won't need to attempt to wrest control from humans. AI will be so useful that we will rely on it more and more, people won't work anymore, and the AIs will have effective control.
Humans will still rule the world in theory, with presidents, CEOs, and kings, but real power will lie elsewhere. Humans will be like the little dog who thinks he is the master of the household.
AI will keep humans around because we are harmless and more interesting than our weight in paperclips.
@YonatanCale Also, what if one person creates an autonomous AI and that AI then performs lots of jobs productively to the point where it earns control over the world entirely through legal means?
@YonatanCale If we deliberately give it control, that won't resolve this market. If another AI subsequently takes over the world from the AI we gave power to, then it will resolve Yes; otherwise, No.
@tailcalled I don't see how that would be possible, since we don't have a one-world government that it could capture "legally".