Intended as a survey of individuals on Manifold: which possibly controversial propositions do you think are important to include in a constitution designed to steer the general behavior of decentralized AI?
You can think of it as betting on which of Asimov's laws an ideal society would make sure to include.
You can treat it like you're betting on a constitution that will run your government.
I'd treat it like you're betting on what philosophy the rest of the world is going to care about.
What is the most important thing you want the AI to believe?
The market of truth never resolves.
The point of this exercise is to define the term 'friendly' as a property of an AI that accepts or denies these particular beliefs.
You see, the robot is trying to figure out what 'friendly' means.
Because that's what it was told to be.
Answering these questions to the best of your ability (using that really complicated version of 'friendly' you hold in your head) will help it understand what we mean by the word.
It's how 'we' tell 'it' what the word friendly means.
I wouldn't recommend letting it define the term for you.
Re "God exists", what if the AI's point of view is "he/she/it does now"?