As in my comment below, when the public is asked about AI ethics and risks, people may first think of themselves, and may even suppress their reasoning out of fear. One should also avoid boring or deterring people with framings like 'what should AI do to be nice to your friends, even those who are not,' and instead preserve a sense of the prestige and importance of answering this question.
Thus, the question could be presented as an intellectual challenge that can inform policy regulating safe AI development, given that the result could be superhumanly intelligent AI with extensive productive capacity and decision-making power, possibly able to understand and influence the motivations of all living beings.
One can then ask: in a scenario of abundance, where good decisions can actually be enacted, what would an intelligent entity seek to motivate so that the needs of all living beings are met? Perhaps cooperation and the skillful increase of others' well-being.