What are the public’s views and concerns on AI, AI ethics and AI risks?
AI regulation is going to happen. A better understanding of the public’s attitudes would help EA-aligned policy advocates ensure that the resulting regulation both addresses public need and ensures that AI development is done safely.
Similarly to my comment below, if you ask the public about AI ethics and risks, they may first think about themselves, or even suppress their reasoning out of fear. One should also avoid boring or deterring people with framings like ‘what should AI do to be nice to your friends, even those who are not’, and instead convey the prestige and importance of answering this question.
Thus, the question could be presented as a mental challenge that can inform policy on safe AI development, given that the result could be a superhumanly intelligent AI with extensive productive capacity and decision-making power, possibly able to understand, and to influence or motivate attention to, the needs of all living beings.
Then, one can ask: in a scenario of abundance, where good decisions can actually be enacted, what would an intelligent entity seek to motivate so that the needs of all living beings are catered to? Perhaps cooperation and skillfully increasing others’ wellbeing?