Thanks for writing on this important topic!

I think it’s interesting to assess how popular or unpopular these views are within the EA community. This year and last year, we asked people in the EA Survey about the extent to which they agreed or disagreed that:
Most expected value in the future comes from digital minds’ experiences, or the experiences of other nonbiological entities.
This year, about 47% (strongly or somewhat) disagreed, while 22.2% agreed (roughly a 2:1 ratio of disagreement to agreement).
However, respondents who rated AI risks a top priority leaned towards agreement, with 29.6% disagreeing and 36.6% agreeing (a 0.8:1 ratio).[1]
Similarly, among the most highly engaged EAs, attitudes were roughly evenly split, with 33.6% disagreeing and 32.7% agreeing (1.02:1); agreement was much lower among everyone else.
This suggests to me that the collective opinion of EAs, at least among those who strongly prioritise AI risks and those who are most highly engaged, is not so hostile to digital minds. Of course, for practical purposes, what matters most might be the attitudes of a small number of decision-makers, but I think the attitudes of engaged EAs matter for epistemic reasons.
Interestingly, among people who merely rated AI risks a near-top priority, attitudes towards digital minds were similar to those of the sample as a whole. Lower prioritisation of AI risks was associated with still lower agreement with the digital minds item.