A very interesting and engaging article indeed.

I agree that people often underestimate the value of strategic value spreading. Proposed moral models for AI agents to follow often have some lingering narrowness to them, even when they attempt to apply the broadest of moral principles. For instance, in Chapter 14 of Superintelligence, Bostrom highlights his common good principle:
Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.
Clearly, even something as broad as that can be controversial. In particular, it says nothing about non-human interests except insofar as humans hold widely shared beliefs in favor of protecting them.
I think one thing to add is that AIA researchers who hold more traditional moral beliefs (as opposed to wide moral circles or transhumanist views) are probably less likely to believe that moral value spreading is worth much. The reason is obvious: if everyone around you holds more or less the same values you do, why change anyone's mind? This may explain why many people dismiss the activity you proposed.
I think one thing to add is that AIA researchers who hold more traditional moral beliefs (as opposed to wide moral circles and transhumanist beliefs) are probably less likely to believe that moral value spreading is worth much.
Historically, that doesn't seem to have been true. As AIA becomes more mainstream, it will attract a more diverse range of people, which may produce a kind of common ground and normalization of values within the community. We should look for opportunities to collect data on this in the future to see how attitudes within AIA change. Of course, this could lead to attempts to directly influence the proportional representation of different values within EA, which would be prone to all the hazards of an internal tug of war pointed out in other comments on this post. Because the parts of the EA movement focused on the impact of advanced AI on the far future are relatively coordinated and have sufficiently similar goals, there isn't much risk of internal friction in the near future. I also think organizations from MIRI to FRI are averse to growing AIA in ways that would drive the trajectory of the field away from what EA currently values.