That sounds to me like, “Don’t talk about gun violence in public or you’ll enable people who want to overthrow the whole US constitution.” Directionally correct but entirely disproportionate. Just consider that non-negative utilitarians might hypothetically try to kill everyone to replace them with beings with greater capacity for happiness, but we’re not self-censoring any talk of happiness as a result. I find this concern to be greatly exaggerated.
In fact, moral cooperativeness is at the core of why I think work on s-risks is a much stronger option than alignment, as explained in the tractability section above. Concern for s-risks could thus even be a concomitant of moral cooperativeness, and so serve to counter any undemocratic, unilateralist actions by one moral system.
Note also that there is a huge chasm between axiology and morality. I have pretty strong axiological intuitions but what morality follows from that (even just assuming the axiology axiomatically – no pun intended) is an unsolved research question that would take decades and whole think tanks to figure out. So even if someone values empty space over earth today, they’re probably still not omnicidal. The suffering-focused EAs I know are deeply concerned about the causal and acausal moral cooperativeness of their actions. (Who wants to miss out on moral gains from trade after all!) And chances are this volume of space will be filled by some grabby aliens eventually, so assured permanent nonexistence is not even on the table.