The first words of my comment were “I don’t identify as a utilitarian” (among other reasons because I reject the idea of things like feeding all existing beings to utility monsters for trivial proportional gains to the latter, even absent all the pragmatic reasons not to; even if I thought such things more plausible, it would require extreme certainty or non-pluralism to get such fanatical behavior).
I don’t think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do including for other ideologies, or if they contact alien life?
But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities and where measures to prevent the situation hadn’t been taken even though the risk was foreseeable.
That’s reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation. That would go even more strongly for negative utilitarianism, since it doesn’t treat any life or part of life as intrinsically good, regardless of whether the being in question values it, and is therefore even more misaligned with the rest of the world (in its valuation of the lives of everyone else and of their descendants). And such responses give even utilitarian extremists reason to take actions that reduce such conflicts.
Insofar as purely psychological self-binding is hard, there are still externally available actions, such as visibly refraining from pursuing unaccountable power to harm others and taking steps to make such pursuit more difficult, for example by transferring power to those with less radical ideologies or by ensuring transparency and accountability to them.
Carl, you write that you are “more sympathetic to consequentialism than the vast majority of people.” The original post by Richard is about utilitarianism and replacement thought experiments, but I guess he is also interested in other forms of consequentialism, since the kind of objection he discusses can be made against those views too.
The following passage from your comment seems relevant to both utilitarianism and other forms of consequentialism:
I don’t think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do including for other ideologies, or if they contact alien life?
Even if these other pragmatic considerations you mention would not be removed by having control of Earth, the question remains whether they (together with other considerations) are sufficient to make it suboptimal to kill and replace everyone. What if the likelihood that they are in a simulation is not high enough? What if new scientific discoveries about the universe or multiverse indicate that taking into account agents far away from Earth is not so important?
You say,
But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities and where measures to prevent the situation hadn’t been taken even though the risk was foreseeable.
I don’t mean that the only way to object to the form of consequentialism under consideration is to stipulate away such things and assume they do not exist. One can also object that what perhaps makes it suboptimal to kill and replace everyone are complicated and speculative considerations about living in a simulation or about what beings on other planets will do. Maybe your reasoning about such things is flawed somewhere, or maybe new scientific discoveries will speak against such considerations, in which case (as I understand you) it may become optimal for the leader we are talking about to kill and replace everyone.
You bring up negative utilitarianism. As I write in my paper, I don’t think negative utilitarianism is worse off than traditional utilitarianism when it comes to these scenarios that involve killing everyone. The same goes for negative vs. traditional consequentialism, and for negative vs. traditional consequentialist-leaning morality. I would be happy to discuss that more, but I guess it would be too off-topic given the original post; perhaps a separate thread would be appropriate for that.
You write,
That’s reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation.
In that case the ideology (I would say morality) is not restricted to forms of utilitarianism but also includes many forms of consequentialism and views that are consequentialist-leaning. It may also include non-consequentialist views that are open to the idea that killing is sometimes right if it is done to accomplish a greater goal, and that, for example, place such importance on the far future that what happens to the few billion humans on Earth becomes a minor consideration. My point is that I think it’s a mistake to talk merely about utilitarianism or consequentialism here. The range of views about which one can reasonably ask ‘would it be right to kill everyone in this situation, according to this theory?’ is much wider.