Michal—thinking further on this, one issue that troubles me is the potential overlap between negative utilitarianism, dangerous technologies, and X-risk, an overlap that makes negative utilitarianism a far more dangerous information hazard than we might realize.
As many EAs have pointed out, bioweapons, nuclear weapons, and advanced AI might be especially dangerous if they fall into the hands of people who would quite like humanity to go extinct. This could include religious apocalypse cults, nihilistic terrorists, radical Earth-First-style eco-terrorists, etc. But it could also include people inspired by negative utilitarianism, who take it upon themselves to ‘end humanity’s net suffering’ by any means necessary.
So, in my view, negative utilitarianism is an X-risk amplifier, which makes it far more dangerous than 'just another perspective in moral philosophy' (as it's often viewed).