Hi Geoffrey, I find your work very interesting, which is why I respect your authority here, but I still can’t fully agree. For me, reading Perry honestly felt great: it meant that someone out there might hold views similar to mine, that someone would actually agree with me on certain things, that I was not all alone in the world. And in the end, both Perry and I lead fairly happy lives, I think. No one would arrive at her or Benatar’s writings accidentally, and if they did, they wouldn’t find them appealing.
But that was a side note. My main argument is this: I don’t deny that most people are net happy. I just think that the price paid by those who suffer is a very high one, and one not worth paying.
Michal, thinking further on this, one issue that troubles me is the potential overlap between negative utilitarianism, dangerous technologies, and X-risk: an overlap that makes negative utilitarianism a much more dangerous information hazard than we might realize.
As many EAs have pointed out, bioweapons, nuclear weapons, and advanced AI might be especially dangerous if they fall into the hands of people who would quite like humanity to go extinct. This could include religious apocalypse cults, nihilistic terrorists, radical Earth-First-style eco-terrorists, etc. But it could also include people inspired by negative utilitarianism, who take it upon themselves to ‘end humanity’s net suffering’ by any means necessary.
So, in my view, negative utilitarianism is an X-risk amplifier, which makes it far more dangerous than ‘just another perspective in moral philosophy’ (as it is often treated).