Ishaan—I can imagine some potentially persuasive arguments that negative utilitarianism might describe the situation for many wild animal species, and perhaps for many humans in prehistory.
However, our species has been extraordinarily successful at re-engineering our environments, creating our own eco-niches, and inventing technologies that maximize positive well-being and minimize suffering. The result is that, according to all the research I’ve seen on happiness, subjective well-being, and flourishing, most humans in the modern world are well above ‘neutral’ in terms of utility.
So the central claims of negative utilitarianism—which we could caricature/summarize as ‘life is suffering’ and ‘happiness is irrelevant’—simply aren’t true, empirically, for most modern humans.
Another way to frame this is to ask real people whether they’d be content to accept a painless suicide. The vast majority will say no. Why, then, should we think that, aggregated at the species level, we’d be content to accept a painless mass extinction event?
On a more personal note, as a psychology professor, I’m deeply concerned that writers such as Perry and Benatar can undermine the mental health of young adults who take philosophical questions seriously. I think their writings are basically ‘information hazards’ for those prone to dysthymia, depression, or psychosis. So, I think their ideas are empirically false, theoretically incoherent, and psychologically dangerous to many vulnerable people.
Hi Geoffrey—I’ve found your work very interesting and hence I respect your authority, but at the same time I can’t fully agree. For me, reading Perry honestly felt great: someone out there might hold views similar to mine, someone would actually agree with me on certain things, and I was not all alone in the world. And in the end, both Perry and I lead fairly happy lives, I think. No one would arrive at her or Benatar’s writings accidentally—and if they did, they wouldn’t find them appealing.
But that was a sidenote. My main argument is this: I don’t deny that most people are net happy. I just think that the price paid by those who suffer is a really high one, and one not worth paying.
Michal—thinking further on this, I think one issue that troubles me is the potential overlap between negative utilitarianism, dangerous technologies, and X-risk—an overlap that makes negative utilitarianism a much more dangerous information hazard than we might realize.
As many EAs have pointed out, bioweapons, nuclear weapons, and advanced AI might be especially dangerous if they fall into the hands of people who would quite like humanity to go extinct. This could include religious apocalypse cults, nihilistic terrorists, radical Earth-First-style eco-terrorists, etc. But it could also include people inspired by negative utilitarianism, who take it upon themselves to ‘end humanity’s net suffering’ by any means necessary.
So, in my view, negative utilitarianism is an X-risk amplifier, which makes it much more dangerous than ‘just another perspective in moral philosophy’ (as it’s often viewed).