Hi Nick, I'm reacting especially to the influential post, Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, which seems to me to frame the issues in the ways I describe here as "orthodox". (But fair point that many supporters of GHD would reject that framing! I'm with you on that; I'm just suggesting that we need to do a better job of elucidating an alternative framing of the crucial questions.)
I currently think the experience of being human might be many orders of magnitude more valuable than any other animal (I reject hedonism)
Thanks, yeah, this could be another crucial question: whether there are distinctive goods, intrinsic to (typical) human lives, that are just vastly more important than relieving suffering. I have some sympathy for this view, too. But it faces the challenge that most people would prioritize reducing their own suffering over gaining more "distinctive goods" (they wouldn't want to extend their life further if half the time would be spent in terrible suffering, for example). So either you have to claim that most people are making a prudential error here (and really they should care less about their own suffering, relative to distinctive human goods), or human suffering is orders of magnitude more severe than non-human suffering (which I don't really see a non-speciesist basis for confidently believing), or distinctive human goods aren't orders of magnitude more important than suffering after all. Seems tricky!
I do care about both present and future human beings, and am into longtermism as a concept, but am dubious right now about our ability to predictably and positively influence the long-term future
Great! That's exactly what I think supporters of GHD ought to believe. And it seems to support my reconceptualization: GHD is preferred over AI risk on grounds of reliability and robustness, not on grounds of "neartermism".