Thanks for these ideas, this is an interesting perspective.
I’m a little uncertain about one of your baseline assumptions here.
“We’re told that how to weigh these cause areas against each other “hinge[s] on very debatable, uncertain questions.” (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings? Neither of which strikes me as especially uncertain (though I know others disagree).”
I think I disagree with this framing, or perhaps there’s a bit of unintentional strawmanning here. Can you point to the EAs or EA arguments (perhaps on the forum) that weigh these worldviews against each other on explicitly speciesist grounds, or on the grounds that only present beings matter?
Personally, I’m focused largely on GHD (while deeply respecting other worldviews) not because I’m speciesist, but because I currently think the experience of being human might be many orders of magnitude more valuable than any other animal (I reject hedonism), and also because, even assuming hedonism, I’m not yet convinced by Rethink Priorities’ amazing research, which places the moral weights of pigs, chickens, and other animals extremely close to humans. Of course, you could argue I only think that because I’m a biased, speciesist human, and you might be right, but that’s not my intention.
And I do care about both present and future human beings, and am into longtermism as a concept, but am dubious right now about our ability to predictably and positively influence the long-term future, especially in the field of AI given EA’s track record so far—with the exception of policy and advocacy work by EAs which I think has been hugely valuable.
Others may have very different (but valid) reasons from mine for distinguishing between the importance of these worldviews, but I’m not sure you are right when you say “EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings? Neither of which strikes me as especially uncertain (though I know others disagree).”
I probably agree with Open Phil that there is indeed a range of important, uncertain questions, different from the somewhat obvious ones you stated, that can swing someone toward caring more about current humans, animals, or the long-term future.
But even setting this issue aside, I see merit in your framework as well.
I would be interested to see whether other people think that those two “not especially uncertain” questions are what push people toward one worldview or another.
Hi Nick, I’m reacting especially to the influential post “Open Phil Should Allocate Most Neartermist Funding to Animal Welfare”, which seems to me to frame the issues in the ways I describe here as “orthodox”. (But fair point that many supporters of GHD would reject that framing! I’m with you on that; I’m just suggesting that we need to do a better job of elucidating an alternative framing of the crucial questions.)
“I currently think the experience of being human might be many orders of magnitude more valuable than any other animal (I reject hedonism)”
Thanks, yeah, this could be another crucial question: whether there are distinctive goods, intrinsic to (typical) human lives, that are just vastly more important than relieving suffering. I have some sympathy for this view, too. But it faces the challenge that most people would prioritize reducing their own suffering over gaining more “distinctive goods” (they wouldn’t want to extend their life further if half the time would be spent in terrible suffering, for example). So either you have to claim that most people are making a prudential error here (and really they should care less about their own suffering, relative to distinctive human goods), or human suffering is orders of magnitude more severe than non-human suffering (which I don’t really see a non-speciesist basis for confidently believing), or distinctive human goods aren’t orders of magnitude more important than suffering after all. Seems tricky!
“I do care about both present and future human beings, and am into longtermism as a concept, but am dubious right now about our ability to predictably and positively influence the long-term future”
Great! That’s exactly what I think supporters of GHD ought to believe. And it seems to support my reconceptualization: GHD is preferred over AI risk on grounds of reliability and robustness, not on grounds of “neartermism”.