Re points A and B: one question is how sensitive the claim is to scope. It seems to me that we’re in the midst of a number of ongoing moral catastrophes (including commonly discussed current EA cause areas like hundreds of thousands of people dying annually from malaria, but also things like unjust penal systems, racism in the developing world, civil wars, people dying from poisoned air, or genital mutilation of children), so I see two possible clusters of beliefs:
1. You are scope sensitive, in which case this just reduces back to an argument about expected value. Certainly the hypothetical of humans in battery cages is unusually bad, but how it compares to longtermism isn’t structurally dissimilar to how e.g. we should relate to clean water access vs longtermism.
2. You take more of a justice-based, “any moral catastrophe is too much” approach. In that case I’m not sure how you can prioritize at all (“How can you focus on chickens when thousands of innocents are unjustly imprisoned?”).
Re point C: I’m not sure I understand this point. I think the argument is that there’s self-serving bias in the form of “motivated reasoning is more likely to push us towards incorrectly believing that things that benefit us and are costly to others are worth the cost-benefit tradeoff.” I basically think this is correct. So, on the margin, this should push us to be slightly more willing to be self-sacrificial than if we didn’t factor in this bias. In particular, it should push us slightly toward being (e.g.) more frugal in our personal consumption than we otherwise would be, or slightly more willing to do hard/unpleasant work for longer hours. Though of course there are also good arguments against frugality or overwork.
But the bias claim should be basically neutral on how we ought to judge what sacrifices others should make. This particular angle doesn’t (to me) clearly address why we should expect more bias toward overrepresenting future generations’ interests relative to the interests of existing animals.