I think the question of how to prioritize human welfare versus animal welfare should be approached with a philosopher's mindset. We must determine the meaning and moral weight of suffering in humans and non-humans before we can know how to weigh the causes against each other.
There are plenty of in-depth discussions on the topic of moral weights. But it seems your preferred moral theory is contractualism, which, as I understand it, renders the question of moral weights somewhat moot.
There was a post on contractualism arguing that it leads to global health beating animal welfare. The problem for you is that many people are attracted to EA precisely because of impartiality, and so have already decided they don't like contractualism and its conclusions. Check out this comment, which points out that contractualism can favor spending a billion dollars to save one life for certain over spending the same amount to almost certainly save far more lives. A conclusion like that just seems antithetical to EA.
If you want to argue about what we should do under a contractualist moral theory, you can do it here; you just might not get as much engagement as on other philosophy-related forums, since a lot of people here have already decided they are consequentialists (often after deep reflection).
I’m personally happy to discuss underlying moral theories. That’s why I’m looking forward to your answer to MichaelStJules’ question, which points out that your contractualist theory may imply special moral concern for, as he puts it, “fetuses, embryos, zygotes and even uncombined sperm cells and eggs”. That would carry a whole host of strongly pro-life and pro-natalist implications.
> Check out this comment which points out that contractualism can favor spending a billion dollars saving one life for certain over spending the same amount of money to almost certainly save far more lives. A conclusion like this just seems antithetical to EA.
FWIW, this is a consequence of non-aggregation. You could have a fully aggregative or even additive contractualist view, and it would not have this implication: it could be basically utilitarian with respect to moral agents (while excluding conscious beings who aren’t also moral agents). But contractualism is usually not aggregative, AFAIK.
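To make the contrast concrete, here is a minimal sketch (with purely hypothetical numbers, not figures from any actual analysis) of how an aggregative, expected-value view would compare the two options discussed above:

```python
# Hypothetical scenario: $1B either saves 1 life with certainty,
# or saves 1,000 lives with 90% probability.

def expected_lives_saved(prob: float, lives: int) -> float:
    """Expected number of lives saved for a given probability of success."""
    return prob * lives

certain_option = expected_lives_saved(1.0, 1)     # 1.0 expected life
risky_option = expected_lives_saved(0.9, 1000)    # 900.0 expected lives

# An aggregative view sums benefits across individuals and so prefers
# the risky option. A non-aggregative contractualist view can still
# favor the certain option, because no single individual among the
# 1,000 can claim the aggregate benefit as their own.
assert risky_option > certain_option
```

The point of the sketch is only that the divergence comes from aggregation, not from the probabilities themselves: on a non-aggregative view, each of the 1,000 potential beneficiaries has at best a 90% chance of being helped, which can be outweighed by the one person's certain claim.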