I favour animal welfare, but here are some (near-term) considerations I’m most sympathetic to that could favour global health:
I’m not a hedonist. I care about every way any being can care consciously and terminally about anything. So, I care about others’ (conscious or dispositionally conscious) hedonic states, desires, preferences, moral intuitions and other attitudes on their behalf. I’d guess that humans are much more willing to endure suffering, including fairly intense suffering, for their children and other goals than other animals are for anything. So human preferences might often be much stronger than other animals’, if we normalize preferences by preferences about one’s own suffering, say.
This has some directly intuitive appeal, but my best guess is that this involves some wrong or unjustifiable assumptions, and I doubt that such preferences are even interpersonally comparable.[1]
This reasoning could lead to large discrepancies between humans, because some humans are much more willing to suffer for things than others. The most fanatical humans might dominate. That could be pretty morally repugnant.
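To make the normalization argument concrete, here is a toy sketch (all numbers are made up for illustration, not estimates I endorse): if we normalize each agent’s preference strengths by the strength of their preference against their own suffering, an agent willing to endure a great deal of suffering for some goal ends up with a very large normalized preference, so fanatics dominate any aggregate.

```python
# Toy illustration of normalizing preferences by suffering-aversion.
# Each agent gets (strength of strongest other-directed preference,
# strength of preference against own suffering), in arbitrary
# within-agent units. These figures are hypothetical.
agents = {
    "typical human":   (5.0, 1.0),
    "fanatical human": (100.0, 1.0),
    "chicken":         (0.8, 1.0),
}

# Normalized strength = other-directed strength / own-suffering strength.
normalized = {
    name: other / own_suffering
    for name, (other, own_suffering) in agents.items()
}

for name, weight in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{name}: normalized preference strength = {weight:.1f}")
```

Under this scheme, the fanatical human’s normalized preferences swamp everyone else’s, which is the repugnant conclusion gestured at above.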
Arguments for weighing ~proportionally with neuron counts:
The only measures of subjective welfare that seem to me like they could ground interpersonal comparisons are based on attention (and alertness), e.g. how hard attention is pulled towards something important (motivational salience) or “how much” attention is used. I could imagine the “size” of attention, e.g. the number of distinguishable items in it, scaling with neuron counts, maybe even proportionally, which could favour global health on the margin.
But any such scaling probably comes with decreasing marginal returns to additional neurons, and I give substantial weight to the number of neurons not really mattering at all, once you have the right kind of attention.
Some very weird and speculative possibilities of large numbers of conscious or value-generating subsystems in each brain could support weighing ~proportionally with neuron counts in expectation, even if you assign the possibilities fairly low but non-negligible probabilities (Fischer, Shriver & St. Jules, 2022).
Faster-than-proportional scaling in expectation is also conceivable, but I think even modestly faster-than-proportional scaling leads to double counting I’d reject.
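The expected-weight reasoning above can be sketched numerically. In this toy calculation (credences and scaling functions are hypothetical, chosen only to illustrate the structure), even a low credence in proportional scaling with neuron counts makes expected moral weight grow with neurons, while substantial credence in neurons not mattering keeps the human-to-chicken ratio far below the raw neuron-count ratio.

```python
# Toy mixture over scaling hypotheses for moral weight as a function
# of neuron count n, normalized so a human gets weight 1 under each.
# Probabilities and functional forms are illustrative assumptions.
HUMAN_NEURONS = 86e9
CHICKEN_NEURONS = 0.22e9  # rough figure, for illustration only

hypotheses = [
    (0.5, lambda n: 1.0),                          # neurons don't matter
    (0.3, lambda n: (n / HUMAN_NEURONS) ** 0.5),   # decreasing returns
    (0.2, lambda n: n / HUMAN_NEURONS),            # proportional
]

def expected_weight(n):
    """Probability-weighted moral weight across the hypotheses."""
    return sum(p * f(n) for p, f in hypotheses)

print(expected_weight(HUMAN_NEURONS))   # 1.0 by construction
print(expected_weight(CHICKEN_NEURONS))
```

With these made-up numbers the chicken’s expected weight comes out around 0.5, driven almost entirely by the “neurons don’t matter” hypothesis, which is why credence in that hypothesis does most of the work.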
Animal welfare work has more steeply decreasing marginal cost-effectiveness.
Cost-effectiveness estimates for marginal animal welfare work are more speculative than GiveWell’s (RCT- and meta-analysis-based) estimates, at least for the more direct impacts considered. Maybe we’re not skeptical enough of the causal effects of animal welfare work, and the welfare reforms would have happened soon anyway or aren’t as likely to actually materialize as we think. I’m also inclined to give less weight to more extreme impacts when they’re more ambiguous/speculative, similar to difference-making ambiguity aversion.
I worry that a lot of animal welfare work backfires, and that support for apparently safer work funges with work that backfires, so the safer work can backfire too.
My best guess is that animal agriculture is good for wild animals, especially invertebrates, because it reduces their populations and I have very asymmetric views. So plant-based substitutes, cultured meat and other diet change work could backfire, if and because it harms wild invertebrates more than it helps animals used for food.
I worry that nest deprivation for caged laying hens could be much less intensely painful than the long-term pain from keel bone fractures, so cage-free could be worse because of the apparent increase in keel bone fractures.
I think we should support more work to reduce keel bone fractures in laying hens, and CE/AIM wants to start a new charity for this.
Saving human lives, e.g. through AMF, probably reduces wild animal populations, so seems good for animals overall if you care enough about invertebrates (relative to animals used for food) and think they’d be better off not existing.
Maybe farmed insect welfare work is even better, though.
People probably just have different beliefs/preferences about how much their own suffering matters, and those preferences are plausibly not interpersonally comparable at all.
Some people may find it easier to reflectively dismiss or discount their own suffering than others for various reasons, like particular beliefs or greater self-control. If interpersonal comparisons are warranted, it could just mean these people care less about their own suffering in absolute terms on average, not that they care more about other things than average. Other animals probably can’t easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires, so they might even care more about their own suffering in absolute terms on average.
We can also imagine moral patients with conscious preferences who can’t suffer at all, so we’d have to find something else to normalize by to make interpersonal comparisons with them.
I discuss interpersonal comparisons more here.