I’d be interested in hearing more of why you believe global health beats animal welfare on your views. It sounds like it’s about placing a lot of value on people’s desires to live. How are you making comparisons of desire strength in general between individuals, including a) between humans and other animals, and b) between different desires, especially the desire to live and other desires?
Personally, I think there’s a decent case for nonhuman animals mattering substantially in expectation on non-hedonic views, including desire and preference views:
I think it’s not too unlikely that nonhuman animals have access to whatever general non-hedonic values you care about, e.g. chickens probably have (conscious) desires and preferences, and there’s a decent chance shrimp and insects do, too (more here on sophisticated versions of desires and preferences in other animals), and
if they do have access to them, it’s not too unlikely that
their importance reaches heights in nonhumans that are at least a modest fraction of what they reach in humans, e.g. measuring their strength via attention, effects on attention, or human-based units, or
interpersonal comparisons aren’t possible for those non-hedonic values, between species and maybe even just between humans, anyway (more here and here), so
we can’t particularly justify favouring humans or favouring nonhumans, and so we just aim for something like Pareto efficiency, across species or even across all individuals, or
we normalize welfare ranges or capacities for welfare based on their statistical properties, e.g. variance or range (a toy sketch of this follows below), which I’d guess favours animal welfare, because
it will treat all individuals — humans and other animals — as if they have similar welfare ranges or capacities for welfare or individual value at stake, and
far greater numbers of life-years and individuals are helped per $ with animal welfare interventions.

I also discuss this and other views, including rights-based theories, contractualism, virtue ethics and special obligations, in this section of the piece of mine that you cited.
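For concreteness, here’s a minimal sketch of the range/variance normalization idea. All numbers and names are made up purely for illustration; nothing below is taken from RP’s actual estimates.

```python
# Toy sketch: normalize each species' welfare scores by statistical
# properties of its own scale, so that all individuals are treated as
# having similar welfare ranges at stake. Numbers are invented.
from statistics import mean, pstdev

def range_normalize(scores):
    """Rescale welfare scores to [0, 1] using their range (max - min)."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def variance_normalize(scores):
    """Rescale welfare scores to zero mean and unit standard deviation."""
    mu, sd = mean(scores), pstdev(scores)
    return [(s - mu) / sd for s in scores]

# Hypothetical welfare scores, each on its species' own incomparable scale.
human_welfare = [-50.0, 10.0, 80.0]  # arbitrary 'human units'
chicken_welfare = [-3.0, 0.5, 2.0]   # arbitrary 'chicken units'

# After normalization, both species span the same [0, 1] interval.
print(range_normalize(human_welfare))    # [0.0, 0.46..., 1.0]
print(range_normalize(chicken_welfare))  # [0.0, 0.7, 1.0]
```

On either normalization, an intervention that helps far more individuals per $ across a similar fraction of their normalized range is what tips the comparison towards animal welfare.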
Hi Michael,

Sorry for putting off responding to this. I wrote this post quickly on a Sunday night, so naturally work got in the way of spending the time to put this together. Also, I just expect people to get very upset with me here regardless of what I say, which I understand—from their point of view I’m potentially causing a lot of harm—but which naturally causes procrastination.
I still don’t have a comprehensive response, but I think there are now a few things I can flag for where I’m diverging here. I found titotal’s post helpful for establishing the starting point under hedonism:
For the intervention of cage free campaigns, using RP’s moral weights, the intervention saves 1996 DALYs per thousand dollars, about 100 times as effective as AMF.
However, even before we get into moral uncertainty I think this still overstates the case:
Animal welfare (AW) interventions are much less robust than the Global Health and Development (GHD) interventions animal welfare advocates tend to compare them to. Most of them are fundamentally advocacy interventions, which I think advocates tend to overrate heavily.
How to deal with such uncertainty has been the topic of much debate, which I can’t do justice to here. But one thing I try to do is compare apples-to-apples for robustness where possible; if I relax my standards for robustness and look at advocacy, how much more estimated cost-effectiveness do I get in the GHD space? Conveniently, I currently donate to Giving What We Can as ‘Effective Giving Advocacy’ and have looked into their forward-looking marginal multiplier a fair bit; I think it’s about 10x. Joel Tan looked and concluded 13x. I’ve checked with others who have looked at GWWC in detail; they’re also around there. I’ve also seen 5x-20x claims for things like lead elimination advocacy, but I haven’t looked into those claims in nearly as much detail.
Overall I think that if you’re comfortable donating to animal welfare interventions, comparing to AMF/GiveWell ‘Top Charities’ is just a mistake; you should be comparing to the actual best GHD interventions under your tolerance for shaky evidence, which will have estimated cost-effectiveness 10x higher or possibly even more.
Also, I subjectively feel like AW is quite a bit less robust than even GHD advocacy; there’s a robustness issue from advocacy in both cases, but AW also really struggles with a lack of feedback loops—we can’t ask the animals how they feel—and so I think it is much more likely to end up causing harm on its own terms. I don’t know how to quantify this issue, and it doesn’t seem like a huge issue for cage-free specifically, so I will set this aside. Back when AW interventions were more about trying to end factory farming rather than improving conditions on factory farms it did worry me quite a bit.
As I noted in my comment under that post, Open Phil thinks the marginal FAW opportunity going forward is around 20% of Saulius, not 60% of Saulius; I haven’t seen anything that would cause me to argue with them on this, and this cuts the gap by 3x.
Another issue is around ‘pay it forward’ or ‘ripple’ effects, where helping someone enables them to help others, which seem to only apply to humans, not animals. I’m not looking at the long-term future here, just the next generation or so; after that I tend to think the ripples fade out. But even over that short time, the amount of follow-on good a life saved can do seems significant, and probably moves my sense of things by a small amount. Still, it’s hard to quantify and I’ll set this aside as well.
After the two issues I am willing to quantify we’re down to around 3.3x, and we’re still assuming hedonism.
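Spelled out, that’s roughly 100x (titotal’s starting point under hedonism) ÷ 10 (the best-GHD-advocacy multiplier) ÷ 3 (the Saulius adjustment) ≈ 3.3x.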
On the other hand, I have the impression that RP made an admirable effort to tend towards conservatism in some empirical assumptions, if not moral ones. I think Open Phil also tends this way sometimes. So I’m not as sure as I usually would be about what happens if somebody looks more deeply; overwhelmingly I would say EA has found that interventions get worse the more you look at them, which is a lot of why I penalise non-robustness in the first place, but perhaps Open Phil + RP have been conservative enough that this isn’t the case?
***
Still, my overall guess is that if you assume hedonism AW comes out ahead. I am not a moral realist; if people want to go all-in on hedonism and donate to AW on those grounds, I don’t see that I have any grounds to argue with them. But as my OP alluded to, I tend to think there is more at stake / humans are ‘worth more’ in the non-hedonic worlds. So when I work through this I end up underwhelmed by the overall case.
***
This brings us to the much thornier territory of moral uncertainty. While continuing to observe that I’m out of my depth philosophically, and am correspondingly uncertain how best to approach this, some notes on how I think about this and where I seem to be differing:
I find experience machine thought experiments, and people’s lack of enthusiasm for them, much more compelling than ‘Tortured Tim’ thought experiments for trying to get a handle on how much of what matters is pleasure/suffering. The issue I see with modelling extreme suffering is that it tends to heavily disrupt non-hedonic goods, and so it’s hard to figure out how much of the badness is the suffering versus the disruption. We can get a sense of how much people care about this disruption from their refusal to enter the experience machine; a lot of the rejections I see and personally feel boil down to “I’m maxing out pleasure but losing everything that ‘actually matters’”.
RP did mention this but I found their handling unconvincing; they seem to have very different intuitions from mine about how much torture compromises the human ability to experience what ‘actually matters’. Empirical evidence from people with chronic nerve damage is similarly tainted by the fact that e.g. friends often abandon you when you’re chronically in pain, you may have to drop hobbies that meant a lot to you, and so on.
I’ve been lucky enough never to experience anything that severe, but if I look at the worst periods of my life it certainly seemed like a lot more impact came from these ‘secondary’ effects—interference with non-hedonic goods—than from the primary suffering. My heart goes out to people who are dealing with worse conditions and very likely taking larger ‘secondary’ hits.
I also just felt like the Tortured Tim thought experiment didn’t ‘land’ even on its own terms for me, similar to the sentiments expressed in this comment and this comment.
I mostly agree with your reasoning before even getting into moral uncertainty, up to and including this:
After the two issues I am willing to quantify we’re down to around 3.3x, and we’re still assuming hedonism.
However, if we’re assuming hedonism, I think your starting point is plausibly too low for animal welfare interventions, because it underestimates the disvalue of pain relative to life in full health, as I argue here.
I also think your response to the Tortured Tim thought experiment is reasonable. Still, I would say:
If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot, supporting RP’s take. And if you weigh desires/preferences by attention or their effects on attention, it seems nonhuman animals matter a lot (but something like neuron count weighting isn’t unreasonable).
I assume this is not how you weigh desires/preferences, though, or else you probably wouldn’t disagree with RP here, and especially in the ways you do!
If you don’t weigh desires by attention or their effects on attention, I don’t see how you can ground interpersonal utility comparisons at all, especially between humans and other animals but even between humans, who may differ dramatically in their values. I still don’t see a positive case for animals not mattering much.
If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot.
Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was I had to have it redone and endured more pain overall. My forebrain understood this, my hindbrain is dumb.
(FWIW the dentist was very understanding, and apologetic that the anesthetic didn’t do its job. I did not get the impression that my failure was unusual given that.)
When I talk about suffering disrupting enjoyment of non-hedonic goods I mean something like that flinch; a forced ‘eliminate the pain!’ response that likely made good sense back in the ancestral environment, but not a choice or preference in the usual sense of that term. This is particularly easy to see in cases like my flinch where the hindbrain’s ‘preference’ is self-defeating, but I would make similar observations in some other cases, e.g. addiction.
If you don’t weigh desires by attention or their effects on attention, I don’t see how you can ground interpersonal utility comparisons at all
I don’t quite see what you’re driving at with this line of argument.
I can see how being able to firmly ‘ground’ things is a nice/helpful property for a theory of ‘what is good?’ to have. I like being able to quantify things too. But to imply that measuring good must be this way seems like a case of succumbing to the Streetlight Effect, or perhaps even the McNamara fallacy if you then downgrade other conceptions of good in the style of the quote below.
Put another way, it seems like you prefer to weight by attention because it makes answers easier to find, but what if such answers are just difficult to find?
The fact that ‘what is good?’ has been debated for literally millennia with no resolution in sight suggests to me that it just is difficult to find, in the same way that after some amount of time you should acknowledge your keys just aren’t under the streetlight.
But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured. The second step is to disregard that which can’t easily be measured or given a quantitative value. The third step is to presume that what can’t be measured easily really isn’t important. The fourth step is to say that what can’t be easily measured really doesn’t exist.
To avoid the above pitfall, which I think all STEM types should keep in mind, when I suspect my numbers are failing to capture the (morally) important things my default response is to revert in the direction of common-sense morality. I think STEM people who fail to check themselves this way often end up causing serious harm[1]. In this case that would make me less inclined to trade human lives for animal welfare, not more.
I’ll probably leave this post at this point unless I see a pressing need for further clarification of my views. I do appreciate you taking the time to engage politely.

[1] SBF is the obvious example here, but really I’ve seen this so often in EA. Big fan of Warren Buffett’s quote here:
It’s worth distinguishing different attentional mechanisms, e.g. motivational salience versus stimulus-driven attention. The flinch might be stimulus-driven. Being unable to stop thinking about something, like being madly in love or grieving, is motivational salience. And then there’s top-down/voluntary/endogenous attention, the executive function you use to intentionally focus on things.
We could pick any of these and measure their effects on attention. Motivational salience and top-down attention seem morally relevant, but stimulus-driven attention doesn’t.
I don’t mean to discount preferences if interpersonal comparisons can’t be grounded. I mean that if animals have such preferences, you can’t say they’re less important (there’s no fact of the matter either way), as I said in my top-level comment.

***

Just to flag that Derek posted on this very recently. It’s directly connected to both the present post and Michael’s.