Hi Michael,
Sorry for putting off responding to this. I wrote this post quickly on a Sunday night, so naturally work got in the way of spending the time to put this together. Also, I just expect people to get very upset with me here regardless of what I say, which I understand (from their point of view I'm potentially causing a lot of harm) but which naturally causes procrastination.
I still don't have a comprehensive response, but I think there are now a few things I can flag for where I'm diverging here. I found titotal's post helpful for establishing the starting point under hedonism:
> For the intervention of cage-free campaigns, using RP's moral weights, the intervention saves 1996 DALYs per thousand dollars, about 100 times as effective as AMF.
However, even before we get into moral uncertainty I think this still overstates the case:
Animal welfare (AW) interventions are much less robust than the Global Health and Development (GHD) interventions animal welfare advocates tend to compare them to. Most of them are fundamentally advocacy interventions, which I think advocates tend to overrate heavily.
How to deal with such uncertainty has been the topic of much debate, which I can't do justice to here. But one thing I try to do is compare apples-to-apples for robustness where possible: if I relax my standards for robustness and look at advocacy, how much more estimated cost-effectiveness do I get in the GHD space? Conveniently, I currently donate to Giving What We Can as "Effective Giving Advocacy" and have looked into their forward-looking marginal multiplier (in roughly the sense sketched below) a fair bit; I think it's about 10x. Joel Tan looked and concluded 13x. I've checked with others who have looked at GWWC in detail; they're also around there. I've also seen 5x-20x claims for things like lead elimination advocacy, but I haven't looked into those claims in nearly as much detail.
Overall I think that if you're comfortable donating to animal welfare interventions, comparing to AMF/GiveWell "Top Charities" is just a mistake; you should be comparing to the actual best GHD interventions under your tolerance for shaky evidence, which will have estimated cost-effectiveness 10x higher or possibly even more.
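For concreteness, here is a minimal sketch of the "multiplier" framing above, assuming the simplest definition: counterfactual dollars moved to effective charities per dollar of operating cost. The figures in the example are hypothetical placeholders, not GWWC's actual accounts.

```python
# Hypothetical illustration of an effective-giving "multiplier":
# counterfactual donations moved per dollar spent on advocacy.

def giving_multiplier(counterfactual_donations_moved: float,
                      operating_costs: float) -> float:
    """Dollars counterfactually redirected to effective charities per dollar of cost."""
    return counterfactual_donations_moved / operating_costs

# Placeholder numbers: an org spending $1M that counterfactually moves $10M
# is a 10x multiplier, in line with the ~10x estimated for GWWC above.
print(giving_multiplier(10_000_000, 1_000_000))  # 10.0
```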
Also, I subjectively feel like AW is quite a bit less robust than even GHD advocacy; there's a robustness issue from advocacy in both cases, but AW also really struggles with a lack of feedback loops (we can't ask the animals how they feel) and so I think it is much more likely to end up causing harm on its own terms. I don't know how to quantify this issue, and it doesn't seem like a huge issue for cage-free specifically, so I will set it aside. Back when AW interventions were more about trying to end factory farming rather than improving conditions on factory farms, it did worry me quite a bit.
As I noted in my comment under that post, Open Phil thinks the marginal FAW (farmed animal welfare) opportunity going forward is around 20% of Saulius, not 60% of Saulius; I haven't seen anything that would cause me to argue with them on this, and it cuts the gap by 3x.
Another issue is around "pay it forward" or "ripple" effects, where helping someone enables them to help others, which seem to apply only to humans, not animals. I'm not looking at the long-term future here, just the next generation or so; after that I tend to think the ripples fade out. But even over that short time, the amount of follow-on good a saved life can do seems significant, and probably moves my sense of things by a small amount. Still, it's hard to quantify, and I'll set this aside as well.
After the two issues I am willing to quantify, we're down to around 3.3x, and we're still assuming hedonism.
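To make the arithmetic explicit, here is a back-of-envelope sketch of how the two quantified adjustments combine, assuming they are independent and simply multiply:

```python
# Back-of-envelope combination of the two quantified adjustments above.
# Assumes the adjustments are independent and simply multiply.

headline_multiple = 100           # cage-free vs AMF under RP's moral weights (titotal's figure);
                                  # implies AMF at roughly 1996 / 100 ~= 20 DALYs per $1,000
advocacy_multiplier = 10          # best GHD advocacy vs GiveWell Top Charities (~10x, as for GWWC)
saulius_adjustment = 0.60 / 0.20  # marginal FAW at 20% of Saulius rather than 60%, i.e. 3x

adjusted = headline_multiple / advocacy_multiplier / saulius_adjustment
print(f"~{adjusted:.1f}x")        # ~3.3x for AW vs best GHD, still assuming hedonism
```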
On the other hand, I have the impression that RP made an admirable effort to tend towards conservatism in some empirical assumptions, if not moral ones. I think Open Phil also tends this way sometimes. So I'm less sure than I usually would be about what happens if somebody looks more deeply; overwhelmingly, I would say EA has found that interventions get worse the more you look at them, which is a lot of why I penalise non-robustness in the first place, but perhaps Open Phil + RP have been conservative enough that this isn't the case?
***
Still, my overall guess is that if you assume hedonism, AW comes out ahead. I am not a moral realist; if people want to go all-in on hedonism and donate to AW on those grounds, I don't see that I have any grounds to argue with them. But as my OP alluded to, I tend to think there is more at stake / humans are "worth more" in the non-hedonic worlds. So when I work through this I end up underwhelmed by the overall case.
***
This brings us to the much thornier territory of moral uncertainty. While continuing to observe that I'm out of my depth philosophically, and am correspondingly uncertain how best to approach this, some notes on how I think about this and where I seem to be differing:
I find experience machine thought experiments, and people's lack of enthusiasm for them, much more compelling than "Tortured Tim" thought experiments for trying to get a handle on how much of what matters is pleasure/suffering. The issue I see with modelling extreme suffering is that it tends to heavily disrupt non-hedonic goods, and so it's hard to figure out how much of the badness is the suffering versus the disruption. We can get a sense of how much people care about this disruption from their refusal to enter the experience machine; a lot of the rejections I see and personally feel boil down to "I'm maxing out pleasure but losing everything that 'actually matters'".
RP did mention this, but I found their handling unconvincing; they seem to have very different intuitions from mine about how much torture compromises the human ability to experience what "actually matters". Empirical evidence from people with chronic nerve damage is similarly tainted by the fact that, e.g., friends often abandon you when you're chronically in pain, you may have to drop hobbies that meant a lot to you, and so on.
I've been lucky enough never to experience anything that severe, but if I look at the worst periods of my life, it certainly seemed like a lot more impact came from these "secondary" effects (interference with non-hedonic goods) than from the primary suffering. My heart goes out to people who are dealing with worse conditions and very likely taking larger "secondary" hits.
I also just felt like the Tortured Tim thought experiment didn't "land" even on its own terms for me, similar to the sentiments expressed in this comment and this comment.
***
I mostly agree with your reasoning before even getting into moral uncertainty, up to and including this:
> After the two issues I am willing to quantify, we're down to around 3.3x, and we're still assuming hedonism.
However, if we're assuming hedonism, I think your starting point is plausibly too low for animal welfare interventions, because it underestimates the disvalue of pain relative to life in full health, as I argue here.
I also think your response to the Tortured Tim thought experiment is reasonable. Still, I would say:
If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot, supporting RP's take. And if you weigh desires/preferences by attention or their effects on attention, it seems nonhuman animals matter a lot (but something like neuron count weighting isn't unreasonable).
I assume this is not how you weigh desires/preferences, though, or else you probably wouldn't disagree with RP here, and especially in the ways you do!
If you don't weigh desires by attention or their effects on attention, I don't see how you can ground interpersonal utility comparisons at all, especially between humans and other animals but even between humans, who may differ dramatically in their values. I still don't see a positive case for animals not mattering much.
***
> If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot.
Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was that I had to have it redone and endured more pain overall. My forebrain understood this; my hindbrain is dumb.
(FWIW the dentist was very understanding, and apologetic that the anesthetic didn't do its job. I did not get the impression that my failure was unusual given that.)
When I talk about suffering disrupting enjoyment of non-hedonic goods, I mean something like that flinch: a forced "eliminate the pain!" response that likely made good sense back in the ancestral environment, but not a choice or preference in the usual sense of that term. This is particularly easy to see in cases like my flinch, where the hindbrain's "preference" is self-defeating, but I would make similar observations in some other cases, e.g. addiction.
> If you don't weigh desires by attention or their effects on attention, I don't see how you can ground interpersonal utility comparisons at all
I don't quite see what you're driving at with this line of argument.
I can see how being able to firmly "ground" things is a nice/helpful property for a theory of "what is good?" to have. I like being able to quantify things too. But to imply that measuring good must be this way seems like a case of succumbing to the Streetlight Effect, or perhaps even the McNamara fallacy, if you then downgrade other conceptions of good in the style of the quote below.
Put another way, it seems like you prefer to weight by attention because it makes answers easier to find, but what if such answers are just difficult to find?
The fact that "what is good?" has been debated for literally millennia with no resolution in sight suggests to me that it just is difficult to find, in the same way that, after some amount of time, you should acknowledge your keys just aren't under the streetlight.
> But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured. The second step is to disregard that which can't easily be measured or given a quantitative value. The third step is to presume that what can't be measured easily really isn't important. The fourth step is to say that what can't be easily measured really doesn't exist.
To avoid the above pitfall, which I think all STEM types should keep in mind, when I suspect my numbers are failing to capture the (morally) important things, my default response is to revert in the direction of common sense (morality). I think STEM people who fail to check themselves this way often end up causing serious harm[1]. In this case, that would make me less inclined to trade human lives for animal welfare, not more.
I'll probably leave this post at this point unless I see a pressing need for further clarification of my views. I do appreciate you taking the time to engage politely.
[1] SBF is the obvious example here, but really I've seen this so often in EA. Big fan of Warren Buffett's quote here:
***
It's worth distinguishing different attentional mechanisms, like motivational salience from stimulus-driven attention. The flinch might be stimulus-driven. Being unable to stop thinking about something, like being madly in love or grieving, is motivational salience. And then there's top-down/voluntary/endogenous attention, the executive function you use to intentionally focus on things.
We could pick any of these and measure their effects on attention. Motivational salience and top-down attention seem morally relevant, but stimulus-driven attention doesn't.
I don't mean to discount preferences if interpersonal comparisons can't be grounded. I mean that if animals have such preferences, you can't say they're less important (there's no fact of the matter either way), as I said in my top-level comment.