Although I generally encourage dissenting opinions in the EA community, I think the idea expressed by this post is harmful and dangerous for reasons similar to those expressed by Brian and Peter.
1) “Some have argued for estimates showing animal welfare interventions to be much more cost-effective per unit of suffering averted, with an implication that animal welfare should perhaps therefore be prioritised.”
This seems to be a misrepresentation of the views held by many EAs. Cost-effectiveness calculations are employed by every EA prioritization organization, and nobody is claiming they necessarily imply a higher priority. They are only one of many factors we consider when evaluating causes.
2) “Moreover, if it could achieve a lasting improvement in societal values, it might have a large benefit in improved animal welfare over the long-term.”
I am glad this sentence was included, but it sits relatively deep in the post even though this consideration is one of the strongest reasons EAs advocate against factory farming and speciesism. I posted my thoughts on the subject here: http://thebestwecan.org/2014/04/29/indirect-impact-of-animal-advocacy/
3) “The upshot of this is that it is likely interventions in human welfare, as well as being immediately effective to relieve suffering and improve lives, also tend to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy. … ”
I could replace “human” with “non-human animal” in this passage and the argument would be just as valid. It’s a grand assumption that this applies to human-focused causes but not to others. If you have further justification for it, I think that would make an interesting post.
4) “For many types of human welfare intervention, we can use the short-term benefits to humans as a proxy for ongoing improvements in a way that is not possible – and may be misleading – when it comes to improvements to animal welfare.”
I think we’d all be happy for you to defend this assertion, since it is quite controversial within EA and the broader community.
Hi Jacy,
Thanks for your comment and the link to your own post, which I’d not read. I’m glad to see discussion of these indirect effects, and I think it’s an area that needs more work for a deeper understanding.
I’m a bit confused by your hostility, as it seems that we are largely in agreement about the central point, which is that the route to long-term benefits flows in large part through short-term effects on humans (whether those are welfare improvements, value shifts, or some other category). I’m aware that this is not a novel claim, but it’s also one that is not universally known.
I’m particularly confused by your opening sentence. Could you explain how this is harmful or dangerous?
A couple of replies to specific points follow, so that we can thread the conversation properly if need be.
Owen,
I appreciate that you’re thinking about flow-through/long-term effects and definitely agree we need more discussion and understanding in the area.
My “hostility” (although it isn’t that extreme =] really) is primarily due to the propagation of the assumption that “human-focused causes have significant positive flow-through effects while non-human animal-focused causes do not.” We have a lot more research to do on questions like this before we can have that sort of confidence.
So the danger here is that impact-focused people might read the post and say “Wow, I should stop trying to support non-human animal welfare, since it doesn’t matter in the long run!” I realize that your personal view is more nuanced, and I wish that came across more in your post. The possible flow-through effects of animal advocacy, such as (i) promoting antispeciesism, (ii) encouraging scope sensitivity, and (iii) reducing cognitive dissonance, among many others, seem perfectly viable.
Hope that makes sense.
Sure, that makes sense. I think that the post would only be likely to elicit that immediate response in someone whose major reason for supporting animal welfare was the large amount of short-term suffering that it could avert, but I will make sure to pay attention to the possible take-home messages when writing blog posts.
I’m happy to hear you state your views outside the post. They seem reasonable and open-minded, which was not my original impression. I look forward to reading more of your work. Always feel free to send me articles/ideas for critique/discussion.
> I think we’d all be happy for you to defend this assertion, since it is quite controversial within EA and the broader community.
I think you must be misreading my assertion, because I don’t think it’s very controversial.
I am here saying, and not here defending (though I link to others who say it), that many short-term welfare benefits to humans are likely to compound in a way that means that the size of the short-term benefit tracks the size of the long-term benefit.
I’m also claiming that, in contrast, with animal welfare interventions it matters much more how the benefit was achieved, because most of the indirect benefits flow through the same channel: better welfare outcomes achieved via human value shifts, for example, may be much better than similarly sized welfare improvements achieved by inventing a comfier cage for battery hens.
If that is your assertion, then I feel the post misrepresents your view as something much stronger (i.e., human-focused causes have significant positive impact that non-human animal-focused causes do not, and therefore human-focused causes are better). This is disingenuous and is what caused our negative reactions.
I’ve just re-read the post and I don’t think it misrepresents the view. But it is clearly the case that people reading it can come away with an erroneous impression, so something has gone wrong. Sorry about that.
“that many short-term welfare benefits to humans are likely to compound in a way that means that the size of the short-term benefit tracks the size of the long-term benefit.”
I think this is actually controversial in the EA community. My impression is that Eliezer Yudkowsky and Luke Muehlhauser would disagree with it, as would I. Others who support the view are likely to acknowledge that it’s non-obvious and could be mistaken. Many forms of short-term progress may increase long-term risks.
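To make the disagreement concrete, here is a minimal toy model in Python, with entirely made-up numbers (nothing here is drawn from real estimates): under a pure compounding assumption the short-term ranking does carry over to the long term, but even a modest difference in long-run risk can flip it.

```python
# Toy model with made-up numbers: does short-term benefit track long-term value?

def long_term_value(short_term_benefit, growth_rate, years, p_good_future=1.0):
    """Long-run value if the short-term benefit compounds at `growth_rate`
    per year, weighted by the probability that the long-run future goes well."""
    return p_good_future * short_term_benefit * (1 + growth_rate) ** years

# Pure compounding: doubling the short-term benefit doubles the long-term
# value, so ranking by short-term impact preserves the long-term ranking.
a = long_term_value(short_term_benefit=1.0, growth_rate=0.02, years=100)
b = long_term_value(short_term_benefit=2.0, growth_rate=0.02, years=100)
print(b / a)  # exactly 2.0

# If the larger short-term gain also raises long-run risk, the ranking can flip.
a_risky = long_term_value(1.0, 0.02, 100, p_good_future=0.95)
b_risky = long_term_value(2.0, 0.02, 100, p_good_future=0.40)
print(a_risky > b_risky)  # True: the bigger short-term win is no longer better
```

This is only a sketch of the two positions, not an estimate of anything; the point is that the “short-term tracks long-term” conclusion depends on the compounding assumption holding without offsetting changes in long-run risk.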
> Cost-effectiveness calculations are employed by every EA prioritization organization, and nobody is claiming they necessarily imply a higher priority.
Cost-effectiveness is one of the classic tools used in prioritisation, and at least in theory a higher level of cost-effectiveness should exactly imply higher priority. Now the issue is that we don’t trust our estimates, because they may omit important consequences that we have some awareness of, or track the wrong variables. But when people bring cost-effectiveness estimates up, there is often an implicit claim to priority (or one may be read in even if not intended).
“Cost-effectiveness is one of the classic tools used in prioritisation, and at least in theory a higher level of cost-effectiveness should exactly imply higher priority. Now the issue is that we don’t trust our estimates, because they may omit important consequences that we have some awareness of, or track the wrong variables.”
I totally agree.
“But when people bring cost-effectiveness estimates up, there is often an implicit claim to priority (or one may be read in even if not intended).”
I would agree with the point in parentheses, but often it’s just brought up as one factor among a multitude of decision-making criteria. And I think that’s a good place for it, at least until we get better at it.
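To illustrate why I’d keep it as one factor among several, here is a toy comparison, again with entirely hypothetical numbers and hypothetical intervention names: a naive cost-effectiveness ranking can reverse once an omitted indirect effect is counted.

```python
# Toy cost-effectiveness comparison with entirely hypothetical numbers,
# showing how an omitted indirect effect can reverse a naive ranking.

interventions = {
    "intervention_A": {"direct": 100.0, "indirect": 5.0},   # benefit units per $1,000
    "intervention_B": {"direct": 60.0,  "indirect": 80.0},
}

naive = {name: v["direct"] for name, v in interventions.items()}
adjusted = {name: v["direct"] + v["indirect"] for name, v in interventions.items()}

print(max(naive, key=naive.get))        # intervention_A looks best on direct effects alone
print(max(adjusted, key=adjusted.get))  # intervention_B wins once indirect effects count
```

Again, this is just a sketch: the numbers are invented, and the only point is that an estimate which omits consequences we are partly aware of shouldn’t, on its own, settle priority.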