Hi Owen :)
Animal welfare can be about more than promoting empathy. For one thing, it’s about promoting empathy for nonhumans, which is a somewhat different thing from promoting empathy wholesale (which usually means being nicer to other people). For another, animal welfare as a case study can raise a number of important ethical issues, such as naturalistic fallacies, welfare- vs. rights-based empathy, how far we think sentience extends, how to weigh minds of different complexities, population ethics, and lots more.
Also, animal welfare is quite sticky, which means it could be a good way to draw people into these issues and get them excited about them.
I agree that, e.g., veg outreach is not the very best way to help animals. I think talking explicitly about things like wild-animal suffering and digital sentients in the future can be better, which is why I focus on those. But veg outreach is probably not vastly worse, and it can be a good donation suggestion for mainstream donors who are weirded out by far-future ideas.
As far as: “I do think that optimising for long-term animal welfare is not the best place to stop in picking an instrumental goal, because it’s quite hard to see how things affect it.” I don’t agree with this, depending on how broadly we define “animal.” It seems likely to me that most of the sentience of the far future will reside in non-human-like creatures (robots, sentient subroutines, simulated insects, etc.), and most of the far-future-related things I write about are relevant to improving long-term “animal” welfare in that sense.
Thanks for the considered thoughts. :-)
I happen to think that promoting empathy wholesale is likely better than promoting animal welfare, but I guess I haven’t presented an argument for that. The conclusion I’d draw is that we should be able to identify some targets which are better by our own lights as instrumental goals than short/medium-run animal welfare. Promoting empathy for animals could be one such target.
I do see instrumental benefits to promoting animal welfare for its accessibility—though also instrumental harms. I’m not sure how these weigh against each other.
On optimising for long-term animal welfare: yes, changing societal views may have an effect on this, although I guess that the expected size of our influence there may be rather smaller than the expected size of our influence on whether there is a long-term society at all.
I happen to agree that promoting empathy (for animals) is probably better than promoting welfare directly, but a devil’s advocate might point out that beliefs often follow actions, and maybe directly changing people’s practices toward animals would be a more concrete way to change values.
I think whether there is a long-term society at all is relatively hard to change, except maybe in the case of AI risk. I think our expected influence through values is not obviously smaller and may be larger than our expected influence through whether there is a future, especially for non-mainstream values. This is doubly true if you’re a negative utilitarian, since for NUs there aren’t feasible ways to decrease the probability of a future ( http://foundational-research.org/publications/how-would-catastrophic-risks-affect-prospects-for-compromise/ ), and doing so isn’t nice to other value systems ( http://foundational-research.org/publications/reasons-to-be-nice-to-other-value-systems/ ), so you have to focus on improving the quality of the future. By the same token, it’s nicer for non-NUs to focus on improving the quality of the future (which is something NUs can support) than on making the future more likely (which is something NUs oppose).