My personal reasons favouring global health:
I'm sceptical of Rethink's moral weight numbers[1], and am more convinced of something closer to anchoring on neuron counts (and even more convinced by extreme uncertainty). This puts animal charities more like 10x ahead rather than 1000 or 1 million times. I'm also sceptical of very small animals (insects) having a meaningful probability/degree of sentience.
I am sceptical of suffering-focused utilitarianism[2], and am worried that animal welfare interventions tend to lean strongly in favour of things that reduce the number of animals, on the assumption that their lives are net negative. Examples of this sort of mindset include this, this, and this.
Not all of these actively claim the given animals' lives must be net negative, but I'm concerned about this being seen as obviously true and baked into the sorts of interventions that are pursued. I'm especially concerned about the idea that the question of whether animals' lives are net-negative is not relevant (see first linked comment), because the way in which it is relevant is that it favours preventing animals from coming into existence (this is more commonly supported than actively euthanising animals).
Farmed animals are currently the majority of mammal + bird biomass, and so ending the (factory) farming of animals is concomitant with reducing the total mammal + bird population[3] by >50%, and this is not something that I see talked about as potentially negative.
That said, if pushed I would still fairly strongly predict that farmed chickens' lives are net negative at least, which is why on net I support the pro-animal-welfare position.
I think something like worldview diversification is essentially a reasonable idea, for reasons of risk aversion and optimising under expected future information. The second is an explore/exploit tradeoff take (which often ends up looking suspiciously similar to risk aversion).
In the case where there is a lot of uncertainty on the relative value of different cause areas (not just in rough scale, but that things we think are positive EV could be neutral or very negative), it makes sense to hedge and put a few eggs into each basket so that you can pivot when new important information arises. It would be bad to, for instance, spend all your money euthanising all the fish on the planet and then later discover this was bad and that also there is a new much more effective anti-TB intervention.
Of course, this favours doing more research on everything more than it favours pouring a lot of exploit-oriented money into Global Health, but in practice I think some degree of trying to follow through on interventions is necessary to properly explore (plus you can throw in some other considerations like time preference/discount rates), and OpenPhil isn't spending money overall at a rate that implies reckless naive EV maximising (over-exploitation).
Some written-down ideas in this direction: We can do better than argmax, Tyranny of the Epistemic Majority, In defense of more research and reflection.
I believe something like "partiality shouldn't be a (completely) dirty word". When impartiality is taken to its extremes, most people accept some concessions to partiality. For instance, it's generally considered not a good strategic move to pressure people into giving so much of their income that they can't live comfortably, even though for a sufficiently motivated moral actor this would likely still be net positive. Most people also would not jump at the chance to be replaced by a species that has 10% higher welfare.
I think it's wrong to apply this logic only at the extremes, and there should be some consideration of what the market will bear when considering more middle-of-the-road sacrifices. For instance, a big factor in the cost-effectiveness of lead elimination is that it can be happily picked up by more mainstream funders.
(I realise a lot of these are not super well justified, I'm just trying to get the main points across.)
I'm planning to publish a post this week addressing one small part of this, although it's a pretty complicated topic so I don't expect this to get that far in justifying the position.
Not meant in a very technical sense, just as the idea that there is probably more suffering relative to positive wellbeing, or that it's easier to prevent it. Again, this is for reasons that are beyond the scope of this post. But two factors are:
1) I think common sense reasoning about the neutral point of experience is overly pessimistic
2) I am sceptical of the intensity of pain and pleasure being logarithmically distributed (severe pain ~100x worse than moderate pain), and especially of this being biased in the negative direction. One reason for this is that I find the "first story" for interpreting Weber's law in this post much more intuitive, i.e. that logarithmically distributed stimuli get compressed to a more linear range of experience.
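As a toy numerical illustration of the difference between these two readings (all numbers are invented purely for illustration, not taken from any study):

```python
import math

# Toy illustration of the two readings of Weber's law; all numbers are invented.
stimuli = [1, 10, 100, 1_000, 10_000]  # stimulus intensities spanning four orders of magnitude

# "First story": multiplicative steps in the stimulus map to roughly additive steps
# in experience, so experienced intensity is compressed to a near-linear range.
compressed_experience = [math.log10(s) for s in stimuli]  # 0, 1, 2, 3, 4

# Alternative reading: experienced intensity itself spans orders of magnitude,
# e.g. severe pain being ~100x as intense as moderate pain.
log_distributed_experience = list(stimuli)

print(compressed_experience)
print(log_distributed_experience)
```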
Weighted by biomass obviously. The question of actual moral value falls back to the moral weights issue above. A point of reference on the high-moral-weights-sceptical end of the spectrum is this table @Vasco Grilo compiled of aggregate neuron counts (although, as mentioned, I don't actually think neuron counts are likely to hold up in the long run).
Thanks for sharing your thoughts, Will!
I am sceptical of suffering-focused utilitarianism[2], and am worried that animal welfare interventions tend to lean strongly in favour of things that reduce the number of animals, on the assumption that their lives are net negative.
You could donate to organisations improving instead of decreasing the lives of animals. I estimated a past cost-effectiveness of Shrimp Welfare Project's Humane Slaughter Initiative (HSI) of 43.5 k times the marginal cost-effectiveness of GiveWell's top charities.
I'm sceptical of Rethink's moral weight numbers[1], and am more convinced of something closer to anchoring on neuron counts (and even more convinced by extreme uncertainty). This puts animal charities more like 10x ahead rather than 1000 or 1 million times. I'm also sceptical of very small animals (insects) having a meaningful probability/degree of sentience.
I agree with the last sentence. Using Rethink Priorities' welfare range for chickens based on neurons, I would conclude corporate campaigns for chicken welfare are 11.1 (= 1.51*10^3*0.00244/0.332) times as cost-effective as GiveWell's top charities.
Rethink Priorities' median welfare range for shrimps of 0.031 is 31 k (= 0.031/10^-6) times their welfare range based on neurons of 10^-6. For you to get to this super low welfare range, you would have to justify putting a very low weight on all the other 11 models considered by Rethink Priorities. In general, justifying a best guess so many orders of magnitude away from that coming out of the most in-depth research on the matter seems very hard.
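Spelled out, the arithmetic behind the two figures above is as follows (a rough sketch; the variable names are only informal labels for the quoted factors, not terminology from the underlying analyses):

```python
# Rough sketch of the arithmetic behind the two figures above.
# The variable names are informal labels for the quoted factors.

# Corporate campaigns for chicken welfare vs GiveWell's top charities.
ce_ratio_median_welfare_range = 1.51e3    # cost-effectiveness ratio using RP's median welfare range
chicken_welfare_range_neurons = 0.00244   # chicken welfare range based on neuron counts
chicken_welfare_range_median = 0.332      # RP's median welfare range for chickens

ce_ratio_neuron_welfare_range = (
    ce_ratio_median_welfare_range * chicken_welfare_range_neurons / chicken_welfare_range_median
)
print(round(ce_ratio_neuron_welfare_range, 1))  # 11.1

# Shrimp: RP's median welfare range vs the neuron-count-based welfare range.
shrimp_welfare_range_median = 0.031
shrimp_welfare_range_neurons = 1e-6
print(round(shrimp_welfare_range_median / shrimp_welfare_range_neurons))  # 31,000
```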
2) I am sceptical of the intensity of pain and pleasure being logarithmically distributed (severe pain ~100x worse than moderate pain), and especially of this being biased in the negative direction. One reason for this is that I find the "first story" for interpreting Weber's law in this post much more intuitive, i.e. that logarithmically distributed stimuli get compressed to a more linear range of experience
Assuming in my cost-effectiveness analysis of HSI that disabling and excruciating pain are as intense as hurtful pain (setting B2 and B3 of tab "Types of pain" to 1), and maintaining the other assumptions, 1 day of e.g. "scalding and severe burning" would be neutralised by 1 day of fully healthy life. I think this massively underestimates the badness of severe suffering. Yet, even then, I conclude the past cost-effectiveness of HSI is 2.17 times the marginal cost-effectiveness of GiveWell's top charities.
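As a back-of-the-envelope way of seeing how much of the headline estimate rides on those intensity weights (this only divides the two quoted figures; it is not a recomputation of the spreadsheet):

```python
# Back-of-the-envelope implication of the two figures quoted above; not a
# recomputation of the HSI cost-effectiveness analysis itself.
ce_ratio_original_weights = 43.5e3  # HSI vs GiveWell's top charities, original pain-intensity weights
ce_ratio_flat_weights = 2.17        # same analysis with disabling/excruciating pain weighted like hurtful pain

implied_factor = ce_ratio_original_weights / ce_ratio_flat_weights
print(round(implied_factor))  # ~20,000: the factor by which the severe-pain weights scale up the estimate
```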
I think something like worldview diversification is essentially a reasonable idea, for reasons of risk aversion and optimising under expected future information.
Farmed animals are neglected, so I do not think worldview diversification would be at risk due to moving 100 M$ to animal welfare instead of global health and development. I calculated that 99.9 % of annual philanthropic spending is on humans.
In contrast, based on Rethink Priorities' median welfare ranges, the annual disability of farmed animals is much larger than that of humans.
I agree one should not put all resources into the best option, but we are very far from this (see 1st graph above).
Thanks Vasco, I did vote for animal welfare, so on net I agree with most of your points. On some specific things:
You could donate to organisations improving instead of decreasing the lives of animals
This seems right, and is why I support chicken corporate campaigns which tend to increase welfare. Some reasons this is not quite satisfactory:
It feels a bit like a "helping slaves to live happier lives" intervention rather than "freeing the slaves"
I'm overall uncertain about whether animals' lives are generally net positive, rather than strongly thinking they are
I'd still be worried about donations to these things generally growing the AW ecosystem as a side effect (e.g. due to fungibility of donations, training up people who then do work with more suffering-focused assumptions)
But these are just concerns and not deal breakers.
Rethink Priorities' median welfare range for shrimps of 0.031 is 31 k (= 0.031/10^-6) times their welfare range based on neurons of 10^-6. For you to get to this super low welfare range, you would have to justify putting a very low weight on all the other 11 models considered by Rethink Priorities.
I am sufficiently sceptical to put a low weight on the other 11 models (or at least withhold judgement until I've thought it through more). As I mentioned, I'm writing a post I'm hoping to publish this week with at least one argument related to this.
The gist of that post will be: it's double counting to consider the 11 other models as separate lines of evidence, and similarly double counting to consider all the individual proxies (e.g. "anxiety-like behaviour" and "fear-like behaviour") as independent evidence within the models.
Many of the proxies (I claim most) collapse to the single factor of "does it behave as though it contains some kind of reinforcement learning system?". This itself may be predictive of sentience, because this is true of humans, but I consider this to be more like one factor, rather than many independent lines of evidence that are counted strongly under many different models.
Because of this (a lot of the proxies looking like side effects of some kind of reinforcement learning system), I would expect we will continue to see these proxies as we look at smaller and smaller animals, and this wouldn't be a big update. I would expect that if you look at a nematode worm, for instance, it might show:
"Taste-aversion behaviour": Moving away from a noxious stimulus, or learning that a particular location contains a noxious stimulus
"Depression-like behaviour": Giving up/putting less energy into exploring after repeatedly failing
"Anxiety-like behaviour": Being put on edge or moving more quickly if you expose it to a stimulus which has previously preceded some kind of punishment
"Curiosity-like behaviour": Exploring things even when it has some clearly exploitable resource
It might not show all of these (maybe a nematode is in fact too small, I don't know much about them), but hopefully you get the point that these look like manifestations of the same underlying thing, such that observing more of them becomes weak evidence once you have seen a few.
Even if you didn't accept that they were all exactly side effects of "a reinforcement learning type system" (which seems reasonable), I still believe that this idea of there being common explanatory factors for different proxies, factors which are not necessarily sentience-related, should be factored in.
(RP's model does do some non-linear weighting of proxies at various points, but not exactly accounting for this thing… hopefully my longer post will address this.)
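To make the double-counting worry concrete, here is a toy Bayesian sketch (every number is invented for illustration; this is not RP's actual model): if the proxies are treated as conditionally independent evidence, each one multiplies the likelihood ratio, whereas if they are mostly downstream of one "behaves like a reinforcement learning system" factor, the whole bundle is roughly a single update.

```python
# Toy Bayesian sketch of the double-counting worry. All numbers are invented
# purely for illustration; this is not RP's model.

prior_sentient = 0.2          # invented prior that the animal is sentient
n_proxies = 8                 # e.g. anxiety-like, fear-like, taste-aversion behaviour, ...

# Treating each proxy as independent evidence: each one is, say, 3x more likely
# to be observed if the animal is sentient than if it is not.
lr_per_proxy = 3.0
posterior_odds_independent = (prior_sentient / (1 - prior_sentient)) * lr_per_proxy ** n_proxies

# Treating the proxies as side effects of one underlying factor ("behaves like a
# reinforcement-learning system"): once that factor is observed, extra proxies add
# little, so the whole bundle is roughly one 3x update.
lr_common_factor = 3.0
posterior_odds_common_cause = (prior_sentient / (1 - prior_sentient)) * lr_common_factor

to_prob = lambda odds: odds / (1 + odds)
print(f"independent proxies: P(sentient) ~ {to_prob(posterior_odds_independent):.3f}")
print(f"one common factor:   P(sentient) ~ {to_prob(posterior_odds_common_cause):.3f}")
```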
On the side of neuron counts, I don't think this is particularly strong evidence either. But I see it as evidence on the side of a factor like "their brain looks structurally similar to a human's", vs the factor of "they behave somewhat similarly to a human", for which the proxies are evidence.
To me neither of these lines of evidence ("brain structural similarity" and "behavioural similarity") seems obviously deserving of more weight.
Farmed animals are neglected, so I do not think worldview diversification would be at risk due to moving 100 M$ to animal welfare
I definitely agree with this, I would only be concerned if we moved almost all funding to animal welfare.
I'd still be worried about donations to these things generally growing the AW ecosystem as a side effect (e.g. due to fungibility of donations, training up people who then do work with more suffering-focused assumptions)
Without more information, I would guess that funding work on improving rather than decreasing animal lives will, at the margin, incentivise people to follow the funding, and therefore skill up to work on improving rather than decreasing animal lives.
I am sufficiently sceptical to put a low weight on the other 11 models (or at least withhold judgement until I've thought it through more). As I mentioned, I'm writing a post I'm hoping to publish this week with at least one argument related to this.
I am looking forward to the post. Thanks for sharing the gist and some details. You may want to share a draft with people from Rethink Priorities.
To me neither of these lines of evidence ("brain structural similarity" and "behavioural similarity") seems obviously deserving of more weight.
I find it hard to come up with other proxies.
Farmed animals are neglected, so I do not think worldview diversification would be at risk due to moving 100 M$ to animal welfare instead of global health and development. I calculated that 99.9 % of annual philanthropic spending is on humans.
I think it would be more appropriate to use something like human welfare spending for low-income countries rather than counting ~all charitable activity as in a broad "human" bucket. That is to maintain parity with the way you've sliced off a particularly effective part of the animal-welfare pie (farmed animal welfare). E.g., some quick Google work suggests animal shelters brought in $3.5B in 2023 in just the US (although a fair portion of that may be government contracts).
Companion animal shelters may be the animal-welfare equivalent of opera for human-focused charities (spending lots on relatively few individuals who are relatively privileged in a sense). While deciding not to give to farmed-animal charities because of dog shelter spending doesn't make much sense, I would submit that not giving to bednets because of opera spending poses much the same problem.
I don't think that changes your underlying point much at all, though!
Thanks Jason, I would say that giving to animal shelters might be more like giving to the cancer society, or even World Vision, rather than opera, but that's a fairly minor point.
Farmed animals are currently the majority of mammal + bird biomass, and so ending the (factory) farming of animals is concomitant with reducing the total mammal + bird population[3] by >50%, and this is not something that I see talked about as potentially negative.
Presumably counterfactual reductions in animal agriculture result in counterfactual reductions in land use for agriculture, and so counterfactual increases in wild habitat, allowing more wild animals to be born and live. Animal agriculture is responsible for a disproportionate share of land use.
Source: https://ourworldindata.org/global-land-for-agriculture
As someone suffering-focused, I see this as reason to not work on diet change and reducing animal agriculture, because increasing wild animal populations seems bad. I mostly support welfare reforms and reducing the use of very small animals in particular.
I was assuming that a reduction in agriculture would result in an overall reduction in the biomass (and "neuron count"[1]) of birds and mammals, because:
Currently the biomass of farmed birds + mammals is about 10x that of wild birds + mammals (source, not sure how marine mammals are counted but only the ballpark is needed), and this is only using 45% of the habitable land as you say
Logically it makes sense that farming aims to efficiently convert land area into animal biomass, and has more of a top down ability to achieve this than nature does. The animals that are most widely farmed are partly chosen for having food chains of only one step, and not needing to run around a lot expending energy.
A point against this is that animals are slaughtered earlier than their natural lifespan, which would result in fewer days experienced per unit of feed input. But given the numbers above I donât think this is an offsetting factor
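As a rough sketch of the arithmetic (the ~10x ratio is the ballpark figure above; the rebound in wild populations after farming ends is an invented illustrative assumption):

```python
# Rough sketch. The ~10x farmed-to-wild ratio is the ballpark figure above;
# the rebound factor for wild populations is an invented illustrative assumption.
wild_biomass = 1.0                    # wild bird + mammal biomass, arbitrary units
farmed_biomass = 10.0 * wild_biomass  # farmed birds + mammals ~10x wild (ballpark)

total_now = wild_biomass + farmed_biomass

rebound = 2.0                         # suppose wild populations double on the freed-up land
total_after_farming_ends = wild_biomass * rebound

print(f"change in total bird + mammal biomass: {total_after_farming_ends / total_now - 1:.0%}")  # ~ -82%
```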
Of course by number of individuals it would go the other way, which is presumably why you are concerned about reducing farming from a suffering-focused perspective. So I think this comes back to the issue of moral weights for small animals (as usual).
...
I'm now trying to inhabit a position that I don't exactly believe, but that is interesting and that I do find somewhat persuasive.
From a bigger picture perspective, you can imagine someone trying to derive the optimal arrangement of civilisation according to hedonic utilitarianism, where they accept something closer to my end of the suffering-focused and logarithmic-intensity axes[2]. Suppose they have a good model of evolutionary theory and economics but lack the details of how life on earth currently looks.
They might think something like the following: "In order to expect any kind of top down control over the outcome you need an intelligent species + culture that can coordinate over the use of large areas (geographical, or in whatever relevant space). This species will probably be high maintenance, because they will need to have had very complex and demanding needs and wants in order for them to develop the necessary culture in the first place.
The ideal scenario would be for a relatively small population of this high maintenance species to act as stewards for a much larger population of creatures that are very low maintenance, in order to achieve a high total utility with the resources available. These low maintenance creatures should be chosen to not require a lot of energy, be easily satisfied with simple and cheap pleasures, and have simple social structures such that you can scale the number of individuals without too many side effects.
Of course, this is just a pipe dream, because this would require the advanced species to have some kind of intrinsic preference for stewarding this larger population. Among the set of all goals it seems unlikely they would have this specific one".
If you look at the actual world it is quite striking how close it is to this vision. Humanity does maintain large populations of low maintenance animals, using a large proportion of the resources that are available to do so, at minimum economic cost. The difference is that we currently torture them.
If you were to accept the vision above, it looks like an easier move from "maintaining large population and torturing them" to "maintaining large population and trying to give them happy lives", than it is from "large population + torturing them" to "90% smaller population of domesticated animals" to "later maybe we make the population large again for morally motivated reasons".
...
Anyway, sorry for getting on a tangent from directly replying to your comment, but this long-term picture is the thing that makes me actually uneasy about going hard on interventions to end factory farming. That is, on the margin currently I'm pretty happy with a best guess of it being positive expected value to reduce the amount of animal farming, but would be more hesitant about ending farming overnight because of the potential for irreversible effects.
I would expect that if non-animal protein sources become clearly superior to animal sources, then this would result in a very rapid collapse in the number of farmed animals, and that once this has happened it could be a lot harder to move towards the "high population, high welfare" world (because we would start using all the land for something else, and the idea of using a large fraction of the land on earth for managed populations of animals would come to be seen as weird).
I think it's not widely conceptualised that potentially "PTC-dominant alternative protein → >50%[3] collapse in the welfare-range-weighted population of creatures within 10 years".
Used as a stand-in for some more accurate proxy for sentience, but which scales predominantly with brain size/complexity rather than number of individuals
I.e. they think extremely bad experiences are not orders of magnitude worse than simply quite bad experiences
Using ">50%" as a stand-in for "a quite surprising amount of the total fraction" and welfare-range-weighted as a stand-in for "weighted by the delta in welfare that humans could reasonably expect to achieve with some degree of confidence (e.g. without it being in animals that are so different from humans that their sentience is highly questionable)"