Third, it's easy for an org to think it's helping when it's hurting: we know so little about how to help that some caution is warranted. I don't just want to do good in expectation: I want to do good.
Great post! :)
FWIW, I think this would count against most animal interventions targeting vertebrates (welfare reforms, reductions in production), and possibly lead to paralysis pretty generally, and not just for animal advocates.
If we give extra weight to net harm over net benefits compared to inaction, as in typical difference-making views, I think most animal interventions targeting vertebrates will look worse than doing nothing, considering only the effects on Earth or in the next 20 years, say. This is because:
there are possibly far larger effects on wild invertebrates (even just wild insects and shrimp, but of course also mites, springtails, nematodes, and copepods) through land use change and effects on fishing, and huge net harm is possible through harming them, and
there's usually at least around as much reason to expect large net harm to wild animals as there is to expect large net benefit to them, and difference-making gives more weight to the former, so it will dominate.
There could be similar stories for the far future and acausally, replacing wild animals on Earth with far future moral patients and aliens. There are also possibilities and effects of which we're totally unaware.
That being said, I suspect typical accounts of difference-making lead to paralysis pretty generally for similar reasons. This isn't just a problem for animal interventions. I discussed this and proposed some alternative accounts here.
Bracketing can also sometimes help. It's an attempt to formalize the idea that when we're clueless about whether some group of moral patients is made better off or worse off, we can just ignore them and focus on those we are clueful about.
I like the idea of bracketing, but I feel like we're never completely clueless when it comes to animal welfare: there's always a "chance" of sentience, right? I don't see how it could help here.
Is there then some kind of probability threshold we should consider close-to-clueless and bracket out?
Also, it's easier for me, since I'm pretty happy right now to assume that insects and anything smaller aren't important in welfare calculations. But if you do give extra weight to harm in calculations, and you do think insects have a non-negligible chance of pain, I agree with MichaelStJules that that's bound to lead to a lot of inaction.
As a side note, I think we all want to do good, not just good in expectation, but "good in expectation" is the best we can do with limited knowledge, right?
Thanks, Nick. A few quick thoughts:
It's reasonable to think there are important differences between at least some insects and some of the smaller organisms under discussion on the Forum, like nematodes. See, e.g., this new paper by Klein and Barron.
I don't necessarily want to give extra weight to net harm, as Michael suggested. My primary concern is to avoid getting mugged. Some people think caring about insects already counts as getting mugged. I take that concern seriously, but don't think it carries the day.
I'm generally skeptical of Forum-style EV maximization, which involves a lot of hastily-built models with outputs that are highly sensitive to speculative inputs. When I push back against EV maximization, I'm really pushing back against EV maximization as practiced around here, not as the in-principle correct account of decision-making under uncertainty. And when I say that I'm into doing good vs. doing good in expectation, that's a way of insisting, "I am not going to let highly contentious debates in decision theory and normative ethics, which we will never settle and on which we will all change our minds a thousand times if we're being intellectually honest, derail me from doing the good that's in front of me." You can disagree with me about whether the "good" in front of me is actually good. But as this post argues, I'm not as far from common sense as some might think.
FWIW, my general orientation to most of the debates about these kinds of theoretical issues is that they should nudge your thinking but not drive it. What should drive your thinking is just: "Suffering is bad. Do something about it." So, yes, the numbers count. Yes, update your strategy based on the odds of making a difference. Yes, care about the counterfactual and, all else equal, put your efforts in the places that others ignore. But for most people in most circumstances, they should look at their opportunity set, choose the best thing they think they can sweat and bleed over for years, and then get to work. Don't worry too much about whether you've chosen the optimal cause, whether you're vulnerable to complex cluelessness, or whether one of your several stated reasons for action might lead to paralysis, because the consensus on all these issues will change 300 times over the course of a few years.
Nice one, that's excellent; I agree with all of that.
To clarify, I think a lot of Forum EV calculation in the global health space (not necessarily maximization) is pretty reasonable, and we don't see the wild swings you speak of.
But yeah, naive maximization based on hugely uncertain calculations, which might tell us stopping factory farming is good one day and bad the next: I don't take that seriously.
Hi Nick.
I wonder whether people are sufficiently thinking at the margin when they make the above criticism. I assume good algorithmic trading models could easily say one should invest more in a company on one day, and less the next. This does not mean the models are flawed. It could simply mean there is uncertainty about the optimal amount to invest. It would not make sense to frequently shift a large fraction of the resources spent on stopping factory-farming from day to day. However, frequent large shifts do not follow from a model suggesting there should be more factory-farming on one day, and less the next. What follows from a post like mine on the impact of factory-farming accounting for soil animals are tiny shifts in the overall portfolio. I have some related thoughts here.
In addition, I think the takeaway from being very uncertain about whether factory-farming increases or decreases animal welfare is that stopping it is not something that robustly increases welfare. I recommend research on the welfare of soil animals in different biomes over pursuing whatever land use change interventions naively look the most cost-effective.
If we're not something like robustly certain that stopping factory farming increases animal welfare, then we're not robustly certain anything increases animal welfare.
Trading can happen second to second. Real work on real issues requires years of planning and many years of carrying out. I don't think it's wise to get too distracted mid-action unless there is pretty overwhelming evidence that what you are doing is probably bad, or that there's a waaaay better thing to do instead. Making "tiny shifts" in a charity portfolio isn't super practical. Rather, when we are fairly confident that there's something better to be done, I think we slowly and carefully make a shift. And being "fairly confident" is tricky.
I think you meant "stopping factory-farming". I would say research on the welfare of soil animals has a much lower risk of decreasing welfare in expectation.
Here is how I think about this.
I do not know what you mean by this. However, what I meant is that it makes sense to recommend Y over X if Y is more cost-effective at the margin than X, and the recommendation is not expected to change the marginal cost-effectiveness of X and Y much as a result of changes in their funding caused by the recommendation (which I believe applies to my post).
Yes, I missed the word "stopping"!
Yes, we can always do research, for sure; that's great. I was considering direct work, though, not including research.
In terms of direct work, I think interventions with smaller effects on soil animals as a fraction of those on the target beneficiaries have a lower risk of decreasing animal welfare in expectation. For example, I believe cage-free corporate campaigns have a lower risk of decreasing animal welfare in expectation than decreasing the consumption of chicken meat. For my preferred way of comparing welfare across species (where individual welfare per animal-year is proportional to "number of neurons"^0.5), I estimate decreasing the consumption of chicken meat changes the welfare of soil ants, termites, springtails, mites, and nematodes 83.7 k times as much as it increases the welfare of chickens, whereas I calculate cage-free corporate campaigns change the welfare of such soil animals 1.15 k times as much as they increase the welfare of chickens. On the other hand, in practice, I expect the effects on soil animals to be sufficiently large in both cases for me to be basically agnostic about whether they increase or decrease welfare in expectation.
Hi Bob.
Does it make sense to be concerned about being mugged by a probability of sentience of, for example, 1 %, which I would guess is lower than that of nematodes? The risk of death due to driving a car in the United Kingdom (UK) is something like 2.48*10^-7 per 100 km, but people there do not feel mugged by some spending on road safety. I think not considering abundant animals with a probability of sentience of 1 % is more accurately described as neglecting a very serious risk, not as being mugged. I understand your concern is that the probability of sentience of 1 % is not robust, but I believe one should still not neglect it. I see the lack of robustness as a reason for further research.
My sense is that if you're weighing nematodes, you should also consider things like conscious subsystems or experience sizes that could tell you larger-brained animals have thousands or millions of times more valenced experiences or more valence at a time per individual organism. For example, if a nematode realizes some valence-generating function (or indicator) once with its ~302 neurons, how many times could a chicken brain, with ~200 million neurons, separately realize a similar function? What about a cow brain, with 3 billion neurons?
Taking expected values over those hypotheses and different possible scaling law hypotheses tends, on credences I find plausible, to lead to expected moral weights scaling roughly proportionally with the number of neurons (see the illustration in the conscious subsystems post). But nematodes (and other wild invertebrates) could still matter a lot even on proportional weighing, e.g. as you found here.
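The mechanics of taking an expectation over scaling hypotheses can be sketched as follows. The particular hypotheses and credences below are purely illustrative assumptions, not anyone's actual numbers:

```python
# Illustrative sketch: expected moral weight of an animal relative to a human,
# taken over a mixture of neuron-count scaling hypotheses. All credences and
# hypotheses here are made-up placeholders for the sake of the mechanics.

HUMAN_NEURONS = 86e9  # roughly 86 billion neurons in a human brain

# Each hypothesis maps a neuron count to a moral weight relative to a human.
hypotheses = {
    "equal weight (neuron count irrelevant)": lambda n: 1.0,
    "sqrt scaling": lambda n: (n / HUMAN_NEURONS) ** 0.5,
    "proportional scaling (e.g. conscious subsystems)": lambda n: n / HUMAN_NEURONS,
}

# Illustrative credences over the hypotheses (sum to 1).
credences = {
    "equal weight (neuron count irrelevant)": 0.1,
    "sqrt scaling": 0.3,
    "proportional scaling (e.g. conscious subsystems)": 0.6,
}

def expected_weight(neurons: float) -> float:
    """Expected moral weight relative to a human, over the hypotheses."""
    return sum(credences[name] * h(neurons) for name, h in hypotheses.items())

for animal, neurons in [("nematode", 302), ("chicken", 2e8), ("cow", 3e9)]:
    print(f"{animal}: expected weight ~ {expected_weight(neurons):.3g}")
```

Note that with these toy numbers the result is sensitive to any credence on the "equal weight" hypothesis, which dominates for tiny-brained animals; the point of the sketch is only how the expectation over hypotheses is computed, not what the right credences are.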
Thanks, Michael.
These numbers are already compatible with individual welfare per animal-year proportional to "number of neurons"^0.5, which has been my speculative best guess. This suggests 1 fully happy human-year has 18.9 k (= 1/(5.28*10^-5)) times as much welfare as 1 fully happy soil-nematode-year.
I have also been updating towards a view closer to this. I wonder whether it implies prioritising microorganisms (relatedly). There are 3*10^29 soil archaea and bacteria, 613 M (= 3*10^29/(4.89*10^20)) times as many as soil nematodes.
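As a quick sanity check on the two ratios just quoted, using only the figures given above:

```python
# Back-of-the-envelope check of the ratios quoted above. All inputs are
# figures taken from the comment itself; treat them as assumptions.

welfare_ratio_nematode_per_human = 5.28e-5  # per animal-year, "neurons"^0.5 view
human_years_per_nematode_year = 1 / welfare_ratio_nematode_per_human
print(f"{human_years_per_nematode_year:.3g}")  # -> 1.89e+04, i.e. 18.9 k

soil_prokaryotes = 3e29   # soil archaea and bacteria
soil_nematodes = 4.89e20
print(f"{soil_prokaryotes / soil_nematodes:.3g}")  # -> 6.13e+08, i.e. 613 M
```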
As a side note, what I do not find reasonable is individual welfare per animal-year being proportional to 2^"number of neurons".
Agreed. In addition to the estimates in that section for the effects on soil animals as a fraction of those on the target beneficiaries, I have some for the total welfare of animal populations. For individual welfare per animal-year proportional to the number of neurons, I estimate the absolute value of the total welfare of soil nematodes is 47.6 times that of humans.
As I just commented, I like this point for understanding your general orientation better, but I do not seem to agree with the sentiment about the impact of moral views on cause prioritisation. It makes sense to have 4 years with an impact of 0 throughout a career of 44 years if doing so increases the impact of the remaining 40 years (= 44 - 4) by more than 10 % (= 4/40). In this case, the impact would not be 0 "in most circumstances" (40/44 = 90.9 % > 50 %). So I very much agree with a literal interpretation of the above. However, I feel like it conveys that moral views and cause prioritisation are less important than they actually are.
Ya, bracketing on its own wouldn't tell you to ignore a potential group of moral patients just because its probability of sentience is very small. The numbers could compensate. It's more that, conditional on sentience, we'd have to be clueless about whether they're made better or worse off. And we may often be in this position in practice.
I think you could still want some kind of difference-making view or bounded utility function used with bracketing, so that you can discount extreme overall downsides more than proportionally to their probability, along with extreme upsides. Or do something like Nicolausian discounting, i.e. ignoring small probabilities.
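A toy illustration of Nicolausian discounting: drop outcomes whose probability falls below a threshold before taking the expectation, then renormalize over what remains. The threshold and the lottery below are made-up numbers, chosen only to show how a tiny-probability extreme downside can dominate a standard expected value:

```python
# Minimal sketch of Nicolausian discounting (ignoring small probabilities).
# The threshold and the toy lottery are illustrative assumptions.

def expected_value(lottery):
    """Standard expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in lottery)

def discounted_ev(lottery, threshold=1e-6):
    """Expected value ignoring outcomes with probability below threshold,
    renormalizing over the outcomes that remain."""
    kept = [(p, v) for p, v in lottery if p >= threshold]
    if not kept:
        return 0.0  # everything was discounted away
    total = sum(p for p, _ in kept)
    return sum(p * v for p, v in kept) / total

# A toy lottery with a tiny-probability extreme downside (a "mugging").
lottery = [(0.899999999, 1.0), (0.1, -2.0), (1e-9, -1e12)]
print(expected_value(lottery))  # dominated by the 1e-9 outcome (about -999.3)
print(discounted_ev(lottery))   # the extreme outcome is ignored (about 0.7)
```

The same mechanism symmetrically ignores tiny-probability extreme upsides, which is the point of pairing it with bracketing or a bounded utility function.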
Thanks, Michael. I'm quite sympathetic to the idea of bracketing!