Thanks, Nick. A few quick thoughts:
It’s reasonable to think there are important differences between at least some insects and some of the smaller organisms under discussion on the Forum, like nematodes. See, e.g., this new paper by Klein and Barron.
I don’t necessarily want to give extra weight to net harm, as Michael suggested. My primary concern is to avoid getting mugged. Some people think caring about insects already counts as getting mugged. I take that concern seriously, but don’t think it carries the day.
I’m generally skeptical of Forum-style EV maximization, which involves a lot of hastily-built models with outputs that are highly sensitive to speculative inputs. When I push back against EV maximization, I’m really pushing back against EV maximization as practiced around here, not as the in-principle correct account of decision-making under uncertainty. And when I say that I’m into doing good vs. doing good in expectation, that’s a way of insisting, “I am not going to let highly contentious debates in decision theory and normative ethics, which we will never settle and on which we will all change our minds a thousand times if we’re being intellectually honest, derail me from doing the good that’s in front of me.” You can disagree with me about whether the “good” in front of me is actually good. But as this post argues, I’m not as far from common sense as some might think.
FWIW, my general orientation to most of the debates about these kinds of theoretical issues is that they should nudge your thinking but not drive it. What should drive your thinking is just: “Suffering is bad. Do something about it.” So, yes, the numbers count. Yes, update your strategy based on the odds of making a difference. Yes, care about the counterfactual and, all else equal, put your efforts in the places that others ignore. But for most people in most circumstances, they should look at their opportunity set, choose the best thing they think they can sweat and bleed over for years, and then get to work. Don’t worry too much about whether you’ve chosen the optimal cause, whether you’re vulnerable to complex cluelessness, or whether one of your several stated reasons for action might lead to paralysis, because the consensus on all these issues will change 300 times over the course of a few years.
Nice one, that's excellent. I agree with all of that.
To clarify, I think a lot of Forum EV calculation in the global health space (not necessarily maximization) is pretty reasonable, and we don't see the wild swings you speak of.
But yeah, naive maximization based on hugely uncertain calculations, which might tell us stopping factory farming is good one day and bad the next, I don't take that seriously.
Hi Nick.
I wonder whether people are sufficiently thinking at the margin when they make the above criticism. I assume good algorithmic trading models could easily say one should invest more in a company one day, and less the next. This does not mean the models are flawed; it could simply mean there is uncertainty about the optimal amount to invest. It would not make sense to frequently shift a large fraction of the resources spent on stopping factory-farming from day to day. However, that does not follow from someone arguing there should be more factory-farming one day, and less the next. What follows from a post like mine on the impact of factory-farming accounting for soil animals is tiny shifts in the overall portfolio. I have some related thoughts here.
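As a hedged illustration of this marginal-thinking point (all the numbers below are made up, not figures from the thread): daily point estimates of a small effect can flip sign constantly, while a Bayesian posterior over the effect, and hence any sensible allocation based on it, barely moves from day to day once evidence has accumulated.

```python
import random

random.seed(0)

true_effect = 0.1   # small true net benefit (arbitrary units)
noise_sd = 1.0      # daily estimates are dominated by noise

# Conjugate normal-normal updating of our belief about the effect.
post_mean, post_var = 0.0, 1.0
obs_var = noise_sd ** 2

sign_flips = 0
late_shifts = []    # day-to-day movement of the posterior mean, days 100+
prev_sign = None

for day in range(200):
    estimate = random.gauss(true_effect, noise_sd)  # noisy daily EV estimate
    sign = estimate > 0
    if prev_sign is not None and sign != prev_sign:
        sign_flips += 1
    prev_sign = sign

    # Standard conjugate update for a normal likelihood with known variance.
    new_var = 1.0 / (1.0 / post_var + 1.0 / obs_var)
    new_mean = new_var * (post_mean / post_var + estimate / obs_var)
    if day >= 100:
        late_shifts.append(abs(new_mean - post_mean))
    post_mean, post_var = new_mean, new_var

# The raw estimates flip sign dozens of times, but once evidence has
# accumulated, each new noisy estimate moves the posterior only slightly.
print(sign_flips, max(late_shifts))
```

The sign of the daily estimate is the "factory farming is good one day, bad the next" phenomenon; the smallness of the late-stage posterior shifts is the "tiny shifts in the overall portfolio" claim.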
In addition, I think the takeaway from being very uncertain about whether factory-farming increases or decreases animal welfare is that stopping it is not something that robustly increases welfare. I recommend research on the welfare of soil animals in different biomes over pursuing whatever land use change interventions naively look the most cost-effective.
If we’re not something like robustly certain that stopping factory farming increases animal welfare then we’re not robustly certain anything increases animal welfare.
Trading can happen second to second. Real work on real issues requires years of planning and many years of carrying out. I don't think it's wise to get too distracted mid-action unless there is pretty overwhelming evidence that what you are doing is probably bad, or that there's a waaay better thing to do instead. Making "tiny shifts" in a charity portfolio isn't super practical. Rather, when we are fairly confident that there's something better to be done, I think we slowly and carefully make a shift. And being "fairly confident" is tricky.
I think you meant “stopping factory-farming”. I would say research on the welfare of soil animals has a much lower risk of decreasing welfare in expectation.
Here is how I think about this.
I do not know what you mean by this. However, what I meant is that it makes sense to recommend Y over X if Y is more cost-effective at the margin than X, and the recommendation is not expected to change the marginal cost-effectiveness of X and Y much as a result of changes in their funding caused by the recommendation (which I believe applies to my post).
Yes, I missed the word "stopping"!
Yes, we can always do research, for sure, that's great. I was considering direct work though, not including research.
In terms of direct work, I think interventions with smaller effects on soil animals as a fraction of those on the target beneficiaries have a lower risk of decreasing animal welfare in expectation. For example, I believe cage-free corporate campaigns have a lower risk of decreasing animal welfare in expectation than decreasing the consumption of chicken meat. For my preferred way of comparing welfare across species (where individual welfare per animal-year is proportional to “number of neurons”^0.5), I estimate decreasing the consumption of chicken meat changes the welfare of soil ants, termites, springtails, mites, and nematodes 83.7 k times as much as it increases the welfare of chickens, whereas I calculate cage-free corporate campaigns change the welfare of such soil animals 1.15 k times as much as they increase the welfare of chickens. On the other hand, in practice, I expect the effects on soil animals to be sufficiently large in both cases for me to be basically agnostic about whether they increase or decrease welfare in expectation.
Hi Bob.
Does it make sense to be concerned about being mugged by a probability of sentience of, for example, 1 %, which I would guess is lower than that of nematodes? The risk of death due to driving a car in the United Kingdom (UK) is something like 2.48*10^-7 per 100 km, but people there do not feel mugged by some spending on road safety. I think not considering abundant animals with a probability of sentience of 1 % is more accurately described as neglecting a very serious risk, not as being mugged. I understand your concern is that the probability of sentience of 1 % is not robust, but I believe one should still not neglect it. I see the lack of robustness as a reason for further research.
My sense is that if you’re weighing nematodes, you should also consider things like conscious subsystems or experience sizes that could tell you larger-brained animals have thousands or millions of times more valenced experiences or more valence at a time per individual organism. For example, if a nematode realizes some valence-generating function (or indicator) once with its ~302 neurons, how many times could a chicken brain, with ~200 million neurons, separately realize a similar function? What about a cow brain, with 3 billion neurons?
Taking expected values over those hypotheses and different possible scaling law hypotheses tends, on credences I find plausible, to lead to expected moral weights scaling roughly proportionally with the number of neurons (see the illustration in the conscious subsystems post). But nematodes (and other wild invertebrates) could still matter a lot even on proportional weighing, e.g. as you found here.
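A minimal sketch of why the expectation tends toward proportional scaling (the credences and neuron counts here are placeholder assumptions of mine, not figures from the conscious subsystems post): under a mixture of hypotheses where individual welfare scales with neurons^k, the linear (k = 1) term dominates the expected weight for large-brained animals.

```python
# Placeholder neuron counts, roughly matching the figures in the comment.
neurons = {"nematode": 302, "chicken": 2 * 10**8, "cow": 3 * 10**9}

# Hypothetical credences that individual welfare scales with neurons**k.
credences = {0.0: 0.3, 0.5: 0.4, 1.0: 0.3}

def expected_weight(n, baseline=302):
    """Expected moral weight relative to a nematode, mixing hypotheses."""
    return sum(p * (n / baseline) ** k for k, p in credences.items())

chicken_weight = expected_weight(neurons["chicken"])
# Contribution of the linear (k = 1) hypothesis alone.
linear_term = credences[1.0] * neurons["chicken"] / neurons["nematode"]

# For large brains, the expected weight is almost entirely the linear term,
# so expected weights end up roughly proportional to neuron counts.
print(chicken_weight, chicken_weight / linear_term)
```

Even a 30 % credence in linear scaling is enough to make the chicken's expected weight come out within a fraction of a percent of the purely linear contribution, which is the "roughly proportional" behavior described above.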
Thanks, Michael.
These numbers are already compatible with individual welfare per animal-year proportional to “number of neurons”^0.5, which has been my speculative best guess. This suggests 1 fully happy human-year has 18.9 k (= 1/(5.28*10^-5)) times as much welfare as 1 fully happy soil-nematode-year.
I have also been updating towards a view closer to this. I wonder whether it implies prioritising microorganisms (relatedly). There are 3*10^29 soil archaea and bacteria, 613 M (= 3*10^29/(4.89*10^20)) times as many as soil nematodes.
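For concreteness, the two ratios above can be reproduced with a short calculation. The human neuron count of ~10^11 is my assumption (published counts range between roughly 8.6*10^10 and 1.1*10^11, which is why this sketch gives ~18 k rather than exactly 18.9 k); the population counts are taken from the comments above.

```python
human_neurons = 1e11        # assumed; sources vary around this value
nematode_neurons = 302

# Individual welfare per animal-year proportional to neurons**0.5.
welfare_ratio = (human_neurons / nematode_neurons) ** 0.5
print(welfare_ratio)        # ~1.8 * 10^4, in the ballpark of the 18.9 k figure

soil_microbes = 3e29        # soil archaea and bacteria
soil_nematodes = 4.89e20
abundance_ratio = soil_microbes / soil_nematodes
print(abundance_ratio)      # ~6.13 * 10^8, i.e. the ~613 M in the comment
```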
As a side note, what I do not find reasonable is individual welfare per animal-year being proportional to 2^”number of neurons”.
Agreed. In addition to the estimates in that section for the effects on soil animals as a fraction of those on the target beneficiaries, I have some for the total welfare of animal populations. For individual welfare per animal-year proportional to the number of neurons, I estimate the absolute value of the total welfare of soil nematodes is 47.6 times that of humans.
As I just commented, I like this point for understanding your general orientation better, but I do not seem to agree with the sentiment about the impact of moral views on cause prioritisation. It makes sense to have 4 years with an impact of 0 over a career of 44 years if doing so increases the impact of the remaining 40 years (= 44 − 4) by more than 10 % (= 4/40). In this case, the impact would not be 0 “in most circumstances” (40/44 = 90.9 % > 50 %). So I very much agree with a literal interpretation of the above. However, I feel like it conveys that moral views and cause prioritisation are less important than they actually are.
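The break-even arithmetic above can be made explicit (the 1-unit-of-impact-per-year baseline is just an arbitrary normalisation, not a claim about anyone's actual impact):

```python
career_years = 44
prioritisation_years = 4                             # impact of 0 during these
direct_years = career_years - prioritisation_years   # 40

baseline_impact = career_years * 1.0                 # 1 unit of impact per year

# Boost to the remaining years needed to break even on the 4 "lost" years.
breakeven_boost = prioritisation_years / direct_years
print(breakeven_boost)  # 0.1, i.e. the 10 % in the comment above
```

Any boost above 10 % makes the 4 years of prioritisation research a net win over the 44-year career, even though it is a loss "in most circumstances" (years).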