Thanks for sharing!

I am fine with neglecting indirect effects on other wild animals besides the infected animals and screwworms, but I think these are the most directly affected (the goal of the intervention is their eradication), so they should be considered. Do you know whether the increase in welfare of the no longer infected wild animals would be larger than the decrease in welfare of the eradicated screwworms, assuming these have positive lives? If it takes 100 worm-years (i.e. 100 worms for 1 year) to kill a host animal, and each lethal infection is worse than the counterfactual death by 0.5 host-years of fully healthy life, the welfare per worm-year would only have to be more than 0.5 % (= 0.5/100) of the welfare per host-year of fully healthy life for the intervention to be harmful[1]. This seems possible considering that Rethink Priorities' median welfare range of silkworms is 0.388 % (= 0.002/0.515) of that of pigs. I also think the worms may have positive lives because they basically live inside the food they eat. I suspect there is a natural tendency to neglect the effect on worms because they are disgusting (at least to me[2]), but this is not a good reason to disregard their welfare.
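For concreteness, here is the break-even arithmetic as a minimal Python sketch. The variable names are mine; the first two numbers are just the illustrative assumptions above (not measured values), and the last two are Rethink Priorities' median welfare ranges.

```python
# Break-even BOTEC: how valuable would a worm-year have to be for
# eradication to be net harmful? All inputs are illustrative assumptions.
worm_years_per_lethal_infection = 100  # assumed: 100 worms for 1 year
harm_per_lethal_infection = 0.5        # assumed host-years of fully healthy life lost

# Welfare per worm-year, as a fraction of welfare per host-year of fully
# healthy life, above which eradication is net harmful (if worms have
# positive lives).
break_even = harm_per_lethal_infection / worm_years_per_lethal_infection
print(f"break-even fraction: {break_even:.3%}")  # 0.500%

# Comparison point: Rethink Priorities' median welfare ranges.
silkworm, pig = 0.002, 0.515
print(f"silkworm/pig welfare range: {silkworm / pig:.3%}")  # 0.388%
```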
I asked Matthias 15 days ago about the effects of reducing worm-years, and Matthias replied:
I haven't looked into this at all, but the effect of eradication efforts (whether through gene drive or the traditional sterile insect technique) is that screwworms stop reproducing and cease to exist, not that they die anguishing deaths.
I am not accounting for the effect of extending the lives of the animals which would no longer be infected because it is quite unclear whether wild animals have negative or positive lives.

[2] Writing this comment made me feel a bit disgusted.
Speaking for myself / not for anyone else here:
My (highly uncertain + subjective) guess is that each lethal infection is probably worse than 0.5 host-year equivalents, but the number of worms per host animal could plausibly vary significantly. That being said, I am personally fine with the assumption of modelling ~0 additional counterfactual suffering for screwworms that are never brought into existence, as opposed to e.g. an eradication campaign that involves killing existing animals.
I'm unsure how to think about the possibility that the screwworm species might be living significantly net positive lives, such that this trumps the benefit of reduced suffering from screwworm-caused deaths, but I'd personally prefer stronger evidence of wellbeing or harms on the worms' end to justify inaction here (i.e. not looking into the possibility/feasibility of this).
Again, speaking only for myself: I'm not personally fixated on either gene drives or sterile insect approaches! I am also very interested in finding out reasons to not proceed with the project, or to find alternative approaches, which doesn't preclude the possibility that the net welfare of screwworms should be more heavily weighed as a consideration. That being said, I would be surprised if something like "we should do nothing to alleviate host animal suffering because their suffering can provide more utils for the screwworm" were a sufficiently convincing reason to not do more work / investigation in this area (for nonutilitarian reasons), though I understand there is a set of assumptions / views one might hold that could drive disagreement here.[1]
If a highly uncertain BOTEC showed you that torturing humans would bring more utility to digital beings than the suffering inflicted on the humans, would you endorse allowing this? At what ratio would you change your mind, and how many OOMs of uncertainty on the BOTEC would you be OK with?
Or: would you be in favour of taking this further and spreading the screwworm globally simply because it provides more utils, rather than just not eradicating the screwworm?
Thanks, Bruce.

If a highly uncertain BOTEC showed you that torturing humans would bring more utility to digital beings than the suffering inflicted on the humans, would you endorse allowing this?
Yes, as I strongly endorse impartiality and hedonism. In practice, I do not see how torturing humans would be the best way to increase the welfare of digital minds. I assume torturing biological humans requires way more energy than torturing virtual non-sentient humans, and I think it is extremely unlikely that the digital minds would directly want humans to suffer (as opposed to being attached to a superficial property of some torture). Impartiality and hedonism often recommend actions widely considered bad in super remote thought experiments, but, as far as I am aware, none in real life.
I would also say torture is far from the right word to describe the suffering of the host animals. "Extreme torture" refers to excruciating pain, which "can't be sustained for long (e.g., hours, as opposed to minutes) without neurological shutdown". In contrast, I think it takes way more than minutes for screwworms to kill a host animal.
At what ratio would you change your mind, and how many OOMs of uncertainty on the BOTEC would you be OK with?
If the estimates of the increase in welfare of the digital minds and the decrease in welfare of the humans accounted for all considerations, including my prior that torture is rarely the best course of action, I would change my mind at a ratio of 1, as implied by impartiality. I also strongly endorse maximising expected welfare, as I do not see how one can reject the axioms of the von Neumann-Morgenstern utility theorem (completeness, transitivity, continuity, and independence), so I would not decide based on the uncertainty of estimates accounting for all the considerations. I would just use the uncertainty to determine how much to weight my priors and observations, with more uncertain observations resulting in me staying closer to my prior that torture is bad (as in inverse-variance weighting).
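Here is a minimal sketch of what I mean by inverse-variance weighting; all numbers below are purely hypothetical, chosen only to show how a very noisy estimate barely moves a strong prior.

```python
# Inverse-variance (precision) weighting of a prior and an observation.
# All numbers are purely hypothetical, for illustration only.

def combine(prior_mean: float, prior_var: float,
            obs_mean: float, obs_var: float) -> tuple[float, float]:
    """Precision-weighted combination of a prior and a noisy observation."""
    w_prior, w_obs = 1 / prior_var, 1 / obs_var
    mean = (w_prior * prior_mean + w_obs * obs_mean) / (w_prior + w_obs)
    var = 1 / (w_prior + w_obs)
    return mean, var

# Strong prior that torturing humans is very net negative, against a
# very uncertain BOTEC suggesting it is net positive.
mean, var = combine(prior_mean=-10, prior_var=1, obs_mean=5, obs_var=100)
print(round(mean, 2))  # -9.85: the noisy estimate barely moves the prior
```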
Or: would you be in favour of taking this further and spreading the screwworm globally simply because it provides more utils, rather than just not eradicating the screwworm?
The most cost-effective interventions under high uncertainty often involve decreasing uncertainty instead of executing the actions which currently look best. So my top priority would be assessing the welfare of the worms, not acting as though their welfare is sufficiently negative or positive. Matthias said "I haven't looked into this at all", so I guess there is room for the team to learn at least a bit.
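As a purely hypothetical illustration of why resolving uncertainty first can beat acting now, here is a toy value-of-information calculation; every number is made up for the example, not an estimate of the real situation.

```python
# Toy value-of-perfect-information calculation. All numbers are made up
# purely for illustration; they are not estimates of the real situation.
p_net_positive = 0.5    # assumed credence that screwworms have net positive lives
value_if_negative = 10  # assumed net benefit of eradication if their lives are net negative
value_if_positive = -3  # assumed net harm of eradication if their lives are net positive

# Acting now (eradicating without learning more):
ev_act_now = (1 - p_net_positive) * value_if_negative + p_net_positive * value_if_positive

# With perfect information, eradicate only when doing so helps
# (otherwise do nothing, for a value of 0):
ev_with_info = (1 - p_net_positive) * value_if_negative + p_net_positive * 0

print(ev_act_now)                 # 3.5
print(ev_with_info)               # 5.0
print(ev_with_info - ev_act_now)  # 1.5 = expected value of perfect information
```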
I'll say up front that I definitely agree that we should look into the impacts on worms a nonzero amount! The main reason for the comment is that I don't think the appropriate bar for whether or not the project warrants more investigation is whether or not it passes a BOTEC under your set of assumptions (which I am grateful to you for sharing; I respect your willingness to share this and your consistency).
Again, not speaking on behalf of the team, but I'm happy to bite the bullet and say that I'm much more willing to defer to some deontological constraints in the face of uncertainty, rather than follow impartiality and maximising expected value all the way to their conclusions, whatever those conclusions are. This isn't an argument against the end goal that you are aiming for, but more my best guess in terms of how to get there in practice.
Impartiality and hedonism often recommend actions widely considered bad in super remote thought experiments, but, as far as I am aware, none in real life.
I suspect this might be driven by it not being considered to be bad under your own worldview? Like it's unsurprising that your preferred worldview doesn't recommend actions that you consider bad, but actually my guess is that not working on global poverty and development because of the meat-eater problem is in fact an action that might be widely considered bad in real life under many reasonable operationalisations (though I don't have empirical evidence to support this).[1]
I do agree with you on the word choices under this technical conception of excruciating pain / extreme torture,[2] though I think the idea that it "definitionally" can't be sustained beyond minutes does have some potential failure modes. That being said, I wasn't actually using torture as a descriptor for the screwworm situation; I was more just illustrating what I might consider a point of difference between our views, i.e. that I would not be in favour of allowing humans to be tortured by AIs even if you created a BOTEC that showed this caused net positive utils in expectation; and I would not be in favour of an intervention to spread the New World screwworm around the world, even if you created a BOTEC that showed it was the best way of creating utils. I would reject these at least on deontological grounds in the current state of the world.
This is not to suggest that I think "widely considered bad" is a good bar here! A lot of moral progress came from ideas that were initially "widely considered bad". I am just suggesting that this particular defence of impartiality + hedonism, namely that it "does not recommend actions widely considered bad in real life", seems unlikely to be correct, simply because most people are not impartial hedonists to the extent you are.
I'll say up front that I definitely agree that we should look into the impacts on worms a nonzero amount!
Cool! I think this is the main point. If one is thinking about eradicating millions or billions of worms, it is worth spending at least some hours thinking about their welfare.
I suspect this might be driven by it not being considered to be bad under your own worldview?
I meant bad under the worldview of a random person, not me.
Like it's unsurprising that your preferred worldview doesn't recommend actions that you consider bad, but actually my guess is that not working on global poverty and development because of the meat-eater problem is in fact an action that might be widely considered bad in real life under many reasonable operationalisations (though I don't have empirical evidence to support this).
Given my endorsement of impartial hedonism, and worries about the meat-eating problem, I do not know whether decreasing human mortality is good or bad in many cases, which is definitely a controversial view. However, I do not think it implies any controversial recommendations. I recommend taking into account the effects on animals of global health and development interventions, and also donating more to animal welfare instead of global health and development, relative to a world where the meat-eating problem was not a concern. I guess most people would not see these recommendations as bad (and definitely not as bad as torturing people), although I assume many would see them as quite debatable.
Neither of which [extreme torture or excruciating pain] were my wording!
Among the categories of pain defined by the Welfare Footprint Project, I think your wording ("torture", not "extreme torture") is the closest to excruciating pain.
That being said, I wasn't actually using torture as a descriptor for the screwworm situation; I was more just illustrating what I might consider a point of difference between our views, i.e. that I would not be in favour of allowing humans to be tortured by AIs even if you created a BOTEC that showed this caused net positive utils in expectation; and I would not be in favour of an intervention to spread the New World screwworm around the world, even if you created a BOTEC that showed it was the best way of creating utils. I would reject these at least on deontological grounds in the current state of the world.
I said the estimation of the effects on humans and digital minds would have to account "for all considerations", so it would not be a "BOTEC" (back-of-the-envelope calculation). There is a strong prior against torturing humans, so one would need extremely strong evidence to update enough to do it. A random BOTEC from me would definitely not be enough.