My (highly uncertain + subjective) guess is that each lethal infection is probably worse than 0.5 host-year equivalents, though the number of worms per host animal could vary significantly. That being said, I am personally fine with modelling ~0 additional counterfactual suffering for screwworms that are never brought into existence, as opposed to e.g. an eradication campaign that involves killing existing animals.
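To make the "host-year equivalents" framing concrete, here is a minimal sketch in which every number is a purely hypothetical placeholder rather than an estimate:

```python
# Purely hypothetical placeholder numbers, only to illustrate the framing.
lethal_infections_per_year = 1_000_000        # hypothetical
host_year_equivalents_per_infection = 0.5     # the "worse than 0.5" guess above, taken as a lower bound
worms_per_host = 200                          # hypothetical; said above to vary a lot

host_years_of_suffering_averted = (lethal_infections_per_year
                                   * host_year_equivalents_per_infection)
worm_lives_prevented = lethal_infections_per_year * worms_per_host

print(host_years_of_suffering_averted, worm_lives_prevented)
```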
I’m unsure how to think about the possibility that screwworms might be living significantly net-positive lives, such that this trumps the benefit of reduced suffering from deaths caused by screwworms, but I’d personally prefer stronger evidence of wellbeing or harms on the worms’ end to justify inaction here (i.e. not looking into the possibility/feasibility of this).
Again, speaking only for myself—I’m not personally fixated on either gene drives or sterile insect approaches! I am also very interested in finding reasons not to proceed with the project, or to find alternative approaches, which doesn’t preclude the possibility that the net welfare of screwworms should be weighed more heavily as a consideration. That being said, I would be surprised if something like “we should do nothing to alleviate host animal suffering because their suffering can provide more utils for the screwworm” were a sufficiently convincing reason not to do more work / investigation in this area (for non-utilitarian reasons), though I understand there is a set of assumptions / views one might hold that could drive disagreement here.[1]
If a highly uncertain BOTEC showed you that torturing humans would bring more utility to digital beings than the suffering incurred by the humans, would you endorse allowing this? At what ratio would you change your mind, and how many OOMs of uncertainty on the BOTEC would you be OK with?
Or—would you be in favour of taking this further and spreading the screwworm globally simply because it provides more utils, rather than just not eradicating the screwworm?
If a highly uncertain BOTEC showed you that torturing humans would bring more utility to digital beings than the suffering incurred by the humans, would you endorse allowing this?
Yes, as I strongly endorse impartiality and hedonism. In practice, I do not see how torturing humans would be the best way to increase the welfare of digital minds. I assume torturing biological humans requires way more energy than torturing virtual non-sentient humans, and I think it is extremely unlikely that the digital minds would directly want humans to suffer (as opposed to being attached to a superficial property of some torture). Impartiality and hedonism often recommend actions widely considered bad in super remote thought experiments, but, as far as I am aware, none in real life.
I would also say torture is far from the right word to describe the suffering of the host animals. “Extreme torture” refers to excruciating pain, which “can’t be sustained for long (e.g., hours, as opposed to minutes) without neurological shutdown”. In contrast, I think it takes way more than minutes for screwworms to kill a host animal.
At what ratio would you change your mind, and how many OOMs of uncertainty on the BOTEC would you be OK with?
If the estimates of the increase in welfare of the digital minds and the decrease in welfare of the humans accounted for all considerations, including my prior that torture is rarely the best course of action, I would change my mind at a ratio of 1, as implied by impartiality. I also strongly endorse maximising expected welfare, as I do not see how one can reject the axioms of the von Neumann–Morgenstern utility theorem (completeness, transitivity, continuity, and independence), so I would not decide based on the uncertainty of estimates accounting for all the considerations. I would just use the uncertainty to determine how much to weight my priors and observations, with more uncertain observations resulting in me staying closer to my prior that torture is bad (as in inverse-variance weighting).
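As a minimal sketch of that inverse-variance weighting, with purely hypothetical numbers, the noisier the observation, the closer the combined estimate stays to the prior:

```python
# Inverse-variance weighting with purely hypothetical numbers:
# a strong prior that the action is bad, plus a very noisy estimate saying it is good.
prior_mean, prior_sd = -100.0, 10.0        # prior on the net effect
estimate_mean, estimate_sd = 50.0, 200.0   # highly uncertain estimate

w_prior = 1 / prior_sd**2
w_estimate = 1 / estimate_sd**2

combined_mean = (w_prior * prior_mean + w_estimate * estimate_mean) / (w_prior + w_estimate)
print(combined_mean)  # close to the prior, because the estimate is so uncertain
```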
Or—would you be in favour of taking this further and spreading the screwworm globally simply because it provides more utils, rather than just not eradicating the screwworm?
The most cost-effective interventions under high uncertainty are often ones that decrease uncertainty rather than execute the actions which currently look best. So my top priority would be assessing the welfare of the worms, not acting as though their welfare is sufficiently negative or positive. Matthias said “I haven’t looked into this at all”, so I guess there is room for the team to learn at least a bit.
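A minimal sketch of why reducing uncertainty can beat acting on the current best guess, again with purely hypothetical numbers:

```python
# Purely hypothetical numbers: compare acting now with first learning
# whether worm welfare is net negative, then choosing the better action.
p_worms_net_negative = 0.5
value_eradicate_if_negative = 100   # hosts spared, no positive worm lives lost
value_eradicate_if_positive = -40   # hosts spared, but positive worm lives lost
value_do_nothing = 0

ev_eradicate = (p_worms_net_negative * value_eradicate_if_negative
                + (1 - p_worms_net_negative) * value_eradicate_if_positive)
ev_act_now = max(ev_eradicate, value_do_nothing)

ev_with_info = (p_worms_net_negative * max(value_eradicate_if_negative, value_do_nothing)
                + (1 - p_worms_net_negative) * max(value_eradicate_if_positive, value_do_nothing))

print(ev_with_info - ev_act_now)  # expected value of learning about worm welfare first
```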
I’ll say up front that I definitely agree that we should look into the impacts on worms a nonzero amount! The main reason for the comment is that I don’t think the appropriate bar for whether the project warrants more investigation is whether it passes a BOTEC under your set of assumptions (which I am grateful to you for sharing—I respect your willingness to share this and your consistency).
Again, not speaking on behalf of the team—but I’m happy to bite the bullet and say that I’m much more willing to defer to some deontological constraints in the face of uncertainty, rather than follow impartiality and expected-value maximisation all the way to their conclusions, whatever those conclusions are. This isn’t an argument against the end goal you are aiming for, but more my best guess about how to get there in practice.
Impartiality and hedonism often recommend actions widely considered bad in super remote thought experiments, but, as far as I am aware, none in real life.
I suspect this might be driven by it not being considered bad under your own worldview? Like, it’s unsurprising that your preferred worldview doesn’t recommend actions that you consider bad, but actually my guess is that not working on global poverty and development because of the meat-eater problem is in fact an action that might be widely considered bad in real life under many reasonable operationalisations (though I don’t have empirical evidence to support this).[1]
I do agree with you on the word choices under this technical conception of excruciating pain / extreme torture,[2] though I think the idea that it ‘definitionally’ can’t be sustained beyond minutes does have some potential failure modes.

That being said, I wasn’t actually using torture as a descriptor for the screwworm situation; I was more just illustrating what I might consider a point of difference between our views, i.e. that I would not be in favour of allowing humans to be tortured by AIs even if you created a BOTEC that showed this caused net positive utils in expectation; and I would not be in favour of an intervention to spread the New World screwworm around the world even if you created a BOTEC that showed it was the best way of creating utils. I would reject these at least on deontological grounds in the current state of the world.
This is not to suggest that I think “widely considered bad” is a good bar here! A lot of moral progress came from ideas that were initially “widely considered bad”. I am just suggesting that this particular defence of impartiality + hedonism, namely that it “does not recommend actions widely considered bad in real life”, seems unlikely to be correct, simply because most people are not impartial hedonists to the extent you are.
I’ll say up front that I definitely agree that we should look into the impacts on worms a nonzero amount!
Cool! I think this is the main point. If one is thinking about eradicating millions or billions of worms, it is worth spending at least some hours thinking about their welfare.
I suspect this might be driven by it not being considered bad under your own worldview?
I meant bad under the worldview of a random person, not me.
Like, it’s unsurprising that your preferred worldview doesn’t recommend actions that you consider bad, but actually my guess is that not working on global poverty and development because of the meat-eater problem is in fact an action that might be widely considered bad in real life under many reasonable operationalisations (though I don’t have empirical evidence to support this).
Given my endorsement of impartial hedonism, and worries about the meat-eating problem, I do not know whether decreasing human mortality is good or bad in many cases, which is definitely a controversial view. However, I do not think it implies any controversial recommendations. I recommend taking into account the effects on animals of global health and development interventions, and also donating more to animal welfare and less to global health and development, relative to a world where the meat-eating problem was not a concern. I guess most people would not see these recommendations as bad (and definitely not as bad as torturing people), although I assume many would see them as quite debatable.
Neither of which [extreme torture or excruciating pain] were my wording!
Among the categories of pain defined by the Welfare Footprint Project, I think your wording (“torture”, not extreme torture) is the closest to excruciating pain.
That being said, I wasn’t actually using torture as a descriptor for the screwworm situation; I was more just illustrating what I might consider a point of difference between our views, i.e. that I would not be in favour of allowing humans to be tortured by AIs even if you created a BOTEC that showed this caused net positive utils in expectation; and I would not be in favour of an intervention to spread the New World screwworm around the world even if you created a BOTEC that showed it was the best way of creating utils. I would reject these at least on deontological grounds in the current state of the world.
I said the estimation of the effects on humans and digital minds would have to account “for all considerations”, so it would not be a “BOTEC” (back-of-the-envelope calculation). There is a strong prior against torturing humans, so one would need extremely strong evidence to update enough to do it. A random BOTEC from me would definitely not be enough.