I’ll say up front that I definitely agree that we should look into the impacts on worms a nonzero amount! The main reason for the comment is that I don’t think the appropriate bar for whether the project warrants more investigation is whether it passes a BOTEC under your set of assumptions (which I am grateful for you sharing; I respect your willingness to share this and your consistency).
Again, not speaking on behalf of the team, but I’m happy to bite the bullet and say that I’m much more willing to defer to some deontological constraints in the face of uncertainty, rather than follow impartiality and expected-value maximisation all the way to their conclusions, whatever those conclusions are. This isn’t an argument against the end goal that you are aiming for, but more my best guess in terms of how to get there in practice.
Impartiality and hedonism often recommend actions widely considered bad in super remote thought experiments, but, as far as I am aware, none in real life.
I suspect this might be driven by it not being considered bad under your own worldview? It’s unsurprising that your preferred worldview doesn’t recommend actions that you consider bad, but my guess is that not working on global poverty and development because of the meat-eater problem is in fact an action that might be widely considered bad in real life under many reasonable operationalisations (though I don’t have empirical evidence to support this).[1]
I do agree with you on the word choices under this technical conception of excruciating pain / extreme torture,[2] though I think the idea that it ‘definitionally’ can’t be sustained beyond minutes does have some potential failure modes. That being said, I wasn’t actually using torture as a descriptor for the screwworm situation; I was just illustrating what I might consider a point of difference between our views. That is, I would not be in favour of allowing humans to be tortured by AIs even if you created a BOTEC that showed this caused net positive utils in expectation, and I would not be in favour of an intervention to spread the new world screwworm around the world, even if you created a BOTEC that showed it was the best way of creating utils. I would reject these at least on deontological grounds in the current state of the world.
This is not to suggest that I think “widely considered bad” is a good bar here! A lot of moral progress came from ideas that initially were “widely considered bad”. I am just suggesting that this particular defence of impartiality + hedonism, namely that it “does not recommend actions widely considered bad in real life”, seems unlikely to be correct, simply because most people are not impartial hedonists to the extent you are.
I’ll say up front that I definitely agree that we should look into the impacts on worms a nonzero amount!
Cool! I think this is the main point. If one is thinking about eradicating millions or billions of worms, it is worth spending at least some hours thinking about their welfare.
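For concreteness, here is a minimal sketch in Python of the kind of BOTEC I have in mind. Every input is a hypothetical placeholder rather than an estimate I endorse; the point is only that the result scales linearly with each input, so the conclusion is very sensitive to one’s set of assumptions.

```python
# Minimal sketch of a worm-welfare BOTEC. All inputs below are
# hypothetical placeholders, not estimates anyone in this thread endorses.

n_worms = 1e9              # worms affected per year (hypothetical)
p_sentience = 0.05         # probability worms are sentient (hypothetical)
welfare_per_worm = -0.001  # expected welfare per worm-year, in units where
                           # 1 = a fully happy human-year (hypothetical)

# Expected welfare impact = population * P(sentience) * welfare per worm.
expected_welfare = n_worms * p_sentience * welfare_per_worm
print(f"Expected welfare impact: {expected_welfare:,.0f} human-year-equivalents")

# Varying p_sentience or welfare_per_worm by 10x moves the result by 10x,
# which is why the bottom line depends so heavily on the assumptions.
```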
I suspect this might be driven by it not being considered bad under your own worldview?
I meant bad under the worldview of a random person, not my own.
It’s unsurprising that your preferred worldview doesn’t recommend actions that you consider bad, but my guess is that not working on global poverty and development because of the meat-eater problem is in fact an action that might be widely considered bad in real life under many reasonable operationalisations (though I don’t have empirical evidence to support this).
Given my endorsement of impartial hedonism, and worries about the meat-eating problem, I do not know whether decreasing human mortality is good or bad in many cases, which is definitely a controversial view. However, I do not think it implies any controversial recommendations. I recommend taking into account the effects on animals of global health and development interventions, and also donating more to animal welfare instead of global health and development relative to a world where the meat-eating problem was not a concern. I guess most people would not see these recommendations as bad (and definitely not as bad as torturing people), although I assume many would see them as quite debatable.
Neither of which [extreme torture or excruciating pain] were my wording!
Among the categories of pain defined by the Welfare Footprint Project, I think your wording (“torture”, not extreme torture) is the closest to excruciating pain.
That being said, I wasn’t actually using torture as a descriptor for the screwworm situation; I was just illustrating what I might consider a point of difference between our views. That is, I would not be in favour of allowing humans to be tortured by AIs even if you created a BOTEC that showed this caused net positive utils in expectation, and I would not be in favour of an intervention to spread the new world screwworm around the world, even if you created a BOTEC that showed it was the best way of creating utils. I would reject these at least on deontological grounds in the current state of the world.
I said the estimation of the effects on humans and digital minds would have to account “for all considerations”, so it would not be a “BOTEC” (back-of-the-envelope calculation). There is a strong prior against torturing humans, so one would need extremely strong evidence to update enough to do it. A random BOTEC from me would definitely not be enough.
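To put that in numbers (an illustrative sketch only; the prior odds and likelihood ratio below are made up, not claims about the actual magnitudes):

```python
# Sketch of why a strong prior demands extremely strong evidence,
# using Bayes' rule in odds form. All numbers are illustrative placeholders.

prior_odds = 1e-6         # prior odds that the action is net positive (hypothetical)
likelihood_ratio = 100.0  # evidential strength of, say, a single BOTEC (hypothetical)

# Posterior odds = prior odds * likelihood ratio.
posterior_odds = prior_odds * likelihood_ratio
print(f"Posterior odds: {posterior_odds:.6f}")  # 0.000100, still overwhelmingly against

# With these placeholders, a likelihood ratio above 1e6 would be needed just
# to reach even odds, which is why a random BOTEC would not be enough.
```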