It's not so much that there's a specific threshold away from 50%, it's more that if you're wildly uncertain and it's highly speculative, rather than assigning a single precise probability like 55%, you should use a range of probabilities, say 40% to 70%. This range has values on either side of 50%. Then:
If you were difference-making ambiguity averse,[1] then both increasing their populations would look bad (possibly more bad lives in expectation) and decreasing their populations would look bad (possibly fewer good lives in expectation). You'd want to minimize these effects, by avoiding interventions with such large predictable effects on wild animal population sizes, or by hedging.
If you were ambiguity averse (not difference-making), then I imagine you'd want to decrease their populations. The worst possibilities for animals in the near term are those where wild invertebrates are sentient and have horrible lives in expectation, and you'd want to make those less bad. But s-risks (and especially hellish existential risks) would plausibly dominate instead, if you can robustly mitigate them.
On a different account dealing with imprecise credences, when we reduce their populations, you might say these wild animals are neither better off in expectation (in case they have good lives in expectation), nor are they worse off in expectation (in case they have bad lives in expectation), so we can ignore them, via a principle that extends the Pareto principle (Hedden, 2024).
(I'm assuming we're ruling out an average welfare of exactly 0 or assigning that negligible probability, EDIT: conditional on sentience/"having any welfare at all".)
On standard accounts of difference-making ambiguity aversion, which I think are problematic. I'm less sure about the implications of other accounts. See my 2024 post.
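The worst-case reasoning in the points above can be sketched numerically. This is a toy illustration, not anything from the discussion: the ±1 welfare values and the one-life population changes are arbitrary assumptions. A difference-making ambiguity-averse agent (on the standard account) evaluates each act by its worst-case expected difference across the credence interval that lives are net-negative, and with a range straddling 50%, both increasing and decreasing the population come out negative.

```python
# Toy model: imprecise credence that wild invertebrate lives are net-negative,
# evaluated by worst-case expected difference (difference-making ambiguity aversion).
# All numbers are assumptions for illustration only.

def expected_difference(p_bad: float, delta: float) -> float:
    """Expected change in total welfare from adding `delta` lives,
    given probability p_bad that the average life is net-negative
    (welfare -1 per life if bad, +1 if good)."""
    avg_welfare = p_bad * (-1.0) + (1.0 - p_bad) * (+1.0)
    return delta * avg_welfare

# The 40%-70% range from the comment, discretized for the sketch.
credence_interval = [0.4, 0.5, 0.6, 0.7]

for act, delta in [("increase population", +1.0), ("decrease population", -1.0)]:
    # The ambiguity-averse evaluation: worst case over the credence interval.
    worst_case = min(expected_difference(p, delta) for p in credence_interval)
    print(act, round(worst_case, 2))  # both acts have a negative worst case
```

Because the interval contains credences on both sides of 50%, the sign of the average welfare flips across the interval, so whichever direction you push the population, some admissible credence makes the act look bad in expectation.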
Thanks for clarifying, Michael.
I would agree any particular value for the welfare per animal-year has a negligible probability because my probability distribution is practically continuous, such that there are lots of values around any particular one.
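A quick way to see the point above: under a continuous distribution, no exact value is ever hit, but small intervals around a value still carry real mass. The normal distribution and its parameters here are arbitrary stand-ins for a continuous credence distribution over welfare per animal-year.

```python
# Sketch: a practically continuous distribution assigns ~zero probability to
# any exact welfare value, but non-negligible probability to values near it.
import random

random.seed(0)  # deterministic for reproducibility
samples = [random.gauss(-0.1, 1.0) for _ in range(100_000)]  # assumed distribution

# Fraction of draws exactly equal to 0.0 vs. within a small interval of 0.
exact_zero = sum(1 for w in samples if w == 0.0) / len(samples)
near_zero = sum(1 for w in samples if abs(w) < 0.05) / len(samples)

print(exact_zero)  # essentially no draw hits 0.0 exactly
print(near_zero)   # but a small, non-negligible mass sits near 0
```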
Fascinating discussion between the two of you here, thanks.
I have one comment: I don't think their welfare being exactly 0 should have negligible probability. If we consider an animal like the soil nematode, I think there should be a significant probability assigned to the possibility that they are not sentient, unless I'm missing something?
Yes, absolutely right about 0 being possible and reasonably likely. Maybe I'd say "average welfare conditional on having any welfare at all". I only added that so that X% likely to be negative meant (100-X)% likely to be positive, in order to simplify the argument.
Thanks, Toby! Credits go to Michael.
I think "probability of sentience" * "expected welfare conditional on sentience" >> (1 - "probability of sentience") * "expected welfare conditional on non-sentience", such that the expected welfare can be estimated from the first expression. However, I would say the expected welfare conditional on non-sentience is not exactly 0. For this to be the case, one would have to be certain that a welfare of exactly 0 follows from failing to satisfy the sentience criteria, which is not possible. Yet, in practice, it could still be the case that there is a decent probability mass on a welfare close to 0.
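The decomposition above can be written out with placeholder numbers. All three inputs below are hypothetical, chosen only to show the sentience term dominating while the non-sentience term stays small but nonzero.

```python
# Sketch of: E[welfare] = P(sentient) * E[welfare | sentient]
#                       + (1 - P(sentient)) * E[welfare | not sentient]
# All values are assumptions for illustration, not estimates.

p_sentient = 0.1                # assumed probability of sentience
welfare_if_sentient = -2.0      # assumed expected welfare conditional on sentience
welfare_if_not_sentient = 1e-6  # tiny but not exactly 0, per the comment

sentient_term = p_sentient * welfare_if_sentient
non_sentient_term = (1.0 - p_sentient) * welfare_if_not_sentient

expected_welfare = sentient_term + non_sentient_term
print(abs(sentient_term) > 1000 * abs(non_sentient_term))  # first term dominates
print(expected_welfare)  # approximately equal to the first term alone
```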