Gemini provided best guesses for soil nematodes, mites, and springtails of −67 %, −44 %, and −38 %, which are 1.60, 1.05, and 0.905 times Ambitious Impact’s estimate of −42 % for wild bugs based on their deprecated welfare points system.
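For readers who want to check the arithmetic, the ratios above follow directly from the quoted best guesses. A minimal sketch (the labels in the dictionary are mine, not terms from either analysis):

```python
# Ratios of Gemini's best guesses for welfare per animal-year (as a fraction
# of that of fully healthy animals) to Ambitious Impact's -42% estimate for
# wild bugs, as quoted above.
gemini_guesses = {"soil nematodes": -0.67, "mites": -0.44, "springtails": -0.38}
ambitious_impact_wild_bugs = -0.42

ratios = {
    species: guess / ambitious_impact_wild_bugs
    for species, guess in gemini_guesses.items()
}
for species, ratio in ratios.items():
    print(f"{species}: {ratio:.3g}")  # 1.6, 1.05, 0.905
```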
Outsourcing the welfare estimates to Gemini seems like a risky move to me. It’s a key part of the whole analysis, but is an extremely challenging question to begin answering. What’s the reason to expect Gemini to be able to do a good job of this, given the blind spots we know current AI models still have?
I tried pasting your prompt into ChatGPT, with research mode, and the 3 value estimates it gave were all positive, rather than negative: https://chatgpt.com/share/683f34e8-8088-8006-8ba4-b719d025ac45
If I’m understanding right, this would flip all your conclusions on their head, and instead of trying to eliminate wild animal habitats, the top priority would be to increase them?
Such extreme sensitivity to highly uncertain quantities strikes me as a strong reductio ad absurdum argument against this approach to decision making on this kind of question. Otherwise we find ourselves oscillating wildly between “destroy all nature” and “destroy all humans” on the basis of each piece of new information, never being especially confident in either.
I like Bayesianism and expected value maximization as a framework for decision making under uncertainty, but when considering situations with enormous amounts of value described by extremely speculative probability estimates, I think we probably need to approach things differently (or at least adapt our priors so as to be less sensitive to these kinds of problems). Something like Holden Karnofsky’s approach here (which Anthony DiGiovanni shared with me on a recent post on insect suffering).
Edit: Anthony DiGiovanni doesn’t actually endorse Holden Karnofsky’s approach (see Anthony’s comment below).
I’ll add that even if I would make different methodological choices, I think it’s still useful to highlight the scale of indirect effects on wild animals. The default in the community seems to be to ignore these effects, and there doesn’t seem to be good justification for that. I think it’s great that Vasco is taking these effects seriously and seeing where they might lead.
(And, as in my other comment, the conclusions and analysis could hold approximately anyway for those sufficiently pessimistic about the lives of wild invertebrates, or who give enough weight to sufficiently suffering-focused views.)
Thanks, Michael!
I agree both with the specific point about using LLMs and the more general point about sensitivity to highly speculative and ambiguous values. I would endorse imprecise credences, and the use of approaches to decision-making with imprecise credences. See also Anthony’s piece against precise Bayesianism.[1]
That being said, if you’re sufficiently suffering-focused or confident that their lives are negative on average (or confident that their lives are positive on average), then you don’t have to worry about this too much.
On difference-making ambiguity aversion as one natural group of approaches, see my 2024 post, my 2020 post and Greaves et al., 2022. I’m not confident these are the best approaches for dealing with imprecise credences (if averse to fanaticism).
Thanks, Michael. Would the approaches you mention only recommend acting on the basis that wild animals have negative lives if the probability of this was sufficiently high? If so, why would my estimates of the probability of soil nematodes, mites, and springtails having negative lives of 58.7 %, 55.8 %, and 55.0 % be too low, but, for example, 70 % be sufficiently high?
It’s not so much that there’s a specific threshold away from 50%; it’s more that if you’re wildly uncertain and the question is highly speculative, then rather than assigning a single precise probability like 55%, you should use a range of probabilities, say 40% to 70%. This range has values on either side of 50%. Then:
If you were difference-making ambiguity averse,[1] then both increasing their populations would look bad (possibly more bad lives in expectation) and decreasing their populations would look bad (possibly fewer good lives in expectation). You’d want to minimize these effects, by avoiding interventions with such large predictable effects on wild animal population sizes, or by hedging.
If you were ambiguity averse (not difference-making), then I imagine you’d want to decrease their populations. The worst possibilities for animals in the near term are those where wild invertebrates are sentient and have horrible lives in expectation, and you’d want to make those less bad. But s-risks (and especially hellish existential risks) would plausibly dominate instead, if you can robustly mitigate them.
On a different account dealing with imprecise credences, when we reduce their populations, you might say these wild animals are neither better off in expectation (in case they have good lives in expectation), nor are they worse off in expectation (in case they have bad lives in expectation), so we can ignore them, via a principle that extends the Pareto principle (Hedden, 2024).
(I’m assuming we’re ruling out an average welfare of exactly 0 or assigning that negligible probability, EDIT: conditional on sentience/having any welfare at all.)
On standard accounts of difference-making ambiguity aversion, which I think are problematic. I’m less sure about the implications of other accounts. See my 2024 post.
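The point about credal ranges straddling 50% can be sketched numerically. This is a toy model with made-up numbers (the symmetric ±1 welfare units and the 40%–70% range are illustrative assumptions, not anyone’s actual estimates):

```python
# Toy model: the sign of the expected value of adding one wild-animal-year
# flips across an imprecise credence range that straddles 50%.
# p_negative is the credence that the average life is negative; welfare is
# +1 per animal-year if lives are positive and -1 if negative (arbitrary units).

def expected_value_of_adding(p_negative: float, w: float = 1.0) -> float:
    """Expected welfare of adding one animal-year under credence p_negative."""
    return (1 - p_negative) * w - p_negative * w

# An imprecise credence represented as a set of probabilities, e.g. 40%-70%.
credal_set = [0.40, 0.55, 0.70]

for p in credal_set:
    ev = expected_value_of_adding(p)
    print(f"p(negative lives) = {p:.2f} -> EV of adding = {ev:+.2f}")
# The EV is positive at p = 0.40 and negative at p = 0.70, so neither
# increasing nor decreasing populations is unambiguously good.
```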
Thanks for clarifying, Michael.
I would agree any particular value for the welfare per animal-year has a negligible probability, because my probability distribution is practically continuous, such that there are lots of values around any particular one.
Fascinating discussion between the two of you here, thanks.
I have one comment: I don’t think their welfare being exactly 0 should have negligible probability. If we consider an animal like the soil nematode, I think there should be a significant probability assigned to the possibility that they are not sentient, unless I’m missing something?
Yes, absolutely right about 0 being possible and reasonably likely. Maybe I’d say “average welfare conditional on having any welfare at all”. I only added that so that X% likely to be negative meant (100-X)% likely to be positive, in order to simplify the argument.
Thanks, Toby! Credits go to Michael.
I think P(sentience) × E[welfare | sentience] >> (1 − P(sentience)) × E[welfare | non-sentience], such that the expected welfare can be estimated from the first expression. However, I would say the expected welfare conditional on non-sentience is not exactly 0. For this to be the case, one would have to be certain that a welfare of exactly 0 follows from failing to satisfy the sentience criteria, which is not possible. Yet, in practice, it could still be the case that there is a decent probability mass on welfare values close to 0.
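As a sketch of that decomposition with made-up numbers (the 10% sentience probability and the conditional welfare values below are purely illustrative, not estimates from the post):

```python
# Decomposition of expected welfare, with hypothetical numbers:
# E[welfare] = P(sentient) * E[welfare | sentient]
#            + (1 - P(sentient)) * E[welfare | not sentient]

p_sentient = 0.10            # illustrative probability of sentience
ew_if_sentient = -0.50       # illustrative welfare conditional on sentience
ew_if_not_sentient = 1e-6    # tiny but, per the argument above, not exactly 0

sentient_term = p_sentient * ew_if_sentient
non_sentient_term = (1 - p_sentient) * ew_if_not_sentient
expected_welfare = sentient_term + non_sentient_term

# The first term dominates by several orders of magnitude, so
# E[welfare] ~= P(sentient) * E[welfare | sentient].
print(abs(sentient_term) / abs(non_sentient_term))
print(expected_welfare)
```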
Thanks for the comment, Toby.
I put little trust in Gemini’s or anyone’s estimates about whether soil nematodes, mites, and springtails have positive or negative lives. However, my conclusions do not depend on the specific values of Gemini’s guesses: any guesses for the hedonistic welfare per animal-year, as a fraction of that of fully healthy animals, which were negative and not very close to 0 would lead to similar conclusions. In addition, my sense is that most people working on wild animal welfare would guess soil nematodes, mites, and springtails have negative lives. I have now clarified this in the post. Finally, Gemini’s estimates are in close agreement with Ambitious Impact’s estimate for wild bugs.
Right.
Uncertainty about whether wild animals have positive or negative lives only directly translates into uncertainty about whether one should increase or decrease wild-animal-years at the margin, which is not absurd, and neither is my recommendation of saving human lives cost-effectively. Neither killing all wild animals nor killing all humans is a live option.
Approaches neglecting the effects on wild animals would be implicitly considering them negligible. For this to be the case, I think one would need an unreasonably certain prior that wild animals have welfare almost exactly equal to 0.
(Context for other readers: To be clear, I don’t endorse Karnofsky’s model, which I think is kind of ad hoc and doesn’t address the root problem of arbitrariness in our credences. The least bad epistemic framework for addressing that problem, IMO, is imprecise probabilities (accounting for unawareness).)
Sorry, you did say this in the other thread as well and I should have made that clear in my comment originally. Have now edited.
Another potentially useful takeaway is that these interventions Vasco considered, or at least diet change interventions like Veganuary and School Plates, are not robustly positive in expectation when considering just the near-term effects on animals. So why would we support them?
These interventions don’t seem justified by their direct cost-effectiveness, unless we have adequate reason to single out those effects and ignore or discount the effects on wild terrestrial invertebrates. We’d need a good reason to single out the direct effects, or refer to even more indirect or longer term reasons (e.g. moral circle expansion, space colonization and s-risks).
Thanks, Michael.
I personally only care about the expected (posterior) impact. One can get a smaller expected impact by positing a more certain prior impact, but I do not know what the justification would be for being a priori very confident about the impact being 0.
I agree the interventions I considered are not robustly beneficial in expectation. However, I would not single out interventions changing the consumption of animal-based food (among the ones I analysed, all besides the broiler welfare and cage-free campaigns, and HSI). I estimate broiler welfare and cage-free corporate campaigns benefit soil animals 444 and 28.2 times as much as they benefit chickens.
Jim Buhler clarified what would be needed to neglect the uncertain near-term effects of interventions targeting animals. I think the effects after 100 years or so are negligible, but that people neglecting near-term effects due to their uncertainty should neglect more uncertain long-term effects even more.