Thanks for the great post, Mal! I strongly upvoted it.
I argue that, when applied consistently across cause areas, none of these approaches suggest wild animal welfare is distinctively intractable compared to global health or AI safety.
Agreed. In addition, I do not think wild animal welfare is distinctively intractable compared to interventions focusing on non-wild animals. I am uncertain to the point that I do not know whether electrically stunning shrimp increases or decreases welfare in expectation, and I see it as one of the interventions where effects on non-target beneficiaries are the least important.
Another group of people attempting to include all moral patients in their analyses seems to basically reject cluelessness by trying to calculate (at least partially based on intuitions) the effects of interventions on as many questionably sentient moral patients as possible (for example, see this post). The idea is to come up with all the effects you can think of and assign precise probabilities to every possible outcome, even in the face of deep uncertainty. You can even assign some kind of modifier to capture all the "unknown unknowns."
[...] As a result, your views become volatile: You might determine an AI policy is net positive today, then completely reverse that judgment months later, after minor updates. Although some may think that this outcome is an unfortunate but necessary aspect of the "right" decision theory, it is extremely hard to see how one might run a movement this way. Switching from endorsing bird-safe glass to not endorsing it on a monthly basis would lead to little impact and few supporters.
In cases where there is large uncertainty about whether an intervention increases or decreases welfare (in expectation), I believe it is very often better to support interventions decreasing that uncertainty. In the post of mine linked above, my top recommendation is decreasing the uncertainty about whether soil nematodes have positive or negative lives. I tried to be clearer about decreasing uncertainty being my priority here.
At the same time, I would not say constantly switching between 2 options which can easily increase or decrease welfare in expectation is robustly worse than just pursuing one of them. The constant switching would achieve no impact, but it is unclear whether this is better or worse than pursuing a single option if there is large uncertainty about whether it increases or decreases welfare.
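To illustrate (just a minimal sketch with made-up numbers, not a claim about any particular intervention), one can model committing to a single option as accumulating a per-period effect of uncertain sign, and constant switching as implementing nothing. Under risk-neutral expected value, neither clearly beats the other:

```python
import random

# Toy illustration with made-up numbers: the true per-period welfare effect of a
# single option has an uncertain sign (mean 0, wide spread), while constantly
# switching is modelled as implementing nothing at all.
random.seed(0)

n_sims, n_periods = 100_000, 12
commit = [random.gauss(0, 1) * n_periods for _ in range(n_sims)]  # stick with one option
switch = [0.0] * n_sims  # endorse-then-retract cycles implement nothing

mean = lambda xs: sum(xs) / len(xs)
sd = lambda xs: (sum((x - mean(xs)) ** 2 for x in xs) / len(xs)) ** 0.5

# Both have an expected value of roughly 0, so neither robustly beats the other;
# committing just has a much wider spread of possible outcomes.
print(f"commit: mean {mean(commit):.2f}, standard deviation {sd(commit):.2f}")
print(f"switch: mean {mean(switch):.2f}, standard deviation {sd(switch):.2f}")
```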
Hi Vasco! Thanks for the comment. I agree with you that switching is not necessarily worse (depending on your goals and principles) than just pursuing one uncertain intervention. I also agree with you that research is important when you find yourself in such a position; it's why I've dedicated my career to research :) And critically, I appreciate the clarification that "decreasing uncertainty" is your priority; I didn't realize that from past posts, but I think your most recent one is clear on that.
One thing I'll just mention as a matter of personal inclination: I feel unenthusiastic about precise probabilities for more reasons than just the switching issue (I pointed it out just to add to the discourse about things someone with that view should reflect on). Personally, it just doesn't feel accurate to my own epistemic state. When I look at my own uncertainties of this kind, it feels almost like lying to put a precise number on them (I'm not saying others should feel this way, just that it is how I feel). So that's the most basic reason (among the other sorts of theoretical reasons out there) that I feel attached to imprecise probabilities.
And critically, I appreciate the clarification that "decreasing uncertainty" is your priority; I didn't realize that from past posts, but I think your most recent one is clear on that.
Yes, I think I could have been clearer about it in the past. Now I am also more uncertain. I previously thought increasing agricultural land was a pretty good heuristic for decreasing soil-animal-years, but it looks like it may easily increase these due to increasing soil-nematode-years.
When I look at my own uncertainties of this kind, it feels almost like lying to put a precise number on them (I'm not saying others should feel this way, just that it is how I feel). So that's the most basic reason (among the other sorts of theoretical reasons out there) that I feel attached to imprecise probabilities.
Makes sense. However, I would simply assign roughly the same probability to values (of a variable of interest) I feel very similarly about. The distribution representing the different possible values will be wider if one is indifferent between more of them. Yet, I do not understand how one could accept imprecise probabilities. In my mind, a given value is always less, more, or as likely as another. I would not be able to distinguish between the masses of 2 objects of 1 and 1.001 kg just by holding them in my hands, but this does not mean their masses are incomparable.
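As a minimal sketch of what I mean (with hypothetical numbers), being indifferent between more candidate values just produces a wider precise distribution, and every pair of values remains comparable:

```python
# Hypothetical numbers: representing indifference with a precise but wide
# distribution over candidate values of a variable of interest.
values_narrow = [-1, 0, 1]
values_wide = [-4, -3, -2, -1, 0, 1, 2, 3, 4]

# Roughly the same probability is assigned to values one feels very similarly about.
p_narrow = [1 / len(values_narrow)] * len(values_narrow)
p_wide = [1 / len(values_wide)] * len(values_wide)

def mean(values, probs):
    return sum(v * p for v, p in zip(values, probs))

def variance(values, probs):
    m = mean(values, probs)
    return sum(p * (v - m) ** 2 for v, p in zip(values, probs))

# Same mean, but indifference between more values yields a wider distribution.
print(mean(values_narrow, p_narrow), variance(values_narrow, p_narrow))  # 0.0 and about 0.67
print(mean(values_wide, p_wide), variance(values_wide, p_wide))          # 0.0 and about 6.67

# Each value still has a precise probability, so any value is always less, more,
# or as likely as any other.
```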