Pascal's bugging and the Rebugnant Conclusion (Sebo, 2024). :P
Interested to hear from Insect Welfare and Wild Animal Welfare advocates why they disagree that nematodes are the primary moral concern of planet Earth.
I'm sympathetic to difference-making risk aversion and difference-making ambiguity aversion (although see here) and assign nematodes a quite low probability of mattering much at all to me, low enough for now that I'm inclined to ignore them altogether (and what would have gone to nematodes instead goes to mitigating s-risks). Mites, springtails, copepods and insect larvae seem substantially more likely to matter to me, based on my beliefs about their capacities.
Still, I'd rather not go 100% on invertebrates either, also due to my difference-making sympathies. I'd treat this like normative uncertainty and use a kind of bucket approach (like the Property Rights approach and hedging), with normative uncertainty about difference-making and about approaches to dealing with uncertainty, about the nature of consciousness and moral patienthood and how to deal with it (although also see this), and about aggregation. So, roughly, in practice, based on the probabilities of making a difference, the probabilities of moral patienthood, and attitudes towards risk and aggregation, I have a humans bucket, a mammals and birds bucket, a fish bucket, a shrimp and insects bucket, a mites, springtails and copepods bucket, and an s-risks bucket.
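To make the bucket idea concrete, here is a minimal sketch of how such an allocation could work. All the weights below are invented for illustration; in practice each weight would be a judgment call combining the probability of moral patienthood, the probability of making a difference, and attitudes towards risk and aggregation.

```python
# Hypothetical sketch of a bucket allocation (all numbers invented).
# Each cause bucket gets a weight, and the budget is split proportionally.

buckets = {
    "humans": 0.30,
    "mammals and birds": 0.20,
    "fish": 0.15,
    "shrimp and insects": 0.15,
    "mites, springtails and copepods": 0.05,
    "s-risks": 0.15,
}

budget = 1000.0  # e.g. a yearly donation budget in dollars

# Normalize in case the weights do not sum exactly to 1.
total = sum(buckets.values())
allocation = {cause: budget * w / total for cause, w in buckets.items()}

for cause, amount in allocation.items():
    print(f"{cause}: ${amount:.2f}")
```

The point of the buckets is that each one is funded regardless of how the others compare in expected value, which is how the approach hedges across normative views rather than going all-in on one.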
Thanks for sharing, Michael! I would be curious to know which donations you would recommend if you fully endorsed expectational total hedonistic utilitarianism, like I do, as a moral theory (not necessarily as a decision criterion).
If using precise credences, then I'd be a strong longtermist (probably focusing on existential risks of some kind) or chase infinities. I haven't thought a lot from this perspective about practical donation recommendations, if I'm assuming not suffering-focused. If suffering-focused (like I actually am), then probably CLR.
I would say a 10^-100 chance of 10^100 QALY is as good as 1 QALY. However, even if I thought the risk of human extinction over the next 10 years was 10% (I guess it is 10^-7), I would not conclude decreasing it would be astronomically cost-effective. One should be scope-sensitive not only to large potential benefits, but also to their small probabilities. Longtermists typically come up with huge amounts of benefits (e.g. 10^50 QALY), and then independently guess a probability which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. 10^40 QALY). Yet the amount of benefits is not independent of their probability. For reasonable distributions describing the benefits, I think the expected benefits coming from very large benefits will be negligible. For example, if the benefits are described by a power law distribution with tail index alpha > 0, their probability will be proportional to "benefits"^-(1 + alpha), so the expected benefits linked to a given amount of benefits will be proportional to "benefits" * "benefits"^-(1 + alpha) = "benefits"^-alpha. This decreases with benefits, so the expected benefits coming from astronomical benefits will be negligible.
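The power-law argument above can be sketched numerically. This is a toy illustration under the stated assumption (benefits follow a power law with tail index alpha > 0 and a minimum benefit of 1); the specific alpha and benefit sizes are mine, chosen only to show the qualitative behavior.

```python
# Toy sketch: if benefits B follow a power law with tail index alpha > 0,
# the density scales as B^-(1 + alpha), so the contribution of benefits of
# size B to the expected value scales as B * B^-(1 + alpha) = B^-alpha,
# which vanishes for astronomical B.

alpha = 1.5  # assumed tail index; any alpha > 0 gives the same qualitative picture

for b in [1e1, 1e5, 1e10, 1e50]:
    density = alpha * b ** -(1 + alpha)  # Pareto density with minimum benefit 1
    contribution = b * density           # proportional to b^-alpha
    print(f"benefits = {b:.0e}: expected-value contribution ~ {contribution:.2e}")
```

Running this shows the contribution shrinking as the benefits grow, which is the sense in which astronomical benefits contribute negligibly to the expected value under such a distribution.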
Thanks, Michael. For readers' reference, CLR stands for Center on Long-Term Risk.