tl;dr: Tarsney seems to me to understate the likelihood that accounting for non-human animals would substantially affect the case for longtermism.
Tarsney includes a helpful appendix listing the simplifications made in his model/paper, and the rationales for these simplifications. Here’s a passage from that:
Simplification: The model ignores effects on the welfare of beings other than Homo sapiens and our “descendants”.
Rationale: (1) The sign and magnitude of the effects of paradigmatic longtermist interventions on the welfare of non-human animals (or their far-future counterparts) are very unclear. (2) Dropping this simplification seems unlikely to change our quantitative results by more than 1–2 orders of magnitude (though this is far from obvious), and so unlikely to affect our qualitative conclusions.
I appreciate Tarsney’s caveat that “this is far from obvious”, and, given that caveat, I don’t strongly disagree with this sentence. But it seems quite plausible to me[1] that considering those effects would strengthen or weaken the case for paradigmatic longtermist interventions by more than 1–2 orders of magnitude, or even that it would flip the sign of the expected value of those interventions.
Relatedly, I also think that considering those effects should plausibly change which longtermist interventions we support (not just whether we support them vs non-longtermist interventions).
(I’m not sure exactly how likely I think these things are, so maybe I actually agree with Tarsney that this “seems unlikely [but with that being far from obvious]”.)
See also Non-Humans and the Long-Term Future.
[1] We could operationalise “it seems quite plausible to me that X” as something like “there’s at least a 20% chance that I would think X if I spent another 100 hours of thinking about the topic”.