I think it’s a useful criterion and have upvoted it, though I have a couple of criticisms. The main one is that I’m concerned that EA has a heuristics addiction (ITN; extinction risk treated as though it were the only concern in existential risk; the prioritisation of Ivy League and other elite universities; the orthogonality thesis as an argument for the likelihood of bad AI outcomes; formulaic approaches to events; a general reliance on single research projects by a small team or individual to settle a difficult question). So while having useful tools is good in theory, in practice I worry that there’s little middle ground between ‘ignored entirely’ and ‘sees widespread adoption that encourages more lazy thinking’. I’m not sure what to do about this.
The other criticism is an object-level application of this concern:
> Larger animals might have more capacity for suffering, but intuitively they might have 10x more or 100x more at most… thus we should prioritize small animals.
I think there’s huge uncertainty in:

- How much more capacity for suffering large animals have (‘infinitely more’ isn’t out of scope for some comparisons)
- Flow-through effects of focusing on larger vs smaller animals (e.g. eating larger animals is much worse for the climate, and hence for the medium- and possibly long-term future; though smaller animals are maybe a bigger biorisk threat)
- Human health (red meat ≈ large animal meat, and is probably worse for health than white meat)
So if we understand ‘should prioritise’ as ‘should plan to do substantial research and not treat this as a settled question until we have far more data on such concerns, but start with smaller animals and perhaps lean towards small-animal-favouring decisions in a personal capacity’, then this seems reasonable. But if it’s meant to be a serious justification for anything more substantial, then it seems like an example of overapplying the research efforts of a small team—whose work, to be clear, I like, but which is nowhere near the last word on the subject.