Yeah, I think 80,000 Hours has been a bit careless here. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.
Whoops, yeah, I meant to say that GPI is good about this, but the transparency and precision get lost as the ideas spread. Fixed the confusing language in my original comment.
In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:
Yeah, this is another great example of EA lacking transparent reasoning. It's especially problematic since many people probably don't have the conceptual resources to identify the assumption or see how it relates to other EA ideas, so their response might just be a general aversion to EA.
This article is a bit older (2017), so maybe it's more forgivable, but their coverage of the asymmetry there is pretty bad.
As another piece of evidence: my university group is using an introductory fellowship syllabus recently developed by Oxford EA, and there are zero required readings on population ethics or on how different views there might affect cause prioritization. Instead, extinction risks are presented as overwhelmingly pressing.
FWIW, I’m skeptical of this, too. I’ve responded to that paper here, and have discussed some other concerns here.
Thanks, gonna check these out!