Yeah, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.
In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

“Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved.”
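Incidentally, the quoted numbers only come out if you assume a 100-year generation length, which the passage never states. Here’s a quick sketch reproducing them (the generation length is my inference, not a figure 80,000 Hours gives):

    # Reproducing the quoted 80,000 Hours figures. The 100-year
    # generation length is an assumption inferred from the numbers,
    # not something the passage states.
    years = 10_000_000                  # hypothetical civilisation lifespan
    generation_length = 100             # years per generation (assumed)
    p_survival = 0.05                   # chance civilisation lasts that long
    p_success = 0.55                    # chance the concerted effort works
    risk_reduction = 0.01               # 1 percentage point
    people_per_generation = 10 * 10**9  # ten billion people

    expected_generations = p_survival * (years / generation_length)
    print(expected_generations)         # 5000.0

    generations_saved = p_success * risk_reduction * expected_generations
    print(generations_saved)            # 27.5, which the quote rounds to 28

    lives_saved = generations_saved * people_per_generation
    print(f"{lives_saved:.3g}")         # 2.75e+11; rounding 27.5 to 28
                                        # first gives the quoted 280 billion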
But I’d guess only a minority of people understand “making sure someone comes to exist at all” as saving that person.
This article is a bit older (2017), so maybe it’s more forgivable, but their coverage of the asymmetry there is pretty bad. They say “it’s unclear why the asymmetry would exist”, but philosophers have put forward arguments for the asymmetry (e.g. Frick 2014, and I cite a few more here), and they cite precisely zero of them directly. Then they argue that the asymmetry has implausible implications for the nonidentity problem, but what they write doesn’t actually follow without further assumptions (e.g. the independence of irrelevant alternatives). Indeed, some of Teruji Thomas’s proposals avoid the problem, and at least this paper, discussed in this paper they cite on that page, avoids it too.
FWIW, I’m skeptical of this, too. I’ve responded to that paper here, and have discussed some other concerns here.
Whoops, yeah, I meant to say that GPI is good about this, but the transparency and precision get lost as ideas spread. Fixed the confusing language in my original comment.
Yeah, this is another really good example of how EA is lacking in transparent reasoning. This is especially problematic since many people probably don’t have the conceptual resources to identify the assumption or to see how it relates to other EA ideas, so their response might just be a general aversion to EA.
As another piece of evidence: my university group is using an introductory fellowship syllabus recently developed by Oxford EA, and there are zero required readings on population ethics or on how different views there might affect cause prioritization. Instead, extinction risks are presented as overwhelmingly pressing.
Thanks, gonna check these out!
Thanks for the specific examples. I hope 80,000 Hours’ staff, and anyone who took their passage on the asymmetry for granted, will consider your criticism too.