Stefan Schubert, Lucius Caviola & Nadira S. Faber, ‘The Psychology of Existential Risk: Moral Judgments about Human Extinction’, Scientific Reports 9: 15100 (2019). doi: 10.1038/s41598-019-50145-9. Abstract:
The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.
This appears to be a very different conclusion from the survey in which most people said that future generations matter just as much as the present generation (no requirement for utopia).
At The Unjournal (unjournal.org), we're assessing whether this research is high-impact enough to commission for evaluation. We prioritize research not based on its quality, credibility, etc.; that's the evaluators' role. Instead, we consider its potential for global impact; its current influence on funding, policy, and thinking; whether we see room for fruitful evaluation; and whether it fits our teams' wheelhouse and our field scope.
I'm giving a take here to get responses from the authors, from others in the field, and from stakeholders, and also to give anyone who reads this some insight into the things we consider when prioritizing research for evaluation (and to get feedback on that process).
Note: we've mainly covered research in economics and impact measurement. But we do have a 'psychology and attitudes' field specialist team, and I'd like us to be evaluating more research in this area (though there are some particular challenges I won't discuss here).
Below are some considerations after a quick skim; your feedback is welcome. As it's only a skim, some of my takes may be naive, and this is partly an exercise in red-teaming.
My impression of what the study finds
They "study how laypeople reason about human extinction." They use mostly Prolific (https://www.prolific.com/) samples and, with various frames, ask people to rank triads of outcomes like:
And then
I.e., people tend to state that the difference between B and A is much bigger than the difference between B and C.
I would interpret this as evidence that laypeople's quick takes/intuitions/gut attitudes are not total-population utilitarian, nor particularly extinction-averse.
Looking across frames, the 'gradient' is steeper (relatively more people find the C-B difference bigger than the B-A difference) for animal extinction vs. an animal crisis, and for sterilization of everyone vs. nearly everyone.
According to the authors (and this seems reasonable to me), this is because they
This is the "Utopia Condition":
So, they find extinction uniquely bad in cases where the comparison is framed in particular ways to highlight how uniquely bad it is? But isn't this a bit like leading the witness, or inviting agreeability bias? Are people in this condition really given reasonable alternative ways of considering the question? It seems a bit forced, although I suppose it tells you that people are at least not extremely resistant to this argument.
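To make the core elicitation and the between-subjects comparison concrete, here is a minimal sketch of the kind of analysis involved, with made-up condition names and counts rather than the paper's data (everything in the snippet is an illustrative assumption): for each framing, compute the share of respondents who judge the near-extinction-to-extinction gap (B to C) to be the larger one, then compare that share across two framings.

```python
# Sketch of the "which difference is greater?" outcome, with made-up
# counts (NOT the paper's data). For each between-subjects condition we
# compute the share of respondents who judged the B -> C gap (near-
# extinction vs. extinction) to be the larger one, then compare two
# conditions with a two-proportion z-test.
from math import sqrt, erf

# condition -> (respondents judging C-B as the bigger difference, total n)
# Illustrative numbers only.
counts = {
    "humans_baseline": (110, 450),       # hypothetical
    "humans_sterilization": (210, 450),  # hypothetical
}

def share(condition):
    x, n = counts[condition]
    return x / n

def two_prop_ztest(cond1, cond2):
    """Two-sided two-proportion z-test on the 'C-B is bigger' shares."""
    x1, n1 = counts[cond1]
    x2, n2 = counts[cond2]
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

if __name__ == "__main__":
    for cond in counts:
        print(f"{cond}: {share(cond):.2%} judge the B->C gap larger")
    z, p = two_prop_ztest("humans_baseline", "humans_sterilization")
    print(f"z = {z:.2f}, p = {p:.4f}")
```

The authors may well use a different test (e.g., a regression with covariates); the point is only that the headline outcome is a binary "which difference is greater?" judgment compared across framings.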
Could the study be impactful? Is it worth investing in commissioning an evaluation?
This study was funded by BERI, CEA, and others; impact-oriented funders thought it was worth investing in. This also suggests it may have the potential to influence their future thinking and choices.
To assess 'whether it could be impactful' for myself, I would want a better sense of the goal: how the authors and the people funding this study intended it to be used, or thought it could reasonably be used.
Is the goal here to:
1. Survey people’s attitudes to get at something normative; i.e., to understand the will of the people to be able to fulfill it?… Or is it
2. To understand what could sway people to support x-risk reduction initiatives and legislation to avoid extinction-level risks?
I'm not fully convinced that any particular empirical result here would have led to a meaningfully different conclusion, implication, or recommendation. To be clear, I'm not asserting that it wouldn't have; I just don't see it obviously, and I'm very willing and eager to be convinced.
Are the methods (hypothetical, fairly quick choices on a Prolific sample; the ranking and relative-difference elicitation; between-subject comparisons, I think) 'reasonable' enough to tell us something useful?
Are the Prolific samples (IMO the best we can do for things like these, but probably not representative of the US or UK “general public”) representative enough of the groups we care about for either goal 1 or 2 above?
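On that last point, one concrete check an evaluator (or the authors) could run is to reweight the sample to population margins and see how much the headline shares move. Below is a hedged sketch of simple raking (iterative proportional fitting) on two hypothetical margins; the respondent data, margins, and variable names are all made up for illustration, and nothing like this appears in the paper itself.

```python
# Hedged sketch of a representativeness check: rake survey weights to
# hypothetical population margins (age band x gender) and see how much
# the weighted share of "extinction is uniquely bad" responses moves.
# All data and margins below are made up for illustration.
from collections import defaultdict

# Hypothetical respondents: (age_band, gender, judged_extinction_uniquely_bad)
respondents = [
    ("18-34", "f", 1), ("18-34", "m", 0), ("18-34", "f", 0),
    ("35-54", "m", 1), ("35-54", "f", 0), ("55+", "m", 0),
    ("55+", "f", 1), ("18-34", "m", 1), ("35-54", "m", 0),
]

# Hypothetical population margins (each set of shares sums to 1).
pop_age = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
pop_gender = {"f": 0.51, "m": 0.49}

def rake(resp, n_iter=50):
    """Iterative proportional fitting of weights to the two margins."""
    w = [1.0] * len(resp)
    for _ in range(n_iter):
        for margin, idx in ((pop_age, 0), (pop_gender, 1)):
            totals = defaultdict(float)
            for wi, r in zip(w, resp):
                totals[r[idx]] += wi
            total_w = sum(w)
            w = [wi * margin[r[idx]] * total_w / totals[r[idx]]
                 for wi, r in zip(w, resp)]
    return w

def weighted_share(resp, w):
    return sum(wi * r[2] for wi, r in zip(w, resp)) / sum(w)

if __name__ == "__main__":
    raw = sum(r[2] for r in respondents) / len(respondents)
    w = rake(respondents)
    print(f"unweighted share: {raw:.2%}")
    print(f"raked share:      {weighted_share(respondents, w):.2%}")
```

If the raked and unweighted shares diverge a lot, that flags that sample composition matters for the headline result; if they barely move, the representativeness worry bites less, at least for the demographics one can reweight on.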