I was surprised to read this from Peter Singer, a thoroughgoing utilitarian whom I often see as a little extreme in how EA his beliefs are.
I don’t particularly agree with this conclusion:
> When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do.
It seems extremely unlikely to me that reducing global poverty is just as good at reducing existential risk as more targeted interventions, such as AI safety research. At the very least, Singer’s point requires significant elaboration on why he believes this to be the case. MichaelStJules writes more about this in his comment here.
Nevertheless, I found it valuable to see how Peter Singer views longtermism, which can provide a window into future public perceptions.
Yaroslav Elistratov writes more on Peter Singer’s thoughts on existential risk here.
I agree with your assessment. It is interesting to note that Singer’s comments are in response to Holden, who used to hold a similar view but no longer does (I believe).
The other part I found surprising was Singer’s comparison of longtermism with past harmful ideologies. At least in principle, I do think that, when evaluating moral views, we should take into consideration not only the contents of those views but also the consequences of publicizing them. But:
1. These two types of evaluation should be clearly distinguished and done separately, both for conceptual clarity and because they may require different responses. If the problem with a view is not that it is false but that it is dangerous, the appropriate response is probably not to reject the view, but instead to be strategic about how one discusses it publicly (e.g. give preference to less public contexts, frame the discussion in ways that reduce the view’s dangers, etc.).
2. As Richard Chappell pointed out recently, if one is going to consider the consequences of publicizing a view when evaluating it, one should also consider the consequences of publicizing objections to that view. And it seems like objections of the form “we should reject X because publicizing X will have bad consequences” have often had bad consequences historically.
3. The moral evaluation of the consequences expected to result from public discussion of a view should not beg the question against the view under consideration! Longtermists believe that people in the future, no matter how removed from us, are moral patients whom we should help. So in evaluating longtermism, one cannot ignore that, from a longtermist perspective, publicly demonizing this view (by comparing it to the Third Reich, Soviet communism, or white supremacy) will likely have very bad consequences, e.g. by making society less willing to help far-future people. (Note that this is very different from the usual arguments for utilitarianism being self-effacing: those arguments purport to establish that publicizing utilitarianism has bad consequences, as evaluated by utilitarianism itself. Here, by contrast, a non-longtermist moral standard is assumed when evaluating the consequences of publicizing longtermism.)
4. Picking reference classes is tricky. Perhaps it’s plausible to put longtermism in the reference class of “utopian ideology with considerable abuse potential”. But it also seems plausible to put longtermism in the reference class of “enlightened worldview that seeks to expand the circle of moral concern” (cf. Holden’s “Radical empathy”). In considering the consequences of publicizing longtermism, it seems objectionable to highlight one reference class, which suggests bad consequences, and ignore the other, which suggests good consequences.
Maybe the solution is to institutionalize a sustainable system that is positive for all, one that could be embraced by both Singer and Karnofsky. Possibly, Peter Singer emphasizes ‘making sure that the future is good for individuals,’ a thought that Holden Karnofsky seeks to provoke[1] in more of the individuals whose interest was originally captivated by high-tech solutions that benefit a few elites.
Holden Karnofsky describes the “appropriate reaction” to the most important century thesis as “… Oh … wow … I don’t know what to say and I somewhat want to vomit … I have to sit down and think about this one.”
> It seems extremely unlikely to me that reducing global poverty is just as good at …
Wealth inequality is an x-risk factor. See the HANDY (Human and Nature Dynamics) model:
https://www.sciencedirect.com/science/article/pii/S0921800914000615
https://arxiv.org/pdf/1908.02870.pdf