Response to Torres’ ‘The Case Against Longtermism’
This short post responds to some of the criticisms of longtermism in Torres’ minibook: Were the Great Tragedies of History “Mere Ripples”? The Case Against Longtermism, which I came across in this syllabus.
I argue that while many of the criticisms of Bostrom ring true, newer formulations of longtermism and existential risk – most prominently Ord’s The Precipice (but also Greaves, MacAskill, etc.) – do not face the same challenges. I split the criticisms into two sections: the first on problematic ethical assumptions or commitments, the second on problematic policy proposals.
Note that I both respect and disagree with all three authors. Torres’ piece is insightful and thought-provoking, as well as polemical; Ord’s book is a great restatement of the ethical case, though I disagree with his prioritisation of climate change, nuclear weapons and collapse; and Bostrom is a groundbreaking visionary, though one can dispute many of his views.
Problematic ethical assumptions or commitments
Torres argues that longtermism rests on assumptions and makes commitments that are problematic and unusual/niche. He is correct that Bostrom has a number of unusual ethical views, and in his early writing he was perhaps overly fond of a contrarian ‘even given these incredibly conservative assumptions the argument goes through’ framing. But Torres does not sufficiently appreciate that these limitations and constraints have largely been acknowledged by longtermist philosophers, who have (re)formulated longtermism so as not to require these assumptions and commitments.
Total utilitarianism
Torres suggests that longtermism is based on an ethical assumption of total utilitarianism, the view that we should maximise total wellbeing, adding together the wellbeing of all the individuals in a group. Such a ‘more is better’ ethical view accords significant weight to trillions of future individuals. He points out that total utilitarianism is not a majority opinion amongst moral philosophers.
However, although total utilitarianism strongly supports longtermism, longtermism doesn’t need to be based on total utilitarianism. One of the achievements of The Precipice is Ord’s arguments pointing out the affinities between longtermism and other ethical traditions, such as conservatism, obligations to the past, and virtue ethics. One can be committed to a range of ethical views and endorse longtermism.
Trillions of simulations on computronium
Torres suggests that the scales are tilted towards longtermism by including in the calculation quadrillions of simulations of individuals living flourishing lives. The view that such simulations would have moral status, or that this future is desirable, is certainly unusual.
But one doesn’t have to be committed to this view for the argument to work. The argument goes through if we assume that humanity never leaves Earth, and simply survives until the Earth is uninhabitable – or even more conservatively, survives the duration of an average mammalian species. There are still trillions of future individuals, whose interests and dignity matter.
‘Reducing risk from 0.001% to 0.0001% is not the same as saving thousands of lives’
Torres implies that longtermism is committed to a view of the form that reducing risk from 0.001% to 0.0001% is morally equivalent to saving e.g. thousands of present-day lives. This is a clear example of early Bostrom stating his argument in a philosophically robust, but very counterintuitive, way. Worries about this framing have been common for over a decade, in the debate over ‘Pascal’s Mugging’.
However, longtermism does not have to be stated in such a way. The probabilities are unfortunately likely higher – for example, Ord gives a 1/6 (~17%) probability of existential risk this century – and the reductions in risk are likely higher too. That is, with the right policies (e.g. robust arms control regimes) we could potentially reduce existential risk by 1-10%. Specifically on Pascal’s Mugging, a number of decision-theoretic responses have been proposed, which I will not discuss here.
Transhumanism and space settlement & ‘Not reaching technological maturity = existential risk’
Torres suggests that longtermism is committed to transhumanism and space settlement (in order to expand the number of future individuals), and argues that Bostrom bakes this commitment into existential risk through a negative definition of existential risk as any future that does not achieve technological maturity (through extinction, plateauing, etc.).
However, while Bostrom certainly does think this future is ethically desirable, longtermism is not committed to it. Torres underplays the crucial changes Ord makes with his definition of existential risk as the “destruction of humanity’s potential” and the institution of the “Long Reflection” to decide what we should do with this potential. Long Reflection proponents specifically propose not engaging in transhumanist enhancement or substantial space settlement before the Long Reflection is completed. Longtermism is not committed to any particular outcome from the Long Reflection. For example, if after the Long Reflection humanity decided to never become post-humans, and never leave Earth, this would not necessarily be viewed by longtermists as a destruction of humanity’s potential, but simply as one choice of how to spend that potential.
Problematic policy proposals
Torres argues that longtermists are required to endorse problematic policy proposals. I argue that they are not – I personally would not endorse these proposals.
‘Continue developing technology to reduce natural risk’
Torres argues that longtermists are committed to continued technological development for transhumanist/space settlement reasons – and to prevent natural risks – but that this is “nuts” because (as he fairly points out) longtermists themselves argue that natural risk is tiny compared to anthropogenic risk.
However, the more common longtermist policy proposal is differential technological development – to try to foster and speed up the development of risk-reducing (or more generally socially beneficial) technologies and to slow down the development of risk-increasing (or socially harmful) technologies. This is not a call to continue technological development in order to become post-humans or reduce asteroid/supervolcano risk – it is a call to differentially advance technology, assuming that overall technological development is hard/impossible to stop. I would agree with this assumption, but one may reasonably question it, especially when phrased as a form of strong ‘technological completism’ (any technology that can get invented will get invented).
Justifies surveillance
Torres argues against the “turnkey totalitarianism” (extensive and intrusive mass surveillance and control to prevent misuse of advanced technology) explored in Bostrom’s ‘Vulnerable World Hypothesis’, and implies that longtermism is committed to such a policy.
However, longtermism does not have to be committed to such a proposal. In particular, one can simply object that Bostrom has a mistaken threat model. The existential risks we have faced so far (nuclear and biological weapons, climate change) have largely come from state militaries and large companies, and the existential risks we may soon face (from new biotechnologies and transformative AI) will also come from the same threat sources. The focus of existential risk prevention should therefore be on states and companies. Risks from individuals and small groups are relatively much smaller, so the small benefits from the kind of mass surveillance Bostrom explores mean that it is not justified by a cost-benefit analysis.
Nevertheless, in the contrived hypothetical of ‘anyone with a microwave could have a nuclear weapon’, would longtermism be committed to restrictions on liberty? I address this under the next heading.
Justifies mass murder
Torres argues that longtermists would have to be willing to commit horrendous acts (e.g. destroy Germany with nuclear weapons) if it would prevent extinction.
This is a classic objection to all forms of consequentialism and utilitarianism – from the Trolley Problem to the Colosseum objection. There are many classic responses, ranging from disputing the hypothetical to pointing out that other ethical views are also committed to such an action.
It is not a unique objection to longtermism, and loses some of its force as longtermism does not have to be based on utilitarianism (as I said above). I would also point out that it is an odd accusation to level, as longtermism places such high priority on peace, disarmament and avoiding catastrophes.
Justifies giving money to the rich rather than the extreme poor, which is a form of white supremacy
Torres suggests that longtermism is committed to donating to the rich rather than to those in extreme poverty (or indeed animals). He further argues that this reinforces “racial subordination and maintain[s] a normalized White privilege.”
However, longtermism is not committed to donating (much less transferring wealth from poor countries) to present rich people. Longtermists might in practice donate to NGOs or scientists in the developed world, but the ultimate beneficiaries are future generations. Indeed, the same might be true of other cause areas, e.g. work on a malaria vaccine or clean meat. Torres does not seem to accord much weight to how much longtermists recognise this as a moral dilemma and feel very conflicted – most longtermists began as committed to ending the moral crimes of extreme poverty, or of factory farming. There are many huge tragedies, but one must unfortunately choose where to spend one’s limited time and resources.
Longtermism is committed to the view that future generations matter morally. They are moral equals. When someone is born is a morally irrelevant fact, like their race, gender, nationality or sexuality. Furthermore, present people are in an unjust, exploitative power imbalance with future generations. Future generations have no voice or vote in our political and economic systems. They can do nothing to affect us. Our current political and economic systems are set up to overwhelmingly benefit those currently alive, often at the cost of exploiting, and loading costs onto, future generations.
This lack of recognition of moral equality, lack of representation, power imbalance and exploitation shares many characteristics with white supremacy/racism/colonialism and other unjust power structures. It is ironic to accuse a movement arguing on behalf of the voiceless of being a form of white supremacy.
It is very generous to characterise Torres’ post as insightful and thought-provoking. He characterises various long-termists as white supremacists on the flimsiest grounds imaginable. This is a very serious accusation and one that he very obviously throws around due to his own personal vendettas against certain people. E.g. despite many of his former colleagues at CSER also being long-termists, he doesn’t call them Nazis, because he doesn’t believe they have slighted him. Because I made the mistake of once criticising him, he spent much of the last two years calling me a white supremacist, even though the piece of mine he cited did not even avow belief in long-termism.
A quick point of clarification that Phil Torres was never staff at CSER; he was a visitor for a couple of months a few years ago. He has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not). (And FWIW he has made similar allusions, albeit thinly veiled, about me).
I’m really sorry to hear that from both of you, I agree it’s a serious accusation.
For longtermism as a whole, as I argued in the post, I don’t understand describing it as white supremacy—like e.g. antiracism or feminism, longtermism is opposed to an unjust power structure.
If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it, e.g. when he started casually throwing around rape allegations about celebrities on Facebook and tagging me in the comments, and then calling me and others Nazis. Why do you and your colleagues continue to extensively collaborate with him?
To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him.
[disclaimer: I am co-Director at CSER. While much of what I will write intersects with professional responsibilities, it is primarily written from a personal perspective, as this is a deeply personal matter for me. Apologies in advance if that’s confusing, this is a distressing and difficult topic for me, and I may come back and edit. I may also delete my comment, for professional or personal/emotional reasons].
I am sympathetic to Halstead’s position here, and feel I need to write my own perspective. Clearly to the extent that CSER has—whether directly or indirectly—served to legitimise such attacks by Torres on colleagues in the field, I bear a portion of responsibility as someone in a leadership position. I do not feel it would be right or appropriate for me to speak for all colleagues, but I would like to emphasise that individually I do not, in any way, condone this conduct, and I apologise for it, and for any failings on my individual part that may have contributed.
My personal impression supports the case Halstead makes. Comments about my ‘whiteness’, and insinuations regarding my ‘real’ reasons for objecting to positions taken by Torres, only came after I objected publicly to Torres’s characterisations of Halstead, Olle Häggström, Nick Beckstead, Toby Ord and others. I have been informed by Torres that I owe him an apology for not siding with him [edit: to emphasise, this is my personal subjective impression/interpretation based on communications with me].
As well as the personal motivation, this mode of engagement reflects another aspect of this discourse I find deeply troubling: while I think there are valid arguments against longtermism, and alternative perspectives, it becomes impossible to discuss the issues, and in particular, the unfair characterisation of individuals, on the object level. Object level disagreement is met with an insinuation that this is the white supremacists closing ranks. I do believe there is a valid argument in some cases that one can be unaware of biases, and one can be unconsciously influenced by the ‘background radiation’ of a privileged society. Personally I have experienced this in unconscious, and sometimes deliberate, racism experienced as an Irish person living in Britain, and I have no doubt that non-white people have it much worse. However, this principle can also most certainly be overused uncharitably, or even ‘weaponised’ to shut down constructive intellectual engagement. And it is profoundly anti-intellectual to permit only those from outside a system of privilege to challenge scholarship.
There are other rhetorical moves I find deeply troubling. The common use of ‘white supremacy’ is something like “people who believe that white people are superior to other races and should dominate them, and are willing to act on that through violent means”. Torres has typically not defined the term, but when challenged, he has explained that he is using it in the narrower way it is used in critical race theory: “white people benefiting from and maintaining a system where the legacy of colonial privilege is maintained”. (Note that he does define it in the mini-book, although as the ‘academic’ definition, which I think is an overstatement.) When challenged, Torres insults people for not automatically knowing he is using the more esoteric CRT definition rather than the common-use definition. This is not a reasonable position to take. And it is not reasonable to expect people not to be deeply hurt and offended by the language used.
Even accounting for the CRT definition, this is still an extremely serious and harmful accusation, and one that should not be made without extremely careful consideration and very strong evidence. In my own case, as someone from a culture overwhelmingly defined by the harms of colonialism, it is another way of shutting down any possible discussion; it is so violently upsetting that it renders me incapable of continuing to engage.
To the extent that scholars at CSER are still collaborating with Torres: I am not. I have spoken regarding my concerns to those who have let me know they are still collaborating with him, and have let them make their own choices. Most collaborations are the legacy of projects initiated during his visit 2 years ago (which I authorised, not knowing some of the more serious issues Halstead raises, but being aware of some more minor concerns). Papers take a long time to go through the academic system, and it would be a very unusual and hostile step to e.g. take an author’s name off a paper against their wishes. In some instances, people wished to engage with some aspects of Torres’ critique and collaborate with presenting them in a more constructive and less polemical way (e.g. see several examples of Beard+Torres). I have respected their choices. This may not be the case with all collaborations; at CSER’s current size I am not always aware of every paper being written. But I think it is fair to say my views on this style of engagement are well-known.
I have not taken the step of banning colleagues at CSER from collaborating with Torres. This would be an extremely unusual step in academia, running contrary to some fundamental principles of academic freedom. Further, I am concerned that such steps would reinforce another set of attack lines: Torres has already publicly claimed that he ‘has no doubt’ that employees at CSER that disagreed with me would be fired for it. I value having scope for intellectual disagreement greatly, and I would not want this perspective to take hold.
I do not claim that my decisions have been correct.
I do think there is significant value in engaging with critics. I admire engagement of the sort that Haydn has just undertaken. As a committed longtermist, to ‘turn the other cheek’ and engage in good faith with a steelmanned, charitable interpretation of a polemical and hostile document is something I find admirable in itself. And as noted elsewhere in this discussion, enough people have found some value in the challenge Torres has presented to ideas within longtermism (even where presented uncharitably) that it seems reasonable for some to engage with it. However at the same time, I do worry that beyond some point, engaging so charitably may legitimise a mode of discourse that I find distressingly hostile and inimical to kind, constructive, and open discourse.
These are challenging, and sometimes controversial topics. There will very often be issues on which reasonable people will disagree. There will sometimes be positions taken that others will be profoundly uncomfortable with. This is not unique to Xrisk or longtermism; the same is true of global development and animal rights. I believe it is of paramount importance that we be able to interact with each other as thinkers and doers in a kind, constructive and charitable way; and above all to adopt these principles when we critique each other. After all, when we are wrong, this is nearly always the most effective way to change minds. While not everyone will agree with me on this, this is the view I have always put forward in the centres I have been a part of.
Addendum: There’s a saying that “no matter what side of an argument you’re on, you’ll always find someone on your side who you wish was on the other side”.
There is a seam running through Torres’s work that challenges xrisk/longtermism/EA on the grounds of the limitations of being led and formulated by a mostly elite, developed-world community.
Like many people in longtermism/xrisk, I think there is a valid concern here. xrisk/longtermism/EA all started in a combination of elite British universities and US communities (e.g. the Bay Area). They had to start somewhere. I am of the view that they shouldn’t stay that way.
I think it’s valid to ask whether there are assumptions embedded within these frameworks at this stage that should be challenged, and to posit that these would be challenged most effectively by people with a very different background and perspective. I think it’s valid to argue that thinking, planning for, and efforts to shape the long-term future should not be driven by a community that is overwhelmingly from one particular background and that doesn’t draw on and incorporate the perspectives of a community that reflects more of global societies and cultures. Work by such a community would likely miss important values and considerations, might reflect founder-effect biases, and would lack legitimacy and buy-in when it came to implementation. I think it’s valid to expect it to engage with frameworks beyond utilitarianism, and I’m pleased to see GPI, The Precipice, amongst others do this.
As both xrisk and longtermism grow and mature, a core part of the project should be, in my view, and likely will be, expanding beyond this starting point. Such efforts are underway. They take a long time. And I would like to see people, both internal and external to the community, challenge the community on this where needed.
However, for someone on this side of the argument, I am deeply frustrated by Torres’s approach. It salts the earth for engagement with people who disagree with this view and actively works against finding common ground. It alienates people from diverse backgrounds outside xrisk/longtermism from engaging with xrisk/longtermism, and thus makes the project harder. And it strengthens the views of those who disagree with the case I’ve put, especially when they perceive those they disagree with acting in bad faith. The book ends with the claim “More than anything, I want this mini-book to help rehabilitate “longtermism,” and hence Existential Risk Studies.” I do not believe this hostile, polemical approach serves that aim; rather I worry that it is undermining it.
I completely agree with all of this, and am glad you laid it out so clearly.
Seconded.
I just wanted to say that this is a beautiful comment. Thank you for sharing your perspective in such an elegant, careful and nuanced manner.
Again, Sean, more intellectual dishonesty: “I have been informed by Torres that I owe him an apology for not siding with him.” I’m tempted to take screenshots and share them here. These are lies.
I am trying to stay calm, but I am honestly pretty f*cking upset that you repeatedly lie in your comments above, Sean. See here for a screenshot: https://c8df8822-f112-4676-8332-ffffad89713358e3.filesusr.com/ugd/d9aaad_5494c7f6e8034730afb01cdbc9bd5a62.pdf. I won’t include your response, Sean, because I’m not a jerk like you.
The link above has an additional ”.” at the end that prevents it from properly working.
(Sorry for cursing. The dishonest rancor of Sean is just pretty hard to deal with.)
I don’t have any comment to make about Torres or his motives (I think I was in a room with him once). However, as a more general point, I think it can still make sense to engage with someone’s arguments, whatever their motivation, at least if there are other people who take them seriously. I also don’t have a view on whether others in the longtermism/X-risk world do take Torres’s concern seriously, it’s not really my patch.
“He has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not).” No, I haven’t, Sean, and you know this from our personal exchanges. I forgot to change the CSER affiliation on FB—and only FB—for a few months after leaving. As soon as you pointed it out, I changed it immediately. Your intellectual dishonesty here is really upsetting.
I don’t know how to embed snapshots, but anyone who wishes is welcome to type “phil torres” into LinkedIn or email me for the snapshots I’ve just taken right now—it brings up “Researcher at Centre for the Study of Existential Risk, University of Cambridge”. As I say, it’s unclear if this is deliberate—it may well be an oversight, but it has contributed to the mistaken external impression that Phil Torres is or was research staff at CSER.
That I didn’t know about, Sean, nor did you mention it. If you look at my profile, it hasn’t been updated in years. (It says that I still write for Motherboard and live in Carrboro, which haven’t been the case for years.)
You repeatedly lied in your comments above. Unprofessional. I don’t know how you can keep your job while lying about a colleague like that. I will delete the LinkedIn profile immediately. I honestly didn’t even remember that I had it. Had you mentioned it earlier, of course I would have done so.
Your malicious behavior here is unacceptable. I have been nothing but willing to apologize, concede points, reconsider ideas, and change my views in response to you. When you’ve been rude and hurtful to me, and I’ve asked for an apology, you’ve refused.
[Apologies for getting mad. But the truth is, being lied about is upsetting, and as a human being, it would be odd if I weren’t hurt.]
Thank you.
Haydn, Michael Plant, etc. etc. I am happy to release screenshots of everything to show that Sean is lying. Over and over again, above, he lies. Here is proof of his lie about me “misrepresenting [myself] as working at CSER on various media (unclear if deliberate or not).” I absolutely did no such thing! The only medium this was an issue on was FB, and I corrected it immediately (although there was some delay, for reasons I don’t understand) with an explicit apology (because, as I say in the screenshot from 2019, I genuinely, honestly didn’t realize that it still said “works at”). Indeed, throughout our exchanges, I am repeatedly open and receptive to criticisms, constantly hedging, frequently apologizing, while Sean is, well, not exactly the interlocutor I’d hoped for. Ask me about any of his silly, hurtful accusations above and I’ll address them with verifiable evidence. What is wrong with this community? https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_d37202b3a9014315ba15d1220421d682.pdf (Check timestamps, please. I think one screenshot is out of order—apologies for that.)
How can someone lie this much about a colleague and still have a job?
Despite disagreeing with most of it, including but not limited to the things highlighted in this post, I think that Torres’s post is fairly characterised as thought-provoking. I’m glad Joshua included it in the syllabus, also glad he caveated its inclusion, and think this response by Haydn is useful.
I haven’t interacted with Phil much at all, so this is a comment purely on the essay, and not a defense of other claims he’s made or how he’s interacted with you.
edit in 2022, as this comment is still occasionally receiving votes:
I stand by the above, but having read several other pieces since, displaying increasing levels of bad faith, I’m increasingly sympathetic to those who would rather not engage with it.
I second most of what Alex says here. Like him, I only know about this particular essay from Torres, so I will limit my comments to that.
Notwithstanding my own objections to its tone and arguments, this essay did provoke important thoughts for me – as well as for other committed longtermists with whom I shared it – and that was why I ultimately ended up including it on the syllabus. The fact that, within 48 hours, someone put in enough effort to write a detailed forum post about the substance of the essay suggests that it can, in fact, provoke the kinds of discussions about important subjects that I was hoping to see.
Indeed, it is exactly because I think the presentation in this essay leaves something to be desired that I would love to see more community discussion on some of these critiques of longtermism, so that their strongest possible versions can be evaluated. I realise I haven’t actually specified which among the essay’s many arguments I find interesting, so I hope I will find time to do that at some point, whether in this thread or a separate post.
I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like ‘white supremacy’ and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead and others—are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.
(To be clear: I think the syllabus is otherwise great, and kudos for creating it!)
EDIT: See Seán’s comment for further elaboration on points (1) and (2) above.
Genuine question: if someone has views that are widely considered repugnant (in this case that longtermists are white supremacists) but otherwise raises points that some people find interesting and thought-provoking, should we:
A) Strongly condemn the repugnant ideas whilst genuinely engaging with the other ideas
B) Ignore the person completely / cancel them
If the person is clearly trolling or not writing in good faith then I’d imagine B) is the best response, but if Torres is in fact trolling then I find it surprising that some people find some of his ideas interesting / thought-provoking.
(Just to reiterate: this is a genuine question. I’m not stating a view one way or the other, and I also haven’t read Torres’ post.)
In this case, I would say it’s not the mere fact that they hold views widely considered repugnant, but the conjunction of that fact with decisive evidence of intellectual dishonesty (that some people found his writings thought provoking isn’t necessarily in tension with the existence of this evidence). Even then you probably could conceive of scenarios where the points raised are so insightful that one should still engage with the author, but I think it’s pretty clear this isn’t one of those cases.
The last time I tried to isolate the variable of intellectual dishonesty using a non-culture war example on this forum (in this case using fairly non-controversial (to EAs) examples of intellectual dishonesty, and with academic figures that I at least don’t think are unusually insightful by EA lights), commentators appeared to be against the within-EA cancellation of them, and instead opted for a position more like:
This appears broadly analogous to how jtm presented Torres’ book in his syllabus. Now of course a) there are nontrivial framing effects, so perhaps people might like to revise their conclusions in my comment, and b) you might have alternative reasons not to cite Torres in certain situations (e.g. a very high standard for quality of argument, or deciding that personal attacks on fellow movement members are verboten), but at least the triplet-conjunction presented in your comment (bad opinions + intellectual dishonesty + lack of extraordinary insight) did not, at the time, seem to be sufficient criteria in the relatively depoliticized examples I cited.
Alongside our ban announcement for Phil, I’m issuing a warning for this comment and this other comment, both of which made strong negative claims about Phil without furnishing any evidence or examples. While some people on the thread presumably knew what they were referring to, it’s hard for public discussions to go well when comments like this don’t include more context.
However, when I discussed the negative claims with Halstead, he provided me with evidence that they were broadly correct — the warning only concerns the way the claims were presented. While it’s still important to back up negative claims about other people when you post them, it does matter whether or not those claims can be reasonably backed up.
I’m pretty surprised and disappointed by this warning. I made 3 claims about ways that Phil has interacted with me.
I didn’t share the facebook messages because I thought it would be a breach of privacy to share a private message thread without Phil’s permission, and I don’t want to talk to him, so I can’t get his permission.
I also don’t especially want to link to the piece calling me a racist, which anyone familiar with Phil’s output would already know about, in any case.
There is a reason I didn’t share the screenshot of the paedophilia/rape accusations, which is that I thought it would be totally unfair to the people accused. This is why I called them ‘celebrities’ rather vaguely.
As you say, I have shown all of these claims to be true in private in any case.
This feels a lot like punishing someone for having the guts to call out a vindictive individual in the grip of a lifelong persecution complex. As illustrated by the upvotes on my comments, lots of people agree with me, but didn’t want to say anything, for whatever reason. If you were going to offer any sanction for anyone, I would have thought it would be the people at CSER, such as Simon Beard and Luke Kemp, who have kept collaborating with him and endorsing his work for the last few years, despite knowing about the behaviour that you have just banned him for.
I appreciate that these kinds of moderation decisions can be difficult, but I also don’t agree with the warning to Halstead. And if it is to be given, then I am uncomfortable that Halstead has been singled out—it would seem consistent to apply the same warning to me, as I supported Halstead’s claims, and added my own, both without providing evidence.
With regard to the people mentioned, neither are forum regulars, and my understanding is that neither have plans for continued collaborations with Phil.
Simon Beard is providing the foreword for his forthcoming book, and Luke Kemp has provided a supporting quote for it.
(As with other comments in this thread, I’m responding as an individual moderator rather than as a voice of the moderation team.)
Thank you for sharing this comment. While I read your comment closely when considering a warning to Halstead, I don’t think it encounters the same problems:
Regarding your support for Halstead’s claims — I think the original claimant should try very hard to present evidence, but I don’t think the same burden falls on people who support them (in part because they might not have evidence of their own).
Regarding your own claims: While your comments had some unsupported accusations, many of the accusations did have support, and most of what you wrote was a discussion of Phil’s writing rather than his actions or character (making it easier for someone to verify). To the extent that you violated the norm of providing evidence for accusations, you violated it to a lesser degree than Halstead — the accusations were less severe, and weren’t essential to the overall message of your comments.
That said, I don’t think it was fair to only “warn” Halstead — looking back, I think the ideal response might have been to reply to the ban announcement (or write a separate post) reminding people to try to avoid making accusations without evidence, and pointing to examples from multiple users. Our goal was to reinforce a norm, not to punish anyone.
(Since I drafted the original message, and it was only reviewed and approved by other moderators, I’ll use “I” in some parts of this thread.)
I owe you an apology for a lack of clarity in this message, and for not discussing my concerns with you in private before posting it (given that we’d already been discussing other aspects of the situation).
“Warning” was the wrong word to use. The thing we were trying to convey wasn’t “this is the kind of content that could easily lead to a ban”, but instead “this goes against a norm we want to promote on the Forum, and we think this was avoidable”. There were much better ways to express the latter.
You’ve been an excellent contributor to this site (e.g. winning two Forum prizes). We didn’t intend the comment to feel like a punishment, but the end result clearly didn’t match our intentions. It’s understandable that you kept your comments brief and vague — particularly the one addressed to Sean, who presumably had the necessary context to understand it.
I also should have said that I think the discussion resulting from your comment was really valuable, and I’m glad you wrote it — I prefer the norm-skirting version of your comment to the comment not existing.
But I still think the norm is important, and I also think that comments like yours are more likely to have good consequences when they include more evidence.
*****
While most voters obviously endorsed your statement, I got a concerned message from one user (good Forum contributor, fairly new to the community) who was confused about the situation and didn’t know what had happened. They didn’t understand why Phil was being attacked for using language similar to the people who were attacking him, or why his claim that you had lied was being downvoted with no responses to disprove it.
When I wrote the comment, I was trying to keep that person in mind, and others like them — both by encouraging the use of evidence, and by clarifying from a neutral perspective that your claims actually had more backing than Phil’s.
*****
On your points 1-3:
I’m not sure which part of your comments this maps to, but I assume the Facebook messages in question were insults from Phil. I agree it’s reasonable not to share those.
I understand your reluctance to link the piece, but in this case, it’s public writing that has been widely shared (and heavily criticized as unfair and misleading). I think that sharing it would have made the conversation easier to follow and validated your claims against Phil — in particular, by showing that his denial at the end of this comment was wrong.
That reasoning makes sense. There may have been a way to show what happened without compromising the people accused (e.g. sharing a screenshot with names blotted out), but the post being on someone’s Facebook wall (and presumably not publicly viewable) could still make that dicey.
I don’t mean to argue that every Forum comment needs to have as much evidence as possible. But when a personal accusation is at stake, and can’t easily be verified by an outside reader, I do think it’s important to provide at least some backing — at least if there’s a quick way to do so without violating someone’s privacy (e.g. linking to a public paper).
One complication in this situation is that Phil doesn’t have a good reputation among the Forum’s users, some of whom have had unpleasant personal interactions with him (myself included, several times over). But I don’t want our norms about personal accusations to depend on how popular or pleasant the targets are. If you were accusing me of calling you a Nazi, I’d hope you would link to evidence, and I want the same standard to hold for Phil.
As we said in our ban announcement, Phil was banned for his behavior on the Forum. It’s not impossible that someone’s conduct outside the Forum might lead us to ban them, but that would require much more evidence. And I find it hard to imagine a realistic scenario wherein we’d also sanction that person’s academic collaborators just for working with them.
*****
To close off this reply, I want to reiterate that I’m sorry for the message. I could have handled this better, and I understand your frustration. But while I wish I’d expressed my concerns differently, I still think that the norm of making at least a small effort to back up personal accusations with evidence is an important one.
Hi Aaron, I appreciate this and understand the thought process behind the decision. I do generally agree that it is important to provide evidence for this kind of thing, but there were reasons not to do so in this case, which made it a bit unusual.
I think the EA community (and rationality community) is systematically at risk of being too charitable. I don’t have a citation for that, but my impression is very much that this has been pointed out repeatedly in the instances where there was community discussion of problematic behavior by people who seemed interpersonally incorrigible. I think it’s really unwise, and has bad consequences, to continue repeating that mistake.
While I mostly agree with you in general (e.g. Gleb Tsipursky getting too many second chances), I’m not quite sure what you’re trying to say in this case.
Do you think that the moderators were too charitable toward Phil? He was banned from the Forum for a year, and we tried to make it clear that his comments were rude and unacceptable. Before that thread, his comments were generally unremarkable, with the exception of one bitter exchange of the type that happens once in a while for many different users. And I’m loath to issue Forum-based consequences for someone’s interpersonal behavior outside the Forum unless it’s a truly exceptional circumstance.
*****
To the extent that someone’s problematic interpersonal behavior is being discussed on the Forum, I still believe we should try to actually show evidence. Many Forum readers are new to the community, or otherwise aren’t privy to drama within the field of longtermist research. If someone wants to warn the entire community that someone is behaving badly, the most effective warnings will include evidence. (Though as I said in my reply to Halstead’s reply, his comment was still clearly valuable overall.)
Imagine showing a random person from outside the EA community* (say, someone familiar with Twitter) this comment and this comment, as well as the karma scores. That person might conclude “Halstead was right and Phil was wrong”. They might also conclude “Halstead is a popular member of the ingroup and Phil is getting cancelled for wrongthink”.
To many of us inside the community, it’s obvious that the first conclusion is more accurate. But the second thing happens all the time, and a good way to prove that we’re not in the “cancelled for wrongthink” universe is to have a strong norm that negative claims come with evidence.
*This isn’t to say that all moderation should necessarily pass the “would make sense to a random Twitter user” test. But I think it’s a useful test to run in this case.
No, I didn’t mean to voice an opinion on that part. (And the moderation decision seemed reasonable to me.)
My comment was prompted by the concern that giving a warning to Halstead (for not providing more evidence) risks making it difficult for people to voice concerns in the future. My impression is that it’s already difficult enough to voice negative opinions on others’ character. Specifically, I think there’s an effect where, if you voice a negative opinion and aren’t extremely skilled at playing the game of being highly balanced, polite and charitable (e.g., some other people’s comments in the discussion strike me as almost superhumanly balanced and considerate), you’ll offend the parts of the EA Forum audience that implicitly consider being charitable to the accused a much more fundamental virtue than protecting other individuals (the potential victims of bad behavior) and the community at large. (Problematic individuals, in my view, tend to create a “distortion field” around them that can have negative, norm-eroding consequences in various ways – though that was probably much more the case with other community drama than here, given that Phil wrote articles mostly at the periphery of the community.)
Of course, these potential drawbacks I mention only count in worlds where the concerns raised are in fact accurate. The only way to get to the bottom of things is indeed with truth-tracking norms, and being charitable (edit: and thorough) is important for that.
I just feel that the demands for evidence shouldn’t be too strong or absolute, partly also because there are instances where it’s difficult to verbalize why exactly someone’s behavior seems unacceptable (even though it may be really obvious to people who are closely familiar with the situation that it is).
Lastly, I think it’s particularly bad to disincentivize people for how they framed things in instances where they turned out to be right. (It’s different if there was a lot of uncertainty as to whether Halstead had valid concerns, or whether he was just pursuing a personal vendetta against someone.)
Of course, these situations are really, really tricky, and I don’t envy the forum moderators for having to navigate the waters.
True, but that also means that the right incentives are already there. If someone doesn’t provide the evidence, it could be that they find that it’s hard to articulate, that there are privacy concerns, or that the person doesn’t have the mental energy at the time to polish their evidence and reasoning, but feels strongly enough that they’d like to speak up with a shorter comment. Issuing a warning discourages all those options. All else equal, providing clear evidence is certainly best. But I wouldn’t want to risk missing out on the relevant info that community veterans (whose reputation is automatically on the line when they voice a strong concern) have a negative opinion for one reason or another.
It is very unusual to issue a moderation warning for a comment at +143 karma, the second most upvoted comment on the entire page, for undermining public discussion. Creating public knowledge about hostile behaviour can be a very useful service, and I think a lot of people would agree that is the case here. Indeed, since this thread was created I have seen it productively referenced elsewhere as evidence on an important matter.
Furthermore, failing to show screenshots, private emails etc. can be an admirable display of restraint. I do not think we want to encourage people to go around leaking private communication all the time.
(As with other comments in this thread, I’m responding as an individual moderator rather than as a voice of the moderation team.)
On the one hand — yes, certainly unusual, and one could reasonably interpret karma as demonstrating that many people thought a comment was valuable for public discussion.
However, I am exceedingly wary of changing the way moderation works based on a comment’s karma score, particularly when the moderation is the “reminder of our norms” kind rather than the “you’re banned” kind. (And almost all of our moderation is the former; we’ve issued exactly two bans since the new Forum launched in 2018, other than for spammers.)
While some users contribute more value to Forum discussion than others, and karma can be a signal of this, I associate the pattern of “giving ‘valued’ users more leeway to bend rules/norms” with many bad consequences in many different settings.
I agree with both statements, but I also think that providing a bit more evidence can move a comment from “a lot of people agree, because they trust the author/have access to non-public information” to “everyone agrees, because they can see the evidence”.
As I noted in my reply to Halstead, some users don’t have the inside knowledge required to verify unsupported claims, and I don’t want those people getting left out of public discussions because e.g. they didn’t see a certain Facebook thread.
(If someone claims hostile behavior occurred, but doesn’t show evidence, does that actually “create public knowledge” of the behavior itself? It might help some people connect the dots, but for many people, all they see is a claim.)
I agree that leaking private communication is a behavior to discourage in most cases. And I agree with Halstead that at least one of his claims (maybe two) would have been difficult to provide evidence for without disclosing private information. However, another claim was based on an academic paper shared widely in EA spaces, and not linking to the paper seems more confusing than helpful (though as I say in my reply to Halstead’s reply, his comment was still clearly valuable overall).
Substantiated true claims are the best, but sometimes merely stating important true facts can also be a public service...
This violates all the comment guidelines.
“He characterises various long-termists as white supremacists on the flimsiest grounds imaginable.” I would encourage you to contact, well, quite literally anyone who studies “white supremacy.” This is precisely what I did BEFORE making the criticisms I made. Literally every single scholar I spoke with—including some at Princeton—was shocked and appalled by that quote from Nick Beckstead, as well as some other quotes I provided to them (in context, of course). The “white supremacy” claim is not mine, John. I’m just relaying what anyone who studies the issue will tell you, if you were sufficiently curious to contact the relevant scholars. Furthermore, I have never once called you a “white supremacist.” That is an egregious and defamatory lie that you should take back immediately (or you should provide, for all to see, evidence to the contrary).
So funny to me that this has “-14” right now. What are people downvoting—scholarship? Me having consulted relevant experts? Does anyone want to explain?
I didn’t downvote this comment, but
a) This may not have been your intention, but even in context, the “white supremacy” claim in the e-book does read as your claim
b) I don’t think “poorer countries should transfer their wealth to richer countries” supports “a political, economic and cultural system in which whites overwhelmingly control power and material resources”. The richest countries include many countries that aren’t majority white, such as Singapore, Qatar, UAE, Taiwan, etc., so I don’t think the ‘overwhelmingly’ criterion is met here.
c) I’m of the opinion that people should refrain from ever using terms “in a legal scholarly sense”; instead they should either use the term in its usual sense or create a new term with a more specific definition.
That being said, I think a charitable reading of your e-book makes it seem like you are describing certain conclusions of longtermism as supporting ‘white supremacy’, and that you are using the term in a ‘legal scholarly sense’ and defining it as “a political, economic and cultural system in which whites overwhelmingly control power and material resources”. I don’t know if you have made this claim elsewhere, but it did not seem like your e-book claims that “longtermists are white supremacists”.
As the Forum’s lead moderator, I’m posting this message, but it was written collaboratively by several moderators after a long discussion.
As a result of several comments on this post, as well as a pattern of antagonistic behavior, Phil Torres has been banned from the EA Forum for one year.
Our rules say that we discourage, and may delete, “unnecessary rudeness or offensiveness” and “behavior that interferes with good discourse”. Calling someone a jerk and swearing at them is unnecessarily rude, and interferes with good discourse.
Phil also repeatedly accuses Sean of lying.
After having seen the material shared by Phil and Sean (who sent us some additional material he didn’t want shared on the Forum), we think the claims in question are open to interpretation but clearly not deliberate lies.
For example, Sean said that Phil “has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not).” It’s evident from screenshots that Phil did list himself on Facebook and LinkedIn as working at CSER after he was no longer there. This is the kind of mistake that’s easy to make, but repeatedly saying someone has lied by pointing out the mistake is another example of unnecessary rudeness.
Of course, it’s understandable to have strong feelings if you believe someone is lying about you, but we expect Forum users to express strong feelings in a more productive way (“I think you’re mistaken about that, and here’s why”). Phil is sometimes more courteous, but we feel that his comments often fail to represent the culture we want to see on the Forum.
This ban is not related to Phil’s academic work. We appreciate having well-informed critics on the Forum; even criticism which seems overly harsh, or somewhat off-target, can generate good discussion (e.g. this post and this response to it). For another example, see this defense of some of Phil’s views.
*****
We encourage people to alert us to any other instances of name-calling, swearing at people, or unsubstantiated personal accusations. We aim to apply these rules consistently and proportionately to the frequency/extent of their violation.
In several milder cases, we’ve messaged people with private warnings; because this case led to a ban, we’re sharing this comment publicly. And on this post, I’ve issued a warning to Halstead for accusations that he hadn’t substantiated at the time he posted them, though he later shared satisfactory evidence with me.
People are still welcome to cross-post Phil’s work, quote him, argue for his points, and all the rest — but he won’t be permitted to post here himself until 12 May 2022.
[This comment is a tangential and clarifying question; I haven’t yet read your post]
If I didn’t know anything about you, I’d assume this meant “Toby Ord suggests climate change, nuclear weapons, and collapse should be fairly high priorities. I disagree (while largely agreeing with Ord’s other priorities).”
But I’m guessing you might actually mean “Toby Ord suggests climate change, nuclear weapons, and collapse should be much lower priorities than things like AI and biorisk (though they should still get substantial resources, and be much higher priorities than things like bednet distribution). I disagree; I think those things should be similarly high priorities to things like AI and biorisk.”
Is that guess correct?
I’m not sure whether my guess is based on things I’ve read from you, vs just a general impression about what views seem common at CSER, so I could definitely be wrong.
That’s right, I think they should be higher priorities. As you show in your very useful post, Ord has nuclear and climate change at 1/1000 and AI at 1/10. I’ve got a draft book chapter on this, which I hope to be able to share a preprint of soon.
Is your preprint available now? I’d be curious to read your thoughts about why climate change and nuclear war should be prioritized more.
Thanks, Haydn, for writing this thoughtful post. I am glad that you (hopefully) found something from the syllabus useful and that you took the time to read and write about this essay.
I would love to write a longer post about Torres’ essay and engage in a fuller discussion of your points right away, but I’m afraid I wouldn’t get around to that for a while. So, as an unsatisfactory substitute, I will instead just highlight three parts of your post that I particularly agreed with, as well as two parts that I believe deserve further clarification or context.
A)
I agree with this and think that any critique of longtermism’s moral foundations should engage seriously with the fact that many of its key proponents have written extensively about moral uncertainty and pluralism, and that this informs longtermist thinking considerably. I don’t think Torres’ essay does that.
B)
Agreed, this seems like another important omission from the essay and one that is quite conspicuous given Bostrom’s prominent essay on the topic.
C)
As above, this seems like a critical omission.
D)
Unless I’m misunderstanding something, this section seems to conflate three distinct quantities:
1. The estimated marginal effect on existential risk of some action EAs could take.
2. The estimated absolute existential risk this century.
3. The estimated marginal effect on existential risk of some big policy change, e.g. arms control.
While (2) might indeed be as high as ~16%, and (3) may be as high as 1-10%, both of these quantities are very different from (1). Very rarely, if ever, do EAs have the option ‘spend $50M to achieve a robust arms control regime’; it’s much more likely to be ‘spend $50M to increase the likelihood of such a regime by 1-5%.’
So, unless you think the tens of millions of “EA dollars” allocated towards longtermist causes reduce existential risk by >>0.001% per, say, ten million dollars spent, it seems like you would indeed have to be committed to Torres’ formulation of the tiny-risk-reduction vs. current-lives-saved tradeoff.
Of course, you may believe that the marginal effects of many EA actions are, in fact, >>>0.001% risk reduction. And even if you don’t, the tradeoff may still be a reasonable ethical position to take.
I just think it’s important to recognise that that tradeoff does seem to be a part of the deal for x-risk-focused longtermism.
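To make the arithmetic behind (1) concrete, here is a minimal sketch of the expected-value calculation; every figure in it is an illustrative assumption of mine, not a number from this thread.

```python
# Illustrative expected-value arithmetic for the tradeoff above.
# Every number here is an assumption for the sake of the example.

present_population = 8e9   # roughly the number of people alive today
risk_reduction = 1e-5      # a 0.001% absolute reduction in extinction risk
spend = 10e6               # $10M of "EA dollars"

# Counting only present lives (the most conservative baseline):
expected_lives_saved = risk_reduction * present_population  # 80,000
cost_per_expected_life = spend / expected_lives_saved       # $125

print(f"{expected_lives_saved:,.0f} expected lives saved, "
      f"${cost_per_expected_life:,.0f} per expected life")
```

Even at marginal effects this small, the expected numbers come out large, which is why the tradeoff looks like part of the deal rather than an artefact of Torres’ framing.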
E)
For a discussion of this point, I think it is only fair to also include the quote from Nick Beckstead’s dissertation that Torres discusses in the relevant section. I include it in full below, for context:
Here, I should perhaps note that while I’ve read parts of Beckstead’s work, I don’t think I’ve read that particular section, and I would appreciate hearing if there is a crucial piece of context that’s missing. Either way, I think this quote deserves a fuller discussion – I will, for now, simply note that I certainly think the quote, as written, is very objectionable and potentially warrants indignation.
Again, thanks for writing the post, I look very much forward to the discussions in the comments!
A little historical background—one of my first introductions to proto-effective altruism was through corresponding with Nick Beckstead while he was a graduate student, around the time he would have been writing this dissertation. He was one of the first American members of Giving What We Can (which at the time was solely focused on global poverty), and at the time donated 10% of his graduate stipend to charities addressing global poverty. When I read this passage from his dissertation, I think of the context provided by his personal actions.
I think that “other things being equal” is doing a lot of work in the passage. I know that he was well aware of how much more cost-effective it is to save lives in poor economies than in rich ones, which is why he personally put his money toward global health.
Thanks for the context. I should note that I did not in any way intend to disparage Beckstead’s personal character or motivations, which I definitely assume to be both admirable and altruistic.
As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author’s personal actions.
Happy to have a go; the “in/out of context” question is a large part of the problem here. (Note that I don’t think I agree with Beckstead’s argument, for reasons given towards the end.)
(1) The thesis (198 pages of it!) is about shaping the far future, and operates on staggering timescales. Some of it, like this quote, is written in the first person, which has the effect of putting it in a present-day context, but these are at heart philosophical arguments abstracted from time and space. This is a thing philosophers do.
If I were to apply the argument to the 12th-century world, I might claim that saving a person in what is now modern-day Turkey would have greater ripple effects than saving a person in war-ravaged Britain. The former was light years ahead in science and technology, chock full of incredible Muslim scholar-engineers like Al Jazari (seriously; read about this guy). I might be wrong of course; the future is unpredictable, and these ripples might be wiped out in the next century by a Mongol horde (as for the most part did happen); but wrong on different grounds.
And earlier in the thesis Beckstead provides a whole heap of caveats in addition to ‘all other things being equal’, including that his argument explicitly does not address issues “such as whose responsibility that is, how much the current generation should be required to sacrifice for the sake of future generations, how shaping the far future stacks up against special obligations or issues of justice”; these are all “good questions” but out of scope.
If Beckstead had further developed the ‘it is better to save lives in rich countries’ argument in the thesis, explicitly embedding it within the modern context and making practical recommendations that would exacerbate the legacy of harm of postcolonial inequality, then Torres might have a point. He did not. It’s a paragraph on one page of a 198-page PhD thesis. Reading the paragraph in the context of the overall thesis gives a very different impression than the deliberately leading context in which Torres places it.
(2) Now consider the further claims that Torres has repeatedly made: that this paragraph taints the entire field with white supremacy, and that any person or organisation who praised the thesis is endorsing white supremacy. This is an even more extreme version of the same set of moves. I have found nothing, nothing, anywhere in the EA or longtermist literature that builds on or progresses this argument.
(3) The same can be seen, in a more extreme fashion, with the Mogensen paper. Again, this is an abstract philosophical argument. Mogensen (in a very simplified version) observes that over three spatial dimensions (the world), total utilitarianism says you should spread your resources over all people in that space; introduce a fourth dimension, time, and the same axiology says you should spread your resources over space and time, with the majority of that obligation lying in the future. Torres reads in white supremacy, and invites the reader to do the same.
(4) The problem here is that no body of scholarship can realistically withstand this level of hostile scrutiny and leading analysis, in which one paragraph of a PhD thesis, taken out of context, is used to damn an entire field. I don’t think I personally agree with the argument on its own terms: it’s hard to prove definitively, but inequality has often been argued to be a driver of systemic instability, and if so, any intervention that increases inequality might contribute to negative ‘ripple effects’ regardless of which countries were rich and poor at a given time. And I think the paragraph itself could reasonably be characterised as ‘thoughtless’, given that the author is a white Western person writing in the 21st century, even if the argument is not explicitly framed in that context.
However, the extreme criticism presented in Torres’s piece stands in stark contrast to the much more serious racism that goes unchallenged in so much of scholarship and modern life. Any good-faith actor would pursue those first, rather than reading the worst possible ills into a paragraph of a PhD thesis. I’ve run out of time, but will shortly illustrate this with a prominent example of what I consider to be much more significant racism from Torres’s own work.
Here is an article by Phil Torres arguing that the rise of Islam represents a very significant and growing existential risk.
https://hplusmagazine.com/2015/11/17/to-survive-we-must-go-extinct-apocalyptic-terrorism-and-transhumanism/
I will quote a key paragraph:
“Consider the claim that there will be 2.76 billion Muslims by 2050. Now, 1% of this number equals 27.6 million people, roughly 26.2 million more than the number of military personnel on active duty in the US today. It follows that if even 1% of this figure were to hold “active apocalyptic” views, humanity could be in for a catastrophe like nothing we’ve ever experienced before.”
Firstly, this is nonsense. The proposition that 1% of Muslims would hold “active apocalyptic” views and be prepared to act on them is pure nonsense. And the phrase “if even 1%” suggests the author regards this as a lowball estimate.
Secondly, this is fear-mongering against one of the most feared and discriminated-against communities in the West, written for a Western audience.
Thirdly, it utilises another standard racist trope, population replacement: look at the growing numbers of the scary ‘other’, threatening to overrun the US’s good ol’ apple-pie armed forces.
This was not a paragraph in a thesis. It was a public article, intended to reach as wide an audience as possible, and it used to be prominently displayed on his now-defunct website. It was also written several years after Beckstead’s thesis.
I will say, to Torres’s credit, that his views on Islam have become more nuanced over time, and that I have found his recent articles on Islam less problematic. This is to be praised. And he has moved on from attacking Muslims to ‘critiquing’ right-wing Americans, the Atheist community, and the EA community. This is at least punching sideways, rather than down.
But he has not subjected his own body of work, or other more harmful material, to anything like the level of critique to which he has subjected Beckstead, Mogensen, et al. I consider this deeply problematic in terms of scholarly responsibility.
Understood!
Can you say a bit more about why the quote is objectionable? I can see why the conclusion ‘saving a life in a rich country is substantially more important than saving a life in a poor country’ would be objectionable. But it seems Beckstead is saying something more like ‘here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries’ (because he says ‘other things being equal’).
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, e.g. by empirical or historical evidence. Since Beckstead didn’t do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises of the argument are extremely speculative.
I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like ‘we should prioritise people like ourselves.’
Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.
Yep that is what I’m saying. I think I don’t agree but thanks for explaining :)
The main issue I have with this quote is that it’s so divorced from the reality of how cost effective it is to save lives in rich countries vs. poor countries (something that most EAs probably know already). I understand that this objection is addressed by the caveat ‘other things being equal’, but it seems important to note that it costs orders of magnitude more to save lives in rich countries, so unless Beckstead thinks the knock-on effects of saving lives in rich countries are sufficient to offset the cost differences, it would still follow that we should focus our money on saving lives in poor countries.
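To put rough numbers on this, here is a minimal sketch; both cost figures are hypothetical ballpark assumptions of mine, not GiveWell or Beckstead estimates.

```python
# Hypothetical ballpark costs; neither figure comes from this thread
# or from any charity evaluator.
cost_per_life_poor_country = 5_000       # e.g. cheap health interventions
cost_per_life_rich_country = 5_000_000   # e.g. rich-country safety spending

ratio = cost_per_life_rich_country / cost_per_life_poor_country
print(f"A rich-country life costs ~{ratio:,.0f}x more to save")  # ~1,000x

# On these assumptions, the knock-on ('ripple') effects of saving a
# rich-country life would need to be worth roughly 1,000x those of a
# poor-country life before 'other things being equal' stopped
# favouring poor countries in practice.
```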
I don’t understand why thinking like that quote isn’t totally passé to EAs. At least to utilitarian EAs. If anyone’s allowed to think hypothetically (“divorced from the reality”), I would think it would be a philosophy grad student writing a dissertation.
I think there should be strong norms against making arguments that justify shifting resources from the least well-off people to the best-off people in the world. These types of ideas have been used by people in power to justify global inequality.
In 1991, Larry Summers, then the chief economist at the World Bank, sent a memo arguing that pollution should be pushed to poorer places because it’s more economically efficient. Around the same time, Texaco was leaving open pools of carcinogenic substances all over the Ecuadorian rainforest, which contributed to elevated cancer rates in the local population. There were ways to safely dispose of the toxic waste produced by oil drilling, but they weren’t employed because the lives of indigenous Ecuadorian people weren’t sufficiently valued by Texaco.
If Beckstead had added a parenthetical like “(However, it’s typically many orders of magnitude cheaper to save lives in poor countries than in rich countries),” I wouldn’t take the same issue with the quote.
I think it’s important for EA to promote high decoupling in intellectual spaces. You also have to consider that this is a philosophy dissertation, which is an almost maximally decoupling space.
Again, Beckstead could have made the exact same point while offering my parenthetical. It would have communicated the same idea while also acknowledging the real world context. I’m not opposed to decoupling or thought experiments to help clarify our positions on things.
Are you implying that Larry Summers was wrong or that Texaco’s actions were somehow his fault?
Yes, I think that Summers was wrong. Extending his logic, companies should take even fewer steps to mitigate pollution in poor countries than they do in rich countries, because the economic cost of the resulting harm is valued lower there, making non-mitigation cheaper and therefore more ‘economically efficient’. He even says in the memo that moral reasons and social concerns could be invoked to oppose his line of reasoning, which seems relevant to people who claim to want to do good in the world, not just maximize a narrow understanding of economic productivity.
What that can look like in practice is what Texaco did in Ecuador. I’m not claiming a direct causal link between the Summers memo and Texaco’s actions. I’m simply saying that when intellectual elites make arguments that it’s okay to pollute more in poor countries, we shouldn’t be surprised when companies do exactly that.
I just wanted to echo your sentiments in the last part of your comment re: Beckstead’s quote about the value of saving lives in the developed world. Having briefly looked at where this quote is situated in Beckstead’s PhD thesis (which, judging by the parts I’ve previously read, is excellent), the context doesn’t significantly alter how this quote ought to be construed.
I think this is at the very least an eyebrow-raising claim, and I don’t think Torres is too far off the mark to think that the label of white supremacism, at least in the “scholarly” sense of the term, could apply here. Though it’s vital to note that this is in no way to insinuate that Beckstead is a white supremacist, i.e., someone psychologically motivated by white supremacist ideas. If Torres has insinuated this elsewhere, then that’s another matter.
It also needs noting that, contra Torres, longtermism simpliciter is not committed to the view espoused in the Beckstead quote. This view falls out of some particular commitments which give rise to longtermism (e.g. total utilitarianism). The OP does a good job of pointing out that there are other “routes” to longtermism, which Ord articulates, and I think these approaches could plausibly avoid the implication that we ought to prioritise members of the developed world over the contemporaneous global poor.
I’m oblivious to Torres’ history with various EAs, so I’m anxious about stepping into what seems like quite a charged debate here (especially with my first forum post), but I think it’s worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they’d receive (unfairly or not!) - so it’s worth considering how plausible these charges are, and how longtermists might respond. The OP develops some promising initial responses, but I also think a longer discussion would be beneficial.
Rational discourse becomes very difficult when a position is characterized by a term with an extremely negative connotation in everyday contexts—and one which, justifiably, arouses strong emotions—on the grounds that the term is being used in a “technical” sense whose meaning or even existence remains unknown to the vast majority of the population, including many readers of this forum. For the sake of both clarity and fairness to the authors whose views are being discussed, I strongly suggest tabooing this term.
>but I think it’s worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they’d receive (unfairly or not!) - so it’s worth considering how plausible these charges are, and how longtermists might respond.
This is a good point, and worth being mindful of as longtermism becomes more mainstream/widespread.
Well there’s a huge obvious problem with an “all generations are equal” moral theory. How do you even know whether you’re talking about actual moral agents? For all we know, maybe in the next few years some giant asteroid will wipe out human life entirely.
We can try to work with expected values and probabilities, but that only really works when you properly justify the probability you’re giving certain outcomes. I have no idea how someone gets something like a 1/6 probability of extinction from causes xyz, especially when the science and tech behind a few of those causes are speculative; frankly, it doesn’t sound possible.
We actually do have a good probability for a large asteroid striking the earth within the next 100 years, btw. It was the product of a major investigation; I believe it was 1/150,000,000.
Probabilities don’t have to be a product of a legible, objective or formal process. It can be useful to state our subjective beliefs as probabilities to use them as inputs to a process like that, but also generally it’s just good mental habit to try to maintain a sense of your level of confidence about uncertain events.
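For concreteness, here is a minimal sketch (my illustration, not a method Ord describes) of how stated per-cause credences, like the figures cited upthread, can be combined into a single headline number.

```python
# Per-cause credences for existential catastrophe this century, using
# the figures cited upthread plus Ord's published 1/30 for engineered
# pandemics; other causes are omitted, so this understates his ~1/6 total.
per_cause_risk = {
    "unaligned AI": 1 / 10,
    "engineered pandemics": 1 / 30,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
}

# Naively treating the causes as independent (a simplification, not
# something Ord asserts):
p_survival = 1.0
for p in per_cause_risk.values():
    p_survival *= 1 - p

print(f"Combined risk this century: {1 - p_survival:.3f}")  # ~0.132
```

Whether those subjective inputs are well-justified is exactly what is in dispute; the sketch only shows that, once the credences are stated, the headline figure is ordinary arithmetic.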
If Ord is giving numbers like a 1/6 chance, he needs to back them up with math. Sure, the chance of asteroid extinction can be calculated by astronomers, but estimating the probability of extinction by climate change or rogue AI is a highly suspect endeavor when one of those things is currently purely imaginary and the other is a complex field with uncertain predictive models that generally agree only on pretty broad aspects of the planet.
For whatever it’s worth, I show in a forthcoming, peer-reviewed philosophy paper that Ord’s view is, in fact, worse than Bostrom’s in multiple ways. I will, of course, happily share a link to the document once it’s published (although I know some folks at FHI have a copy right now).
“I argue that while many of the criticisms of Bostrom strike true, newer formulations of longtermism and existential risk – most prominently Ord’s The Precipice (but also Greaves, MacAskill, etc) – do not face the same challenges.”
“There are still trillions of future individuals, whose interests and dignity matter.”
How can the interests and dignity of unidentified fictional persons matter?
I view this aspect of longtermism much the way I view forced-birthers (so-called “pro-life”) who pretend to care about “unborn babies”. The fake concern in both cases is in service of an ideology. Many of the forced-birthers are sociopaths who show no concern for actual human beings. As for the cargo-cult concern of longtermism, it strikes me as something I might expect of people on the spectrum … I don’t know any of these people personally, so I have no concrete reason to think this is true, but I find a concern for trillions of abstract people who are merely imagined to some day exist so bizarre that I’m grasping for an explanation.
“When someone is born is a morally irrelevant fact”
Whether “someone” actually exists is a morally relevant fact. This use of “someone” is a highly misleading equivocation or amphiboly: in one case it references a specific organism; in the other case it doesn’t reference anything at all. Perhaps it would help to try to formulate longtermism in Loglan.
As a moderator, I think this comment is rude and uncivil, breaking Forum norms. Please don’t leave any more comments like this or you will be banned from the Forum.
Welcome to the forum.
I’m sorry you’ve had a rough time with your first posts! The norms here are somewhat different than a lot of other places on the internet. Personally I think they’re better, but they can lead to a lot of backlash against people when they act in a way that wouldn’t be unusual on, say, Twitter. Specifically, I would look at our commenting guidelines:
Commenting guidelines:
Aim to explain, not persuade
Try to be clear, on-topic, and kind
Approach disagreements with curiosity
This comment doesn’t really fit the last two. It’s rather uncharitable and uncurious to assume that people are faking concern for future people / unborn babies, even if you can’t personally think of a reason why someone would genuinely care about these things. It is a pretty counterintuitive worldview, but on this forum we tend to think the right response to ideas we don’t understand is “Why do you believe that?” not “Nobody could actually believe that.”
As for a reason for why someone might genuinely care about longtermism, maybe I can provide one.
EA started with the idea that we should care about people we don’t know, people on the other side of the world who might not look like us or share our language, as much as we care about our own communities. This led to a lot of great work done on alleviating global poverty, which continues to this day.
Now—does anyone care about any future people? I think the answer is clearly yes here—some parents begin preparing for a better life for their kid before they ever get pregnant—they’re still a purely conceptual child at this point. Many people report wanting to leave a better world for their children’s children, whether they currently exist or not. That means we can care about at least some future people, if they feel close to us.
So why not care about future people who aren’t close to us? In the same way that I can care about people in Africa who I’ll never meet, I can care about future people who I don’t feel personally close to as well. In this way, caring about future people is a logical expansion of the moral circle, just like caring about people outside one’s own country.
You may not agree with this argument, and that’s fine, but hopefully it lets you see why someone might legitimately care about people who don’t yet exist, rather than just pretending to do so in service of some other goal.