First, I agree that racism isn’t the most worrying criticism of longtermism—though it is the one that has been highlighted recently. But it is a valid criticism of at least some longtermist ideas, and I think we should take this seriously. Sean’s argument is one sketch of a real problem, though I think there is a broader point about racism in existential risk reduction, which I make below. But there is also more to longtermism than preventing extinction risks, which is what you defended. As the LARB article notes, transhumanism borders on some very worrying ideas, and there is non-trivial overlap with the ideas of communities which emphatically embrace racism. (And for that reason the transhumanist community has worked hard to distance itself from those ideas.)
And even within X-risk reduction, it’s not the case that attempts to reduce existential risks are obviously, on their own, a valid excuse for behavior that disadvantages others. For example, accepting a certainty of faster Western growth that disadvantages the world’s poor in exchange for a small reduction in the risk of human extinction a century from now is a tradeoff that disadvantages others, albeit probably one I would make, were it up to me. But essential to the criticism is that I shouldn’t decide for them. And if utilitarian views about saving the future are contrary to the views of most of humanity, longtermists should be very wary of unilateralism, or at least think very, very carefully before deciding to ignore others’ preferences in order to “help” them.
It seems strange to criticise longtermists on the basis that hypothetical actions that they might take (but haven’t taken) disadvantage certain demographic groups. If I were going to show that they were racist (a very serious and reputation-destroying charge), I would show that some of the things that they have actually done were actually bad for certain demographic groups. I just can’t think of any example of this.
It also seems strange to defend longtermists as only being harmful in theory, since the vast majority of longtermism is theory, and relatively few actions have been taken. That is, almost all longtermist ideas so far have implications which are currently only hypothetical.
But there is at least one concrete thing that has happened—many people in effective altruism who previously worked on and donated to near-term causes in global health and third world poverty have shifted focus away from those issues. And I don’t disagree with that choice, but if that isn’t an impact of longtermism which counterfactually harms the global poor, what do you think would qualify?
I just want to highlight that your second point―resource allocation within the movement away from the global poor and towards longtermism―seems to be a big part of what is concretely criticized in the Current Affairs piece. Quoting:
This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As [Hilary Greaves and Will MacAskill] write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
...
Since our resources for reducing existential risk are finite, Bostrom argues that we must not “fritter [them] away” on what he describes as “feel-good projects of suboptimal efficacy.” Such projects would include, on this account, not just saving people in the Global South—those most vulnerable, especially women—from the calamities of climate change, but all other non-existential philanthropic causes, too.
This doesn’t seem to me like a purely hypothetical harm. If you value existing people much more than potential future people (not an uncommon moral intuition) then this is concretely bad, especially since the EA community is able to move around a lot of philanthropic capital.
Yes, but the counter-argument is that longtermists don’t accept the antecedent—they don’t value current people more than future people. And if you don’t accept the antecedent, then it could equally be said that near-termist people are inflicting harm on non-white people. So, the argument doesn’t take us anywhere.
Fair enough; it’s unsurprising that a major critique of longtermism is “actually, present people matter more than future people”. To me, a more productive framing of this criticism than racist/non-racist is about longtermist indifference to redistribution. I’ve seen various recent critiques quoting the following paragraph of Nick Beckstead’s thesis:
Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
The standard neartermist response is “all other things are definitely not equal, it’s much easier to save a life in a poor country than a rich country”, while the standard longtermist response is (I think) “this is the wrong comparison to pay attention to, we should focus on protecting humanity’s potential”. Given this difference, I disagree a little with this bit of the OP:
the motivations for the part of the community which embraces longtermism still includes Peter Singer’s embrace of practical ethics and effective altruist ideas like the Giving Pledge
in that some of the foundational values embedded in Peter Singer’s writings (e.g. The Life You Can Save) strike me as redistributive commitments. This is very much reflected in the quote from Sanjay included in the OP. As far as I can tell (reading the EA Forum, The Precipice, and various Bostrom papers) longtermist philosophy typically does not emphasize redistribution or fairness as core values, but instead focuses on the overwhelming value of the far future.
(That said, I have seen some fairness-based arguments that future people are a constituency whose interests are underweighted politically, for example in response to the proposed UN Special Envoy for Future Generations.)
in that some of the foundational values embedded in Peter Singer’s writings (e.g. The Life You Can Save) strike me as redistributive commitments.
One thing to note is that redistributive commitments flow from impartial utilitarianism as well as the weaker normative commitments that Singer espouses as a largely empirical claim about a) human psychology and b) the world we live in.
Singer’s strong principle: “If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.”
Singer’s weak principle: “If it is in our power to prevent something very bad from happening, without sacrificing anything morally significant, we ought, morally, to do it.”
I understood the outer framing of the drowning child etc as making not only normative claims about what’s right to do in the abstract but also empirical claims about the best way to apply those normative principles in the world we live in. I think the idea that existential risk is very bad and that we are morally compelled to stop it if we aren’t sacrificing things of comparable moral significance[1] is fully consistent with Singerian notions.
[1] or that both existential risk and present suffering are morally significant, so choosing one over the other is supererogatory under Singer’s principles, but not necessarily under classical utilitarianism.
I would note that Toby and others in the long-termist camp do, in fact, very clearly embrace “the foundational values embedded in Peter Singer’s writings.” I agree that some people who embrace long-termism could decide to do so on other bases than impartial utilitarianism or similar arguments which agree with both redistribution and some importance of the long term, but I don’t hear them involved in the discussions, and so I don’t think it works as a criticism when the actual people do also advocate for near-term redistributive causes.
I don’t think I quite understand this reply. Are you saying that (check all that apply):
In your experience, the people involved in discussions do embrace redistribution and fairness as core values, they are just placing more value on future people.
Actual longtermists also advocate for near-term redistributive causes, so criticism about resource allocation within the movement away from the global poor and towards longtermism doesn’t make sense (i.e. it’s not zero-sum).
Redistributive commitments are only one part of the “foundational values”, and Toby and others in the longtermist camp are still motivated by the same underlying impartial utilitarianism, so pointing at less emphasis on redistribution is an unfair nitpick.
On the first para, that doesn’t seem to me to be true of work on AI safety or biorisk, as I understand it.
On the second para, the first thing to say is that longtermists shouldn’t be the target of particular criticism on this score—almost no-one is wholly focused on improving the welfare of the global poor. If this decision by longtermists is racist then so is almost everyone else in the world.
Secondly, no, I don’t think it counterfactually harms the global poor. That only works if you take a person-affecting view of people’s interests. If you count future people, then the shift is counterfactually very beneficial for the global poor, and for both white and non-white people.
I don’t think it’s necessarily very good for the global poor as a changing group defined by their poverty, depending on how quickly global poverty declines. There’s also a big drop in the strength of evidence in this shift, so it depends on how skeptical you are.
Plus, person-affecting views (including asymmetric ones) or at least somewhat asymmetric views (e.g. prioritarianism) are not uncommon, and I would guess especially among those concerned with the global poor and inequality. Part of the complaint made by some is about ethical views that say extinction would be an astronomical loss and deserves overwhelming priority as a result, over all targeted anti-poverty work. This is a major part of the disagreement, not something to be quickly dismissed.
I disagree about at least some biorisk work, as the allocation of scarce resources in public health has distributive effects, and some work on pandemic preparedness has reduced focus for near-term campaigns on vaccinations. I suspect the same is true, to a lesser extent, in pushing people who might otherwise work on near-term ML bias to work on longer-term concerns. But as this relates to your second point, and the point itself, I agree completely, and don’t think it’s reasonable to say it’s blameworthy or morally unacceptable, though as I argued, I think we should worry about the impacts.
But the last point confuses me. Even ignoring person-affecting views or not, shifting efforts to help John can (by omission, at the very least) injure Sam. “The global poor” isn’t a uniform pool, and helping those who are part of “the global poor” in a century by, say, taxing someone now is a counterfactual harm for the person now. If you aggregate the way you prefer, this problem goes away, but there are certainly ethical views, even within utilitarianism, where this isn’t acceptable—for example, if the future benefit is discounted so heavily that it’s outweighed by the present harm.
On your first para, I was responding to this claim: “It also seems strange to defend longtermists as only being harmful in theory, since the vast majority of longtermism is theory, and relatively few actions have been taken. That is, almost all longtermist ideas so far have implications which are currently only hypothetical.” I said that most work on bio and AI was not just theory but was applied. I don’t think the things you say in the first para present any evidence against that claim, but rather they seem to grant my initial point.
I agree that there are some things in Bio and AI that are applied—though the vast majority of the work in both areas is still fairly far from application. But my point which granted your initial point was responding to “I don’t think it counterfactually harms the global poor.”
person-affecting view of ethics, which longtermists reject
I’m a longtermist and I don’t reject (asymmetric) person(-moment-)affecting views, at least not those that think necessary ≠ only present people. I would be very hard-pressed to give a clean formalization of necessary people though. I think it’s bad if effective altruists think longtermism can only be justified with astronomical waste-style arguments and not at all if someone has person-affecting intuitions. (Staying in a broadly utilitarian framework. There are, of course, also obligation-to-ancestor-type justifications for longtermism or similar.) The person-affecting part of me just pushes me in the direction of caring more about trajectory change than extinction risk.
Since I could only ever give very handwavey defenses of person-affecting views and even handwavier explanations of my overall moral views: here’s a paper by someone who, AFAICT, is at least sympathetic to longtermism and discusses asymmetric person-affecting views. (I have to admit I never got around to reading the paper.) (Writing a paper discussing an asymmetric person-affecting view obviously also doesn’t necessarily mean that the author doesn’t actually reject person-affecting views.)
Many current individuals will be worse off when resources go to, for instance, saving future lives, rather than to near-term utilitarian goals like poverty reduction. And if, as most of us expect, the world’s wealth continues to grow, effectively all future people who are helped by existential risk reduction are not what we’d now consider poor. You can defend this via the utilitarian calculus across all people, but that doesn’t change the distributive impact between groups.
Equally, many future people will be worse off than they would have been if we don’t reduce extinction risks. The claim is about the net total impact on non-white people.
Your definition of problematic injustice seems far too narrow, and I explicitly didn’t refer to race in the previous post. The example I gave was that the most disadvantaged people are in the present, and are further injured—not that non-white people (which under current definitions will describe approximately all of humanity in another half dozen generations) will be worse off.
On the second point, yes, I agree that there are some popular views on which we would discount or ignore future people. I just don’t think that they are plausible. If someone held a view which said that they only count the interests of white future people, I think it would be quite clear that this was bad for the interests of non-white people in a very important way. Therefore, if I ignore all future people, then I ignore all future non-white people, which is bad for their interests in a very important way.
As I said above in a different comment thread, it seems clear we’re talking past one another.
Yes, being racist would be racist, and no, that’s not the criticism. You said that “there are some popular views on which we would discount or ignore future people. I just don’t think that they are plausible.” And I think part of the issue is exactly this dismissiveness. As a close analogy, imagine someone said “there are some popular views where AI could be a risk to humans. I just don’t think that these are plausible,” and went on to spend money building ASI instead of engaging with the potential that they are wrong, or taking any action to investigate or hedge that possibility.
I don’t really understand your response. Most of the people who argue for a longtermist ethical standpoint have spent many many years thinking about the possibility that they are wrong and arguing against person-affecting views, during their philosophy degrees. I could talk to you for several weeks about the merits and demerits of such views and the published literature on them.
“Yes, being racist would be racist, and no, that’s not the criticism.” I don’t really understand your point here.
My point is that many people who disagree with the longtermist ethical viewpoint also spent years thinking about the issues, and dismissing the majority of philosophers, and the vast, vast majority of people’s views as not plausible, is itself one of the problems I tried to highlight on the original post when I said that a small group talking about how to fix everything should raise flags.
And my point about racism is that criticism of choices and priorities which have a potential to perpetuate existing structural disadvantages and inequity is not the same as calling someone racist.
The standard in the first para appears to be something like ‘you can never say that something is implausible if some philosophers believe it’. That seems like a pretty weird standard. Another way of saying something is implausible is just to say “I think it is probably false”.
Near-termists are also a small group talking about how to fix everything.
This is perhaps too meta, but on the second para, if that is what you meant, I don’t understand how it is a response to the comment your response was to.
I’m pointing out that you’re privileging your views over those of others—not “some philosophers,” but “most people.”
And unless you’re assuming a fairly strong version of moral realism, this isn’t a factual question, it’s a values question—so it’s strange to me to think that we should get to assume we’re correct despite being a small minority, without at least a far stronger argument that most people would agree with longtermism if properly presented—and I think Stefan Schubert’s recent work implies that is not at all clear.
Any time you take a stance on anything, you are privileging your view over some other people’s. Your argument also applies to people working on animal welfare and on global poverty. In surveys, most people don’t even seem to care about saving more lives rather than fewer.
If we are going to go down the route of saying that what EAs do should be decided by the majority opinion of the current global population, then that would be the end of EA of any kind. As I understand it, your claim is that the total view is false (or we don’t have reason to act on it) because the vast majority of the world population do not believe in the total view. Is that right?
It is difficult not to come up with examples. In 1500, most people would have believed that violence against women and slavery were permissible. Would that have made you stop campaigning to bring an end to that? These are also values, after all
The demographic criticism also applies to EAs who are working on global development: people in that area also skew white and highly educated.
People who work on farm animal welfare are not focused on the global poor either, but this seems to me an extremely flimsy basis on which to call them racist.
Note: I did not call anyone racist, other than to note that there are groups which embrace some views which themselves embrace that label—but on review, you keep saying that this is about calling someone racist, whereas I’m talking about unequal impacts and systemic impacts of choices—and I think this is a serious confusion which is hampering our conversation.
Perhaps I have misunderstood, but I interpreted your post as saying we should take the two critiques of longtermism seriously. I think the quality of the critiques is extremely poor, and am trying to explain why.
I might have been unclear. As I said initially, I claim it’s good to publicly address concerns about “the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents”, and this is what the LARB piece actually discussed. And I think that slightly more investigation into the issue should have convinced the author that any concerns about continued embrace of the eugenic ideas, or ignorance of the issues, were misplaced. I initially pointed out that specific claims about longtermism being similar to eugenics are “farcical.” More generally, I tried to point out in this post that many of the attacks are unserious or uninformed—as Scott pointed out in his essay, which this one quoted and applied to this situation, the criticisms aren’t new.
More serious attempts at dialog, like some of the criticisms in the LARB piece, are not bad-faith or unreasonable claims, even if they fail to be original. And I agree that “we cannot claim to take existential risk seriously — and meaningfully confront the grave threats to the future of human and nonhuman life on this planet — if we do not also confront the fact that our ideas about human extinction, including how human extinction might be prevented, have a dark history.” But I also think it’s obvious that others working on longtermism agree, so the criticism seems to be at best a weak-man argument. Unfortunately, I think we’ll need to wait another year or so for Will’s new book, which I understand has a far more complete discussion of this, much of which was written before either of these pieces were published.
It doesn’t make sense to think that you can flush racism etc. out of a system run by affluent white Westerners by self-reflection. Maybe that could highlight some points that need to be addressed, but it is sure to miss others that different perspectives would spot.
So we absolutely cannot claim “our multimillion dollar project isn’t racist because we can’t find anything racist about it.” Such a project, if it does not include a much more diverse population in its decision making, is bound to end up harming those unrepresented.
that also applies to people working on global development as well, and to pretty much all philanthropy. So, there is nothing special about longtermism on this score
Sure. So each one should be interested in outside feedback about whether it seems racist, or fails on other counts—and take it seriously when outsiders say it is a concern.
But you have presented this post as something that is specific to longtermism. Do you not think it would have been more informative/less misleading to say that this also applies to all social movements, including those working to improve health in Africa?
No, because no-one is really providing this specific bit of outside feedback to most of those groups. As the post says, there have been recent attacks *on longtermism*.
I second David’s comment: this reply doesn’t abide by the Forum’s norms. (Specifically “other behavior that interferes with good discourse”.)
Calling for someone to be removed from the community (as I think was appropriate in Phil’s case) isn’t the same as saying we should give the same treatment to their ideas. And that’s what you seem to be doing when you link “more credit than [the critiques] deserve” to “redressing personal slights” and “doesn’t believe what he is saying”.
If you think Phil’s vendetta or personal beliefs are relevant to the reasonableness of his critique, you should explain why — it’s not clear to me what the connection is.
I think his arguments fail as arguments, and would still fail as arguments even if he believed in them sincerely. If the arguments are solid, any hypothetical bad faith seems irrelevant.
Put another way, would “Phil believes this” be important evidence in favor of the critique? If not, why does “Phil doesn’t really believe this” serve as important counterevidence?
Without the connection between Phil’s personal life and his arguments, this comment seems like a personal attack unrelated to your point. And it’s an unfortunate contrast with your other comments, which reliably engaged with the critiques rather than the critic. (As well as a contrast with basically all the other stuff you’ve written on the Forum, which is consistently backed by lots of evidence.)
I would usually agree that we should play the ball and not the man, but I think the critiques would be mystifying unless you knew the background on what is actually driving him to write this stuff. I think it is relevant evidence that he doesn’t really believe what he is saying because, e.g., it should influence what you should believe about how faithfully he reports what the people he criticises say. We could look into each specific claim that he makes and check how faithfully he quotes what people actually say. As it turns out, it is often not faithful and often very uncharitable. But if you know that someone’s motive is to trash people who have crossed him, that should update us about how much we should trust all of the claims they make. It’s like ignoring the fact that a journalist has fabricated quotes when examining what they write, and instead just focusing on all of their object-level claims each time they write anything.
I agree that knowing someone’s personal motives can help you judge the likelihood of unproven claims they make, and should make you suspicious of any chance they have to e.g. selectively quote someone. But some of the language I’ve seen used around Torres seems to imply “if he said it, we should just ignore it”, even in cases where he actually links to sources, cites published literature, etc.
Of course, it’s much more difficult to evaluate someone’s arguments when they’ve proven untrustworthy, so I’d give an evaluation of Phil’s claims lower priority than I would evaluations of other critics who don’t share his background (all else being equal). But I don’t want them to be thrown out entirely.
I think the critiques would be mystifying unless you knew the background on what is actually driving him to write this stuff.
When Phil shares this material, I often see comments (on Twitter, Aeon, etc.) from people saying things like “yes, this is also how I feel” or “this experience you said someone had is similar to the experience I had”. You could argue that these people probably have false beliefs or biases of their own, but they don’t seem mystified, and they probably don’t share Phil’s personal background. He seems to represent a particular viewpoint/worldview that others also hold for non-vengeful reasons.
I understand that Phil Torres was banned from the forum, I think for good reason. But I don’t think that your reply here is acceptable given the norms on the forum for polite disagreement—especially because it’s both mean-spirited and in parts simply incorrect.
That said, I am presenting the fact that his claims are being taken seriously by others, as the second article shows, and yes, steelmanning his view to an extent that I think is reasonable—especially since certain of his critiques are both anticipated by, and have been further extensively discussed by people in EA since. Regardless of whether Phil believes them—and it’s clear that he does—the critiques aren’t a fringe position outside of EA, and beating up the strawman while ignoring the many, many people who agree seems at best disingenuous.
Finally, global development has spent a huge amount of time and effort addressing the reasonable criticisms of colonialism, especially given the incredible damage that such movements have caused in many instances. (Even though it’s been positive on net, that doesn’t excuse avoidable damage—which seems remarkably similar to what I’m concerned about here.) In any case, saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty, seems like a very, very strange defense.
I don’t see how it is either mean-spirited or incorrect. Which part is incorrect?
The context is crucial here because it illustrates that he is not arguing in good faith, which is quite clear to anyone who knows the background to this.
On your last paragraph
you said: “no-one is really providing this specific bit of outside feedback [that they risk racism] to most of those groups”.
I said this wasn’t true because people eg say that global health is colonialist all the time.
You then characterise me as “saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty”.
Obviously, this was not what I was doing. I was arguing against the initial thing that you said (which you have now conceded). This is now the second time this has happened in this conversation, so I think we should probably draw this to a close.
You said he “clearly doesn’t believe what he is saying.” That was the place that seems obviously over the line, mean-spirited, and incorrect. It was, of course, imputing motives, which is generally considered unacceptable. But more than that, you’re confused about what he’s saying, or you’re assuming that because he opposes some views longtermists hold, he must disagree with all of them. You need to be very, very careful about reading what is said closely when you’re making such bold and insulting claims. He does not say anywhere in the linked article that engineered pandemics and AGI are not challenges, nor, in other forums, has he changed his mind about them as risks—but he does say that X-risk folks ignoring climate change is worrying, and that in his view, it is critical. And that’s a view that many others in EA share—just not the longtermists who are almost exclusively focused on X-risks. And his concerns about fanaticism are not exactly coming out of nowhere. The concern of fanaticism in longtermist thinking was brought up a half dozen times at the GPI conference earlier this week, and his concern about it seems far more strident, but is clearly understandable—even if you think, as I have said publicly and told him privately, that he’s misreading the most extreme philosophical arguments which have been proposed about longtermism as personal viewpoints held by people, rather than speculation.
To clarify, when I said he had applied for jobs at the organisations he criticises, I didn’t mean to be criticising him for that (I have also applied for jobs at many of those orgs and been rejected). My point was that it is a bit improbable that he has had such a genuine intellectual volte-face given this fact.
In the second half of your comment, your analysis of the conversation, you claim that I’ve been doing something repeatedly. I think you are taking an excerpt and accidentally engaging in a motte-and-bailey—and given that the conversation took place over weeks, I assume that was because you didn’t go back and trace the entire thread. But I want to make this clearer, because I think my claims were misread.
Initially, you said of the criticism, “that also applies to people working on global development as well, and to pretty much all philanthropy.” I then agreed that “each [area] should be interested in outside feedback about whether it seems racist, or fail on other counts.” You replied that my criticism was “specific to longtermism… [but it] also applies to all social movements” I responded that “no-one is really providing this specific bit of outside feedback to most of those groups.” (And I will note that your claims until here are about “all philanthropy” and “all social movements,” no longer referring to just global development.) You said “there are also attacks on all global development charity for being colonialist.” (I disagree—there were, especially decades ago, but,) I responded, “global development has spent a huge amount of time and effort addressing the reasonable criticisms of colonialism… saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty, seems like a very, very strange defense.”
So I said everyone receiving the criticism should take it seriously. You said everyone (motte) in philanthropy is criticised in this was. I said *most of those groups* are not. You replied that global development (bailey) was criticised. I agreed—but again, pointed out that that was quite a while ago, and they have addressed the issues, i.e. did what I said EA should do. So I admitted that your bailey was correct—that an example which is not “all social movements” or “most groups” was criticised, and did the thing I said EA should do. And I’ll point out that you never went back and addressed the motte you first claimed, that it is a universal fact. Finally, “[I] then characterise [you] as “saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty.” And yes, that seems to encapsulate my point exactly.
But essential to the criticism is that I shouldn’t decide for them.
It seems like this is a central point in David’s comment, but I don’t see it addressed in any of what follows. What exactly makes it morally okay for us to be the deciders?
It’s worth noting that in both US philanthropy and the international development field, there is currently a big push toward incorporating affected stakeholders and people with firsthand experience with the issue at hand directly into decision-making for exactly this reason. (See participatory grantmaking, the Equitable Evaluation Initiative, and the process that fed into the Sustainable Development Goals, e.g.) I recognize that longtermism is premised in part on representing the interests of moral patients who can’t represent themselves. But the question remains: what qualifies us to decide on their behalf? I think the resistance to longtermism in many quarters has much more to do with a suspicion that the answer to that question is “not much” than any explicit valuation of present people over future people.
But there is at least one concrete thing that has happened—many people in effective altruism who previously worked on and donated to near-term causes in global health and third world poverty have shifted focus away from those issues. And I don’t disagree with that choice, but if that isn’t an impact of longtermism which counterfactually harms the global poor, what do you think would qualify?
I just want to highlight that your second point (resource allocation within the movement away from the global poor and towards longtermism) seems to be a big part of what is concretely criticised in the Current Affairs piece. Quoting:
This doesn’t seem to me like a purely hypothetical harm. If you value existing people much more than potential future people (not an uncommon moral intuition) then this is concretely bad, especially since the EA community is able to move around a lot of philanthropic capital.
Yes, but the counter-argument is that longtermists don’t accept the antecedent—they don’t value current people more than future people. And if you don’t accept the antecedent, then it could equally be said that near-termist people are inflicting harm on non-white people. So the argument doesn’t take us anywhere.
Fair enough; it’s unsurprising that a major critique of longtermism is “actually, present people matter more than future people”. To me, a more productive framing of this criticism than racist/non-racist is about longtermist indifference to redistribution. I’ve seen various recent critiques quoting the following paragraph of Nick Beckstead’s thesis:
The standard neartermist response is “all other things are definitely not equal, it’s much easier to save a life in a poor country than a rich country”, while the standard longtermist response is (I think) “this is the wrong comparison to pay attention to, we should focus on protecting humanity’s potential”. Given this difference, I disagree a little with this bit of the OP:
in that some of the foundational values embedded in Peter Singer’s writings (e.g. The Life You Can Save) strike me as redistributive commitments. This is very much reflected in the quote from Sanjay included in the OP. As far as I can tell (reading the EA Forum, The Precipice, and various Bostrom papers) longtermist philosophy typically does not emphasize redistribution or fairness as core values, but instead focuses on the overwhelming value of the far future.
(That said, I have seen some fairness-based arguments that future people are a constituency whose interests are underweighted politically, for example in response to the proposed UN Special Envoy for Future Generations.)
One thing to note is that redistributive commitments flow from impartial utilitarianism as well as the weaker normative commitments that Singer espouses as a largely empirical claim about a) human psychology and b) the world we live in.
I understood the outer framing of the drowning child etc as making not only normative claims about what’s right to do in the abstract but also empirical claims about the best way to apply those normative principles in the world we live in. I think the idea that existential risk is very bad and that we are morally compelled to stop it if we aren’t sacrificing things of comparable moral significance[1] is fully consistent with Singerian notions.
[1] or that both existential risk and present suffering are morally significant, so choosing one over the other is supererogatory under Singer’s principles, but not necessarily under classical utilitarianism.
I would note that Toby and others in the longtermist camp do, in fact, very clearly embrace “the foundational values embedded in Peter Singer’s writings.” I agree that some people could embrace longtermism on bases other than impartial utilitarianism (or similar arguments, which support both redistribution and some importance of the long term), but I don’t hear them involved in the discussions, so I don’t think it works as a criticism when the actual people involved do also advocate for near-term redistributive causes.
I don’t think I quite understand this reply. Are you saying that (check all that apply):
1. In your experience, the people involved in discussions do embrace redistribution and fairness as core values; they are just placing more value on future people.
2. Actual longtermists also advocate for near-term redistributive causes, so criticism about resource allocation within the movement away from the global poor and towards longtermism doesn’t make sense (i.e. it’s not zero-sum).
3. Redistributive commitments are only one part of the “foundational values”, and Toby and others in the longtermist camp are still motivated by the same underlying impartial utilitarianism, so pointing at less emphasis on redistribution is an unfair nitpick.
I think all of these are true, but I was pointing to #2 specifically.
On the first para, that doesn’t seem to me to be true of work on AI safety or biorisk, as I understand it.
On the second para, the first thing to say is that longtermists shouldn’t be the target of particular criticism on this score—almost no-one is wholly focused on improving the welfare of the global poor. If this decision by longtermists is racist then so is almost everyone else in the world.
Secondly, no I don’t think it counterfactually harms the global poor. That only works if you take a person-affecting view of people’s interests. If you count future people, then the shift is counterfactually very beneficial for the global poor and for both white and non-white people.
I don’t think it’s necessarily very good for the global poor as a changing group defined by their poverty, depending on how quickly global poverty declines. There’s also a big drop in the strength of evidence in this shift, so it depends on how skeptical you are.
Plus, person-affecting views (including asymmetric ones) or at least somewhat asymmetric views (e.g. prioritarianism) are not uncommon, and I would guess especially among those concerned with the global poor and inequality. Part of the complaint made by some is about ethical views that say extinction would be an astronomical loss and deserves overwhelming priority as a result, over all targeted anti-poverty work. This is a major part of the disagreement, not something to be quickly dismissed.
I disagree about at least some Biorisk, as the allocation of scarce resources in public health has distributive effects, and some work on pandemic preparedness has reduced focus for near-term campaigns on vaccinations. I suspect the same is true, to a lesser extent, in pushing people who might otherwise work on near-term ML bias to work on longer term concerns. But as this relates to your second point, and the point itself, I agree completely, and don’t think it’s reasonable to say it’s blameworthy or morally unacceptable, though as I argued, I think we should worry about the impacts.
But the last point confuses me. Even setting aside the question of person-affecting views, shifting efforts to help John can (by omission, at the very least) injure Sam. “The global poor” isn’t a uniform pool, and helping those who are part of “the global poor” in a century by, say, taxing someone now is a counterfactual harm for the person now. If you aggregate the way you prefer, this problem goes away, but there are certainly ethical views, even within utilitarianism, where this isn’t acceptable—for example, if the future benefit is discounted so heavily that it’s outweighed by the present harm.
On your first para, I was responding to this claim: “It also seems strange to defend longtermists as only being harmful in theory, since the vast majority of longtermism is theory, and relatively few actions have been taken. That is, almost all longtermist ideas so far have implications which are currently only hypothetical.” I said that most work on bio and AI was not just theory but was applied. I don’t think the things you say in the first para present any evidence against that claim, but rather they seem to grant my initial point.
I agree that there are some things in bio and AI that are applied—though the vast majority of the work in both areas is still fairly far from application. But my point, which granted your initial claim, was responding to “I don’t think it counterfactually harms the global poor.”
This is question-begging: it only counterfactually harms the poor on a person-affecting view of ethics, which longtermists reject.
I’m a longtermist and I don’t reject (asymmetric) person(-moment-)affecting views, at least not those that think necessary ≠ only present people. I would be very hard-pressed to give a clean formalization of necessary people though. I think it’s bad if effective altruists think longtermism can only be justified with astronomical waste-style arguments and not at all if someone has person-affecting intuitions. (Staying in a broadly utilitarian framework. There are, of course, also obligation-to-ancestor-type justifications for longtermism or similar.) The person-affecting part of me just pushes me in the direction of caring more about trajectory change than extinction risk.
Since I could only ever give very handwavey defenses of person-affecting views, and even handwavier explanations of my overall moral views: here’s a paper by someone who, AFAICT, is at least sympathetic to longtermism and discusses asymmetric person-affecting views. (I have to admit I never got around to reading the paper.) (Of course, writing a paper on an asymmetric person-affecting view doesn’t necessarily mean the author actually holds person-affecting views.)
Is that true?
Many current individuals will be worse off when resources don’t go to them (for instance, because funds are spent saving future lives) than when they do (for instance, when funds are focused on near-term utilitarian goals like poverty reduction). And if, as most of us expect, the world’s wealth will continue to grow, effectively all future people who are helped by existential risk reduction will not be what we’d now consider poor. You can defend this via the utilitarian calculus across all people, but that doesn’t change the distributive impact between groups.
Equally, many future people will be worse off than they would have been if we don’t reduce extinction risks. The claim is about the net total impact on non-white people.
Your definition of problematic injustice seems far too narrow, and I explicitly didn’t refer to race in the previous post. The example I gave was that the most disadvantaged people are in the present, and are further injured—not that non-white people (which under current definitions will describe approximately all of humanity in another half dozen generations) will be worse off.
On the second point, yes, I agree that there are some popular views on which we would discount or ignore future people. I just don’t think that they are plausible. If someone held a view which said that they only count the interests of white future people, I think it would be quite clear that this was bad for the interests of non-white people in a very important way. Therefore, if I ignore all future people, then I ignore all future non-white people, which is bad for their interests in a very important way.
As I said above in a different comment thread, it seems clear we’re talking past one another.
Yes, being racist would be racist, and no, that’s not the criticism. You said that “there are some popular views on which we would discount or ignore future people. I just don’t think that they are plausible.” And I think part of the issue is exactly this dismissiveness. As a close analogy, imagine someone said “there are some popular views where AI could be a risk to humans. I just don’t think that these are plausible,” and went on to spend money building ASI instead of engaging with the potential that they are wrong, or taking any action to investigate or hedge that possibility.
I don’t really understand your response. Most of the people who argue for a longtermist ethical standpoint have spent many, many years thinking about the possibility that they are wrong and arguing against person-affecting views during their philosophy degrees. I could talk to you for several weeks about the merits and demerits of such views and the published literature on them.
“Yes, being racist would be racist, and no, that’s not the criticism.” I don’t really understand your point here.
My point is that many people who disagree with the longtermist ethical viewpoint have also spent years thinking about the issues, and dismissing the views of the majority of philosophers, and of the vast, vast majority of people, as not plausible is itself one of the problems I tried to highlight in the original post when I said that a small group talking about how to fix everything should raise flags.
And my point about racism is that criticism of choices and priorities which have a potential to perpetuate existing structural disadvantages and inequity is not the same as calling someone racist.
The standard in the first para appears to be something like ‘you can never say that something is implausible if some philosophers believe it’. That seems like a pretty weird standard. Another way of saying something is implausible is just to say “I think it is probably false”.
Near-termists are also a small group talking about how to fix everything.
this is perhaps too meta, but on the second para, if that is what you meant, I don’t understand how it is a response to the comment your response was to.
I’m pointing out that you’re privileging your views over those of others—not “some philosophers,” but “most people.”
And unless you’re assuming a fairly strong version of moral realism, this isn’t a factual question, it’s a values question—so it’s strange to me to think that we should get to assume we’re correct despite being a small minority, without at least a far stronger argument that most people would agree with longtermism if properly presented—and I think Stefan Schubert’s recent work implies that is not at all clear.
Any time you take a stance on anything you are privileging your view over some other people’s. Your argument also applies to people working on animal welfare and on global poverty. In surveys, most people don’t even seem to care about saving more lives rather than fewer.
If we are going to go down the route of saying that what EAs do should be decided by the majority opinion of the current global population, then that would be the end of EA of any kind. As I understand it, your claim is that the total view is false (or we don’t have reason to act on it) because the vast majority of the world population do not believe in the total view. Is that right?
It is difficult not to come up with examples. In 1500, most people would have believed that violence against women and slavery were permissible. Would that have made you stop campaigning to bring an end to that? These are also values, after all
Also, the demographic criticism applies to EAs who are working on global development: people in that area also skew white and highly educated.
People who work on farm animal welfare are not focused on the global poor either, but this seems to me an extremely flimsy basis on which to call them racist.
Note: I did not call anyone racist, other than to note that there are groups holding views which explicitly embrace that label. But on review, you keep saying that this is about calling someone racist, whereas I’m talking about the unequal and systemic impacts of choices—and I think this is a serious confusion which is hampering our conversation.
Perhaps I have misunderstood, but I interpreted your post as saying we should take the two critiques of longtermism seriously. I think the quality of the critiques is extremely poor, and am trying to explain why.
I might have been unclear. As I said initially, I claim it’s good to publicly address concerns about “the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents”, and this is what the LARB piece actually discussed. And I think that slightly more investigation into the issue should have convinced the author that any concerns about continued embrace of eugenic ideas, or ignorance of the issues, were misplaced. I initially pointed out that specific claims about longtermism being similar to eugenics are “farcical.” More generally, I tried to point out in this post that many of the attacks are unserious or uninformed. As Scott pointed out in his essay, which this one quoted and applied to this situation, the criticisms aren’t new.
More serious attempts at dialog, like some of the criticisms in the LARB piece, are not bad-faith or unreasonable claims, even if they fail to be original. And I agree that “we cannot claim to take existential risk seriously — and meaningfully confront the grave threats to the future of human and nonhuman life on this planet — if we do not also confront the fact that our ideas about human extinction, including how human extinction might be prevented, have a dark history.” But I also think it’s obvious that others working on longtermism agree, so the criticism seems to be at best a weak-man argument. Unfortunately, I think we’ll need to wait another year or so for Will’s new book, which I understand has a far more complete discussion of this, much of which was written before either of these pieces were published.
Sorry to jump in the conversation, but Toby Ord has another book? Maybe you’re talking about Will MacAskill’s upcoming book on longtermism?
Right—fixed. Whoops!
It doesn’t make sense to think that you can flush racism etc. out of a system run by affluent white westerners by self reflection. Maybe that could highlight some points that need to be addressed, but it is sure to miss others that different perspectives would spot.
So we absolutely cannot claim “our multimillion dollar project isn’t racist because we can’t find anything racist about it.” Such a project, if it does not include a much more diverse population in its decision making, is bound to end up harming those unrepresented.
that also applies to people working on global development as well, and to pretty much all philanthropy. So, there is nothing special about longtermism on this score
Sure. So each one should be interested in outside feedback about whether it seems racist, or fails on other counts, and take it seriously when outsiders say it is a concern.
But you have presented this post as something that is specific to longtermism. Do you not think it would have been more informative/less misleading to say that this also applies to all social movements, including those working to improve health in Africa?
No, because no-one is really providing this specific bit of outside feedback to most of those groups. As the post says, there have been recent attacks *on longtermism*.
there are also attacks on all global development charity for being colonialist.
Also, you are giving more credit to the critiques than they deserve.
I second David’s comment: this reply doesn’t abide by the Forum’s norms. (Specifically “other behavior that interferes with good discourse”.)
Calling for someone to be removed from the community (as I think was appropriate in Phil’s case) isn’t the same as saying we should give the same treatment to their ideas. And that’s what you seem to be doing when you link “more credit than [the critiques] deserve” to “redressing personal slights” and “doesn’t believe what he is saying”.
If you think Phil’s vendetta or personal beliefs are relevant to the reasonableness of his critique, you should explain why — it’s not clear to me what the connection is.
I think his arguments fail as arguments, and would still fail as arguments even if he believed in them sincerely. If the arguments are solid, any hypothetical bad faith seems irrelevant.
Put another way, would “Phil believes this” be important evidence in favor of the critique? If not, why does “Phil doesn’t really believe this” serve as important counterevidence?
Without the connection between Phil’s personal life and his arguments, this comment seems like a personal attack unrelated to your point. And it’s an unfortunate contrast with your other comments, which reliably engaged with the critiques rather than the critic. (As well as a contrast with basically all the other stuff you’ve written on the Forum, which is consistently backed by lots of evidence.)
I would usually agree that we should play the ball and not the man, but I think the critiques would be mystifying unless you knew the background on what is actually driving him to write this stuff. I think it is relevant evidence that he doesn’t really believe what he is saying, because eg it should influence what you believe about how faithfully he reports what the people he criticises say. We could look into each specific claim that he makes and check how faithfully he quotes what people actually say. As it turns out, it is often not faithful and often very uncharitable. But if you know that someone’s motive is simply to trash people who have crossed him, that should update us about how much we should trust all of the claims they make. It’s like ignoring the fact that a journalist has fabricated quotes when examining what they write, and instead just focusing on all of their object-level claims each time they write anything.
I agree that knowing someone’s personal motives can help you judge the likelihood of unproven claims they make, and should make you suspicious of any chance they have to e.g. selectively quote someone. But some of the language I’ve seen used around Torres seems to imply “if he said it, we should just ignore it”, even in cases where he actually links to sources, cites published literature, etc.
Of course, it’s much more difficult to evaluate someone’s arguments when they’ve proven untrustworthy, so I’d give an evaluation of Phil’s claims lower priority than I would evaluations of other critics who don’t share his background (all else being equal). But I don’t want them to be thrown out entirely.
When Phil shares this material, I often see comments (on Twitter, Aeon, etc.) from people saying things like “yes, this is also how I feel” or “this experience you said someone had is similar to the experience I had”. You could argue that these people probably have false beliefs or biases of their own, but they don’t seem mystified, and they probably don’t share Phil’s personal background. He seems to represent a particular viewpoint/worldview that others also hold for non-vengeful reasons.
I understand that Phil Torres was banned from the forum, I think for good reason. But I don’t think that your reply here is acceptable given the norms on the forum for polite disagreement—especially because it’s both mean-spirited, and in parts simply incorrect.
That said, I am pointing to the fact that his claims are being taken seriously by others, as the second article shows, and yes, steelmanning his view to an extent that I think is reasonable—especially since certain of his critiques are both anticipated by, and have since been extensively discussed by, people in EA. Regardless of whether Phil believes them—and it’s clear that he does—the critiques aren’t a fringe position outside of EA, and beating up the strawman while ignoring the many, many people who agree seems at best disingenuous.
Finally, global development has spent a huge amount of time and effort addressing the reasonable criticisms of colonialism, especially given the incredible damage that such movements have caused in many instances. (Even though it’s been positive on net, that doesn’t excuse avoidable damage—which seems remarkably similar to what I’m concerned about here.) In any case, saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty, seems like a very, very strange defense.
I don’t see how it is either mean-spirited or incorrect. Which part is incorrect?
The context is crucial here because it illustrates that he is not arguing in good faith, which is quite clear to anyone who knows the background to this.
On your last paragraph
you said: “no-one is really providing this specific bit of outside feedback [that they risk racism] to most of those groups”.
I said this wasn’t true because people eg say that global health is colonialist all the time.
You then characterise me as “saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty”.
Obviously, this was not what I was doing. I was arguing against the initial thing that you said (which you have now conceded). This is now the second time this has happened in this conversation, so I think we should probably draw this to a close.
You said he “clearly doesn’t believe what he is saying.” That was the place that seems obviously over the line, mean-spirited, and incorrect. It was, of course, imputing motives, which is generally considered unacceptable. But more than that, you’re confused about what he’s saying, or you’re assuming that because he opposes some views longtermists hold, he must disagree with all of them. You need to be very, very careful about reading what is said closely when you’re making such bold and insulting claims. He does not say anywhere in the linked article that engineered pandemics and AGI are not challenges, nor, in other forums, has he changed his mind about them as risks—but he does say that X-risk folks ignoring climate change is worrying, and that in his view, it is critical. And that’s a view that many others in EA share—just not the longtermists who are almost exclusively focused on X-risks.
And his concerns about fanaticism are not exactly coming out of nowhere. The concern of fanaticism in longtermist thinking was brought up a half dozen times at the GPI conference earlier this week, and his concern about it seems far more strident, but is clearly understandable—even if you think, as I have said publicly and told him privately, that he’s misreading the most extreme philosophical arguments which have been proposed about longtermism as personal viewpoints held by people, rather than speculation.
to clarify, when I said he had applied for jobs at the organisations he criticises, I didn’t mean to be criticising him for that (I have also applied for jobs at many of those orgs and been rejected). My point was that it is a bit improbable that he has had such a genuine intellectual volte-face, given this fact.
In the second half of your comment, your analysis of the conversation, you claim that I’ve been doing something repeatedly. I think you have taken an excerpt and accidentally engaged in a motte-and-bailey—and given that the conversation took place over weeks, I assume that was because you didn’t go back and trace the entire thread. But I want to make this clearer, because I think my claims were misread.
Initially, you said of the criticism, “that also applies to people working on global development as well, and to pretty much all philanthropy.”
I then agreed that “each [area] should be interested in outside feedback about whether it seems racist, or fail on other counts.”
You replied that my criticism was “specific to longtermism… [but it] also applies to all social movements”
I responded that “no-one is really providing this specific bit of outside feedback to most of those groups.” (And I will note that your claims until here are about “all philanthropy” and “all social movements,” no longer referring to just global development.)
You said “there are also attacks on all global development charity for being colonialist.” (I disagree—there were such attacks, but mostly decades ago.)
I responded, “global development has spent a huge amount of time and effort addressing the reasonable criticisms of colonialism… saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty, seems like a very, very strange defense.”
So I said everyone receiving the criticism should take it seriously. You said everyone (motte) in philanthropy is criticised in this way. I said *most of those groups* are not. You replied that global development (bailey) was criticised. I agreed—but again, pointed out that that was quite a while ago, and they have addressed the issues, i.e. did what I said EA should do. So I admitted that your bailey was correct—that an example which is not “all social movements” or “most groups” was criticised, and did the thing I said EA should do. And I’ll point out that you never went back and addressed the motte you first claimed, that it is a universal fact. Finally, [I] “then characterise [you] as ‘saying that global development is also attacked, as if that means longtermists couldn’t be similarly guilty.’” And yes, that seems to encapsulate my point exactly.
It seems like this is a central point in David’s comment, but I don’t see it addressed in any of what follows. What exactly makes it morally okay for us to be the deciders?
It’s worth noting that in both US philanthropy and the international development field, there is currently a big push toward incorporating affected stakeholders and people with firsthand experience with the issue at hand directly into decision-making for exactly this reason. (See participatory grantmaking, the Equitable Evaluation Initiative, and the process that fed into the Sustainable Development Goals, e.g.) I recognize that longtermism is premised in part on representing the interests of moral patients who can’t represent themselves. But the question remains: what qualifies us to decide on their behalf? I think the resistance to longtermism in many quarters has much more to do with a suspicion that the answer to that question is “not much” than any explicit valuation of present people over future people.