Thanks, Haydn, for writing this thoughtful post. I am glad that you (hopefully) found something from the syllabus useful and that you took the time to read and write about this essay.
I would love to write a longer post about Torres’ essay and engage in a fuller discussion of your points right away, but I’m afraid I wouldn’t get around to that for a while. So, as an unsatisfactory substitute, I will instead just highlight three parts of your post that I particularly agreed with, as well as two parts that I believe deserve further clarification or context.
A)
Torres suggests that longtermism is based on an ethical assumption of total utilitarianism (...) However, although total utilitarianism strongly supports longtermism, longtermism doesn’t need to be based on total utilitarianism.
I agree with this and think that any critique of longtermism’s moral foundations should engage seriously with the fact that many of its key proponents have written extensively about moral uncertainty and pluralism, and that this informs longtermist thinking considerably. I don’t think Torres’ essay does that.
B)
However, the more common longtermist policy proposal is differential technological development – to try to foster and speed up the development of risk-reducing (or more generally socially beneficial) technologies and to slow down the development of risk-increasing (or socially harmful) technologies.
Agreed, this seems like another important omission from the essay and one that is quite conspicuous given Bostrom’s prominent essay on the topic.
C)
Torres underplays the crucial changes Ord makes with his definition of existential risk as the “destruction of humanity’s potential” and the institution of the “Long Reflection” to decide what we should do with this potential. Long Reflection proponents specifically propose not engaging in transhumanist enhancement or substantial space settlement before the Long Reflection is completed.
As above, this seems like a critical omission.
D)
Torres implies that longtermism is committed to a view of the form that reducing risk from 0.001% to 0.0001% is morally equivalent to saving e.g. thousands of present day lives. (...)
However, longtermism does not have to be stated in such a way. The probabilities are unfortunately likely higher – for example Ord gives a 1⁄6 (~16%) probability of existential risk this century – and the reductions in risk are likely higher too. That is, with the right policies (e.g. robust arms control regimes) we could potentially reduce existential risk by 1-10%.
Unless I’m misunderstanding something, this section seems to conflate three distinct quantities:
The estimated marginal effect on existential risk of some action EAs could take.
The estimated absolute existential risk this century.
The estimated marginal effect on existential risk of some big policy change, e.g. arms control.
While (2) might indeed be as high as ~16%, and (3) may be as high as 1-10%, both of these quantities are very different from (1). Very rarely, if ever, do EAs have the option ‘spend $50M to achieve a robust arms control regime’; it’s much more likely to be ‘spend $50M to increase the likelihood of such a regime by 1-5%.’
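To make the distinction concrete, here is a back-of-the-envelope sketch. All numbers are illustrative, drawn only from the ranges mentioned in this thread; none are real estimates anyone has endorsed:

```python
# Back-of-the-envelope sketch of the three quantities distinguished above.
# All numbers are illustrative placeholders, not actual estimates.

absolute_risk = 1 / 6            # (2) Ord's ~1/6 figure for this century

# (3) marginal effect of the policy change itself actually happening,
# e.g. a robust arms control regime cutting total risk by an assumed 5%
reduction_if_regime = 0.05

# What a donor can usually buy is a bump in the *probability* of the
# regime, assumed here to be +3 percentage points for $50M
p_bump_per_50m = 0.03

# (1) expected marginal risk reduction from spending the $50M
marginal = p_bump_per_50m * reduction_if_regime

print(f"absolute risk this century: {absolute_risk:.1%}")  # 16.7%
print(f"expected reduction per $50M: {marginal:.4%}")      # 0.1500%
print(f"expected reduction per $10M: {marginal / 5:.4%}")  # 0.0300%
```

On these assumed numbers the per-$10M effect (0.03%) would clear the 0.001% bar comfortably; whether real grants achieve anything like the assumed probability bump is exactly what is in dispute.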
So, unless you think the tens of millions of “EA dollars” allocated towards longtermist causes reduce existential risk by >>0.001% per, say, ten million dollars spent, then it seems like you would indeed have to be committed to Torres’ formulation of the tiny-risk-reduction vs. current-lives-saved tradeoff.
Of course, you may believe that the marginal effects of many EA actions are, in fact, >>>0.001% risk reduction. And even if you don’t, the tradeoff may still be a reasonable ethical position to take.
I just think it’s important to recognise that that tradeoff does seem to be a part of the deal for x-risk-focused longtermism.
E)
Torres suggests that longtermism is committed to donating to the rich rather than to those in extreme poverty (or indeed animals). He further argues that this reinforces “racial subordination and maintain[s] a normalized White privilege.”
However, longtermism is not committed to donating (much less transferring wealth from poor countries) to present rich people.
For a discussion of this point, I think it is only fair to also include the quote from Nick Beckstead’s dissertation that Torres discusses in the relevant section. I include it in full below, for context:
“Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” (Beckstead, 2013, quoted in Torres, 2021)
Here, I should perhaps note that while I’ve read parts of Beckstead’s work, I don’t think I’ve read that particular section, and I would appreciate hearing if there is a crucial piece of context that’s missing. Either way, I think this quote deserves a fuller discussion – I will, for now, simply note that I certainly think the quote, as written, is very objectionable and potentially warrants indignation.
Again, thanks for writing the post, I look very much forward to the discussions in the comments!
A little historical background—one of my first introductions to proto-effective altruism was through corresponding with Nick Beckstead while he was a graduate student, around the time he would have been writing this dissertation. He was one of the first American members of Giving What We Can (which at the time was solely focused on global poverty), and at the time donated 10% of his graduate stipend to charities addressing global poverty. When I read this passage from his dissertation, I think of the context provided by his personal actions.
I think that “other things being equal” is doing a lot of work in the passage. I know that he was well aware of how much more cost-effective it is to save lives in poor economies than in rich ones, which is why he personally put his money toward global health.
Thanks for the context. I should note that I did not in any way intend to disparage Beckstead’s personal character or motivations, which I definitely assume to be both admirable and altruistic.
As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author’s personal actions.
Happy to have a go; the “in/out of context” is a large part of the problem here. (Note that I don’t think I agree with Beckstead’s argument for reasons given towards the end).
(1) The thesis (198 pages of it!) is about shaping the far future, and operates on staggering timescales. Some of it, like this quote, is written in the first person, which has the effect of putting it in a present-day context, but these are at heart philosophical arguments abstracted from time and space. This is a thing philosophers do.
If I were to apply the argument to the 12th-century world, I might claim that saving a person in what is now modern-day Turkey would have greater ripple effects than saving a person in war-ravaged Britain. The former was light-years ahead in science and technology, chock-full of incredible Muslim scholar-engineers like Al-Jazari (seriously, read about this guy). I might be wrong, of course; the future is unpredictable, and those ripples might be wiped out in the next century by a Mongol horde (as, for the most part, did happen); but wrong on different grounds.
And earlier in the thesis Beckstead provides a whole heap of caveats (in addition to ‘all other things being equal’), including that his argument explicitly does not address issues “such as whose responsibility that is, how much the current generation should be required to sacrifice for the sake of future generations, how shaping the far future stacks up against special obligations or issues of justice”; these are all “good questions” but out of scope.
If Beckstead further developed the ‘it is better to save lives in rich countries’ argument in the thesis, explicitly embedding it within the modern context and making practical recommendations that would exacerbate the legacy of harm of postcolonial inequality, then Torres might have a point. He does not. It’s a paragraph on one page of a 198 page PhD thesis. Reading the paragraph in the context of the overall thesis gives a very different impression than the deliberately leading context that Torres places the paragraph in.
(2) Now consider the further claims that Torres has repeatedly made: that this paragraph taints the entire field with white supremacy, and that any person or organisation who praised the thesis is endorsing white supremacy. This is an even more extreme version of the same set of moves. I have found nothing, nothing, anywhere in the EA or longtermist literature building on and progressing this argument.
(3) The same can be seen, but in a more extreme fashion, with the Mogensen paper. Again, an abstract philosophical argument. Here Mogensen (in a very simplified version) observes that over the three spatial dimensions (the world), total utilitarianism says you should spread your resources over all the people in that space. But if you introduce a fourth dimension, time, then the same axiology says you should spread your resources over space and time, and the majority of that obligation lies in the future. It’s an abstract philosophical argument. Torres reads in white supremacy, and invites the reader to read in white supremacy.
(4) The problem here is that no body of scholarship can realistically withstand this level of hostile scrutiny and leading analysis, and no field can realistically withstand an analysis in which one paragraph of a PhD thesis, taken out of context, is used to damn the entire field. I don’t think I personally agree with the argument on its own terms: it’s hard to prove definitively, but inequality has often been argued to be a driver of systemic instability, and if so, any intervention that increases inequality might contribute to negative ‘ripple effects’ regardless of which countries were rich and poor at a given time. And I think the paragraph itself could reasonably be characterised as ‘thoughtless’, given that the author is a white Western person writing in the 21st century, even if the argument is not explicitly set in that context.
However, the extreme criticism presented in Torres’s piece stands in stark contrast to the much more serious racism that goes unchallenged in so much of scholarship and modern life. A good-faith actor would pursue those in the first instance, rather than reading the worst possible ills into a paragraph of a PhD thesis. I’ve run out of time, but will illustrate this shortly with a prominent example of what I consider to be much more significant racism from Torres’s own work.
Here is an article by Phil Torres arguing that the rise of Islam represents a very significant and growing existential risk.
https://hplusmagazine.com/2015/11/17/to-survive-we-must-go-extinct-apocalyptic-terrorism-and-transhumanism/
I will quote a key paragraph:
“Consider the claim that there will be 2.76 billion Muslims by 2050. Now, 1% of this number equals 27.6 million people, roughly 26.2 million more than the number of military personnel on active duty in the US today. It follows that if even 1% of this figure were to hold “active apocalyptic” views, humanity could be in for a catastrophe like nothing we’ve ever experienced before.”
Firstly, this is nonsense. The proposition that 1% of Muslims would hold “active apocalyptic” views and be prepared to act on them is pure nonsense. And “if even 1%” suggests the author is lowballing.
Secondly, this is fear-mongering against one of the most feared and discriminated-against communities in the West, being written for a Western audience.
Thirdly, it utilises another standard racist trope, population replacement: look at the growing number of the scary ‘other’. They threaten to overrun the US’s good ol’ apple-pie armed forces.
This was not a paragraph in a thesis. It was a public article, intended to reach as wide an audience as possible, and it used to be prominently displayed on his now-defunct website. It was also written several years after Beckstead’s thesis.
I will say, to Torres’s credit, that his views on Islam have become more nuanced over time, and that I have found his recent articles on Islam less problematic. This is to be praised. And he has moved on from attacking Muslims to ‘critiquing’ right-wing Americans, the atheist community, and the EA community. This is at least punching sideways, rather than down.
But he has not subjected his own body of work, or other more harmful material, to anything like the level of critique to which he has subjected Beckstead, Mogensen, et al. I consider this deeply problematic in terms of scholarly responsibility.
Can you say a bit more about why the quote is objectionable? I can see why the conclusion ‘saving a life in a rich country is substantially more important than saving a life in a poor country’ would be objectionable. But it seems Beckstead is saying something more like ‘here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries’ (because he says ‘other things being equal’).
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, e.g. by empirical or historical evidence. Since Beckstead didn’t do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises for the argument are extremely speculative.
I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like ‘we should prioritise people like ourselves.’
Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
Yep that is what I’m saying. I think I don’t agree but thanks for explaining :)
The main issue I have with this quote is that it’s so divorced from the reality of how cost-effective it is to save lives in rich countries vs. poor countries (something that most EAs probably know already). I understand that this objection is addressed by the caveat ‘other things being equal’, but it seems important to note that it costs orders of magnitude more to save lives in rich countries, so unless Beckstead thinks the knock-on effects of saving lives in rich countries are sufficient to offset the cost differences, it would still follow that we should focus our money on saving lives in poor countries.
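To put rough numbers on that point, here is a sketch of the multiplier the knock-on effects would have to supply. Both cost figures are my own assumed orders of magnitude, not from the thread or from Beckstead:

```python
# Assumed orders of magnitude for cost per life saved; both numbers
# are illustrative placeholders, not sourced estimates.
cost_per_life_poor = 5_000        # effective global-health charity
cost_per_life_rich = 1_000_000    # typical rich-country intervention

# Lives saved per $1M of spending in each setting
lives_per_1m_poor = 1_000_000 / cost_per_life_poor   # 200.0
lives_per_1m_rich = 1_000_000 / cost_per_life_rich   # 1.0

# For rich-country spending to come out ahead, the ripple effects of
# each rich-country life saved would need to be worth at least this
# many times those of a poor-country life saved:
required_multiplier = lives_per_1m_poor / lives_per_1m_rich
print(required_multiplier)  # 200.0
```

On these assumptions, “other things” would need to supply roughly a 200x difference in per-life ripple effects before the two options were even equal in expectation.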
I don’t understand why thinking like that quote isn’t totally passé to EAs. At least to utilitarian EAs. If anyone’s allowed to think hypothetically (“divorced from the reality”), I would think it would be a philosophy grad student writing a dissertation.
I think there should be strong norms against making arguments that justify shifting resources from the least well-off people to the best-off people in the world. These types of ideas have been used by people in power to justify global inequality.
In 1991, Larry Summers, then the chief economist at the World Bank, sent a memo arguing that pollution should be pushed to poorer places because it’s more economically efficient. Around the same time, Texaco was leaving open pools of carcinogenic substances all over the Ecuadorian rainforest, which contributed to elevated cancer rates in the local population. There were ways to safely dispose of the toxic waste produced by oil drilling, but they weren’t employed because the lives of indigenous Ecuadorian people weren’t sufficiently valued by Texaco.
If Beckstead had added a parenthetical like “(However, it’s typically many orders of magnitude cheaper to save lives in poor countries than in rich countries),” I wouldn’t take the same issue with the quote.
I think it’s important for EA to promote high decoupling in intellectual spaces. You also have to consider that this is a philosophy dissertation, which is an almost maximally decoupling space.
Again, Beckstead could have made the exact same point while offering my parenthetical. It would have communicated the same idea while also acknowledging the real world context. I’m not opposed to decoupling or thought experiments to help clarify our positions on things.
Yes, I think that Summers was wrong. Extending his logic, companies should take even fewer steps to mitigate pollution in poor countries than they do in rich ones, because the economic cost of the resulting harm is lower in poor countries, and because not mitigating is cheaper and therefore more economically efficient. He even says in the memo that moral reasons and social concerns could be invoked to oppose his line of reasoning, which seems relevant to people who claim to want to do good in the world, not just maximize a narrow understanding of economic productivity.
What that can look like in practice is what Texaco did in Ecuador. I’m not claiming a direct causal link between the Summers memo and Texaco’s actions. I’m simply saying that when intellectual elites argue that it’s okay to pollute more in poor countries, we shouldn’t be surprised when companies do so.
I just wanted to echo your sentiments in the last part of your comment re: Beckstead’s quote about the value of saving lives in the developed world. Having briefly looked at where this quote is situated in Beckstead’s PhD thesis (which, judging by the parts I’ve previously read, is excellent), the context doesn’t significantly alter how this quote ought to be construed.
I think this is at the very least an eyebrow-raising claim, and I don’t think Torres is too far off the mark to think that the label of white supremacism, at least in the “scholarly” sense of the term, could apply here. Though it’s vital to note that this is in no way to insinuate that Beckstead is a white supremacist, i.e., someone psychologically motivated by white supremacist ideas. If Torres has insinuated this elsewhere, then that’s another matter.
It also needs noting that, contra Torres, longtermism simpliciter is not committed to the view espoused in the Beckstead quote. This view falls out of some particular commitments which give rise to longtermism (e.g. total utilitarianism). The OP does a good job of pointing out that there are other “routes” to longtermism, which Ord articulates, and I think these approaches could plausibly avoid the implication that we ought to prioritise members of the developed world over the contemporaneous global poor.
I’m oblivious to Torres’ history with various EAs, so I’m anxious about stepping into what seems like quite a charged debate here (especially with my first forum post), but I think it’s worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they’d receive (unfairly or not!) - so it’s worth considering how plausible these charges are, and how longtermists might respond. The OP develops some promising initial responses, but I also think a longer discussion would be beneficial.
Rational discourse becomes very difficult when a position is characterized by a term with an extremely negative connotation in everyday contexts—and one which, justifiably, arouses strong emotions—on the grounds that the term is being used in a “technical” sense whose meaning or even existence remains unknown to the vast majority of the population, including many readers of this forum. For the sake of both clarity and fairness to the authors whose views are being discussed, I strongly suggest tabooing this term.
>but I think it’s worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they’d receive (unfairly or not!) - so it’s worth considering how plausible these charges are, and how longtermists might respond.
This is a good point, and worth being mindful of as longtermism becomes more mainstream/widespread.
Thanks, Haydn, for writing this thoughtful post. I am glad that you (hopefully) found something from the syllabus useful and that you took the time to read and write about this essay.
I would love to write a longer post about Torres’ essay and engage in a fuller discussion of your points right away, but I’m afraid I wouldn’t get around to that for a while. So, as an unsatisfactory substitute, I will instead just highlight three parts of your post that I particularly agreed with, as well as two parts that I believe deserve further clarification or context.
A)
I agree with this and think that any critique of longtermism’s moral foundations should engage seriously with the fact many of its key proponents have written extensively about moral uncertainty and pluralism, and that this informs longtermist thinking considerably. I don’t think Torres’ essay does that.
B)
Agreed, this seems like another important omission from the essay and one that is quite conspicuous given Bostrom’s prominent essay on the topic.
C)
As above, this seems like a critical omission
D)
Unless I’m misunderstanding something, this section seems to conflate three distinct quantities:
The estimated marginal effect on existential risk of some action EAs could take.
The estimated absolute existential risk this century.
The estimated marginal effect on existential risk of some big policy change, e.g. arms control.
While (2) might indeed be as high as ~16%, and (3) may be as high as 1-10%, both of these quantities are very different from (1). Very rarely, if ever, do EAs have the option ‘spend $50M to achieve a robust arms control regime’; it’s much more likely to be ‘spend $50M to increase the likelihood of such a regime by 1-5%.’
So, unless you think the tens of millions of “EA dollars” allocated towards longtermist causes reduce existential risk by >>0.001% per, say, ten million dollars spent, then it seems like you would indeed have to be committed to Torres’ formulation of the tiny-risk-reduction vs. current-lives-saved tradeoff.
Of course, you may believe that the marginal effects of many EA actions are, in fact, >>>0.001% risk reduction. And even if you don’t, the tradeoff may still be a reasonable ethical position to take.
I just think it’s important to recognise that that tradeoff does seem to be a part of the deal for x-risk-focused longtermism.
E)
For a discussion of this point, I think it is only fair to also include the quote from Nick Beckstead’s dissertation that Torres discusses in the relevant section. I include it in full below, for context:
Here, I should perhaps note that while I’ve read parts of Beckstead’s work, I don’t think I’ve read that particular section, and I would appreciate hearing if there is a crucial piece of context that’s missing. Either way, I think this quote deserves a fuller discussion – I will, for now, simply note that I certainly think the quote, as written, is very objectionable and potentially warrants indignation.
Again, thanks for writing the post, I look very much forward to the discussions in the comments!
A little historical background—one of my first introductions to proto-effective altruism was through corresponding with Nick Beckstead while he was a graduate student, around the time he would have been writing this dissertation. He was one of the first American members of Giving What We Can (which at the time was solely focused on global poverty), and at the time donated 10% of his graduate stipend to charities addressing global poverty. When I read this passage from his dissertation, I think of the context provided by his personal actions.
I think that “other things being equal” is doing a lot of work in the passage. I know that he was well aware of how much more cost-effective it is to save lives in poor economies than in rich ones, which is why he personally put his money toward global health.
Thanks for the context. I should note that I did not in any way intend to disparage Beckstead’s personal character or motivations, which I definitely assume to be both admirable and altruistic.
As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author’s personal actions.
Happy to have a go; the “in/out of context” is a large part of the problem here. (Note that I don’t think I agree with Beckstead’s argument for reasons given towards the end).
(1) The thesis (198 pages of it!) is about shaping the far future, and operates on staggering timescales. Some of it like this quote is written in the first person, which has the effect of putting it in the present-day context, but these are at their heart philosophical arguments abstracted from time and space. This is a thing philosophers do.
If I were to apply the argument to the 12th century world, I might claim that saving a person in what is now modern day Turkey would have greater ripple effects than saving a person in war-ravaged Britain. The former was lightyears further ahead in science and technology, chock full of incredible muslim scholar-engineers like Al Jazari (seriously; read about this guy). I might be wrong of course; the future is unpredictable and these ripples might be wiped out in the next century by a Mongol Horde (as for the most part did happen); but wrong on different grounds.
And earlier in the thesis Beckstead provides a whole heap of caveats (in addition to ‘all other things being equal’, including that his argument explicitly does not address issues “such as whose responsibility that is, how much the current generation should be required to sacrifice for the sake of future generations, how shaping the far future stacks up against special obligations or issues of justice”; these are all “good questions” but out of scope.)
If Beckstead further developed the ‘it is better to save lives in rich countries’ argument in the thesis, explicitly embedding it within the modern context and making practical recommendations that would exacerbate the legacy of harm of postcolonial inequality, then Torres might have a point. He does not. It’s a paragraph on one page of a 198 page PhD thesis. Reading the paragraph in the context of the overall thesis gives a very different impression than the deliberately leading context that Torres places the paragraph in.
(2) Now consider the further claims that Torres has repeatedly made—that this paragraph taints the entire field in white supremacy; and that any person or organisation who praised the thesis is endorsing white supremacy. This is an even more extreme version of the same set of moves. I have found nothing—nothing -anywhere in the EA or longtermist literature building on and progressing this argument.
(3) The same can be seen, but in a more extreme fashion, for the Mogensen paper. Again, an abstract philosophical argument. Here Mogensen (in a very simplified version) observes that over three dimensions—the world—total utilitarianism says you should spread your resources over all people in that space. But if you introduce a 4th dimension—time, then the same axiology says you should spread your resources over space and time, and the majority of that obligation lies in the future. It’s an abstract philosophical argument. Torres reads in white supremacy, and invites the reader to read in white supremacy.
(4) The problem here is that no body of scholarship can realistically withstand this level of hostile scrutiny and leading analysis. And no field can realistically withstand the level of hostile analysis where one paragraph in a PhD thesis taken out of context is used to damn an entire field. I don’t think I personally agree with the argument on its own terms—it’s hard to prove definitively but I would have a concern that inequality has often been argued to be a driver of systemic instability, and that if so, any intervention that increases inequality might contribute to negative ‘ripple effects’ regardless of what countries were rich and poor at a given time. And I think the paragraph itself could reasonably be characterised as ‘thoughtless’, given the author is a white western person writing in C21, even if the argument is not explicitly in this context.
However the extreme criticism presented in Torres’s piece stands in stark contrast to the much more serious racism that goes unchallenged in so much of scholarship and modern life. Any good-faith actor will in the first instance pursue these, rather than reading the worst ills possible into a paragraph of a PhD thesis. I’ve run out of time, but will illustrate this shortly with a prominent example of what I consider to be much more significant racism from Torres’s own work.
Here is an article by Phil Torres arguing that the rise of Islam represents a very significant and growing existential risk.
https://hplusmagazine.com/2015/11/17/to-survive-we-must-go-extinct-apocalyptic-terrorism-and-transhumanism/
I will quote a key paragraph:
“Consider the claim that there will be 2.76 billion Muslims by 2050. Now, 1% of this number equals 27.6 million people, roughly 26.2 million more than the number of military personnel on active duty in the US today. It follows that if even 1% of this figure were to hold “active apocalyptic” views, humanity could be in for a catastrophe like nothing we’ve ever experienced before.”
Firstly, this is nonsense. The proposition that 1% of Muslims would hold “active apocalyptic” views and be prepared to act on it is pure nonsense. And “if even 1%” suggests this is the author lowballing.
Secondly, this is fear-mongering against one of the most feared and discriminated-against communities in the West, being written for a Western audience.
Thirdly, it utilises another standard racism trope, population replacement—look at the growing number of scary ‘other’. They threaten to over-run the US’s good ’ol apple pie armed forces.
This was not a paragraph in a thesis. It was a public article, intended to reach as wide an audience as possible. It used to be prominently displayed on his now-defunct website. The article above was written several years more recently than Beckstead’s thesis.
I will say, to Torres’s credit, that his views on Islam have become more nuanced over time, and that I have found his recent articles on Islam less problematic. This is to be praised. And he has moved on from attacking Muslims to ‘critiquing’ right-wing Americans, the Atheist community, and the EA community. This is at least punching sidewards, rather than down.
But he has not subject his own body of work, or other more harmful materials, to anything like the level of critique that he has subjected Beckstead, Mogensen etc al. I consider this deeply problematic in terms of scholarly responsibility.
Understood!
Can you say a bit more about why the quote is objectionable? I can see why the conclusion ‘saving a life in a rich country is substantially more important than saving a life in a poor country’ would be objectionable. But it seems Beckstead is saying something more like ‘here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries’ (because he says ‘other things being equal’).
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, eg by empirical or historical evidence. Since Beckstead didn’t do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises for the argument are extremely speculative.
I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like ‘we should prioritise people like ourselves.’
Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.
Yep that is what I’m saying. I think I don’t agree but thanks for explaining :)
The main issue I have with this quote is that it’s so divorced from the reality of how cost effective it is to save lives in rich countries vs. poor countries (something that most EAs probably know already). I understand that this objection is addressed by the caveat ‘other things being equal’, but it seems important to note that it costs orders of magnitude more to save lives in rich countries, so unless Beckstead thinks the knock-on effects of saving lives in rich countries are sufficient to offset the cost differences, it would still follow that we should focus our money on saving lives in poor countries.
I don’t understand why thinking like that quote isn’t totally passé to EAs – at least to utilitarian EAs. If anyone is allowed to think hypothetically (“divorced from the reality”), I would think it would be a philosophy grad student writing a dissertation.
I think there should be strong norms against making arguments that justify shifting resources from the least well-off people to the best-off people in the world. These types of ideas have been used by people in power to justify global inequality.
In 1991, Larry Summers, then the chief economist at the World Bank, sent a memo arguing that pollution should be pushed to poorer places because it’s more economically efficient. Around the same time, Texaco was leaving open pools of carcinogenic substances all over the Ecuadorian rainforest, which contributed to elevated cancer rates in the local population. There were ways to safely dispose of the toxic waste produced by oil drilling, but they weren’t employed because the lives of indigenous Ecuadorian people weren’t sufficiently valued by Texaco.
If Beckstead had added a parenthetical like “(However, it’s typically many orders of magnitude cheaper to save lives in poor countries than in rich countries),” I wouldn’t take the same issue with the quote.
I think it’s important for EA to promote high decoupling in intellectual spaces. You also have to consider that this is a philosophy dissertation, which is an almost maximally decoupling space.
Again, Beckstead could have made the exact same point while offering my parenthetical. It would have communicated the same idea while also acknowledging the real world context. I’m not opposed to decoupling or thought experiments to help clarify our positions on things.
Are you implying that Larry Summers was wrong or that Texaco’s actions were somehow his fault?
Yes, I think that Summers was wrong. Extending his logic, companies should take even fewer steps to mitigate pollution in industrial practices in poor countries than they do in rich countries, because the economic costs of pollution are lower there and because not mitigating it is probably cheaper and therefore more economically efficient. He even says in the memo that moral reasons and social concerns could be invoked to oppose his line of reasoning – which seems relevant to people who claim to want to do good in the world, not just maximize a narrow understanding of economic productivity.
What that can look like in practice is what Texaco did in Ecuador. I’m not claiming a direct causal link between the Summers memo and Texaco’s actions. I’m simply saying that when intellectual elites argue that it’s okay to pollute more in poor countries, we shouldn’t be surprised when companies act accordingly.
I just wanted to echo your sentiments in the last part of your comment re: Beckstead’s quote about the value of saving lives in the developed world. Having briefly looked at where this quote is situated in Beckstead’s PhD thesis (which, judging by the parts I’ve previously read, is excellent), the context doesn’t significantly alter how this quote ought to be construed.
I think this is at the very least an eyebrow-raising claim, and I don’t think Torres is too far off the mark to think that the label of white supremacism, at least in the “scholarly” sense of the term, could apply here. Though it’s vital to note that this is in no way to insinuate that Beckstead is a white supremacist, i.e., someone psychologically motivated by white supremacist ideas. If Torres has insinuated this elsewhere, then that’s another matter.
It also needs noting that, contra Torres, longtermism simpliciter is not committed to the view espoused in the Beckstead quote. This view falls out of some particular commitments which give rise to longtermism (e.g. total utilitarianism). The OP does a good job of pointing out that there are other “routes” to longtermism, which Ord articulates, and I think these approaches could plausibly avoid the implication that we ought to prioritise members of the developed world over the contemporaneous global poor.
I’m oblivious to Torres’ history with various EAs, so I’m anxious about stepping into what seems like quite a charged debate here (especially with my first forum post), but I think it’s worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they’d receive (unfairly or not!) - so it’s worth considering how plausible these charges are, and how longtermists might respond. The OP develops some promising initial responses, but I also think a longer discussion would be beneficial.
Rational discourse becomes very difficult when a position is characterized by a term with an extremely negative connotation in everyday contexts—and one which, justifiably, arouses strong emotions—on the grounds that the term is being used in a “technical” sense whose meaning or even existence remains unknown to the vast majority of the population, including many readers of this forum. For the sake of both clarity and fairness to the authors whose views are being discussed, I strongly suggest tabooing this term.
>but I think it’s worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they’d receive (unfairly or not!) - so it’s worth considering how plausible these charges are, and how longtermists might respond.
This is a good point, and worth being mindful of as longtermism becomes more mainstream/widespread.