Concerns/thoughts on international aid, longtermism, and philosophical notes from speaking with Larry Temkin.

Background and Intro to Larry Temkin’s thoughts

Background/Intro: The post I’ve been writing has turned out very long for a blog post, so I am splitting it into different parts.[1] In this section I link to my recent podcast with Larry Temkin and extract one example of his concerns about an over-commitment to expected utility and longtermism (small probabilities, expected value calculations and the like). It is useful to know (and you can hear more in the other parts of the podcast) that this is intertwined with Temkin’s arguments that transitivity (if A > B and B > C, then A > C?) may not hold for moral and social choices.
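An aside from me, not Temkin: his own arguments that transitivity can fail rest on spectrum and trade-off cases (discussed in the podcast and in Rethinking the Good), not on voting. Still, for readers new to the idea, a standard Condorcet-style sketch in Python gives a flavour of how individually sensible pairwise comparisons can fail to chain together transitively.

```python
# A standard Condorcet-style illustration (my sketch, not Temkin's argument) of how
# pairwise "better than" judgments can cycle rather than chain transitively.

# Three voters, each with a sensible ranking of three options (best to worst).
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"Majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: A beats B, B beats C, and C beats A -- a cycle,
# so the pairwise relation is not transitive.
```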


Background on Larry Temkin and his concerns: Larry Temkin provides a good-faith critique of certain philosophical foundations of EA (effective altruism), in particular with respect to global poverty initiatives and international aid, and to an extent longtermism and the use of expected utility calculations for moral choices. Temkin is short on answers but has ideas for further exploration and is suggestive of pluralistic answers. He has changed his mind on parts of the philosophical framework underpinning EA. I distill his ideas.

Short Temkin bio: Larry Temkin is a moral philosopher. He has major works on inequality (book: Inequality); on transitivity and social choices (if A > B and B > C, is A > C?; book: Rethinking the Good); and most recently on the philosophy of doing good, critiquing some aspects of effective altruism, longtermism, international aid and utilitarianism (book: Being Good in a World of Need). As of 2022, he was Distinguished Professor of Philosophy at Rutgers University. He was Derek Parfit’s first PhD student, and he supervised Nick Beckstead’s thesis.


Structure of the podcast: The podcast is in two parts. The second part focuses on effective altruism (EA) ideas. The first part looks at transitivity and other debates in philosophy through a pluralist lens. I am still reflecting on the conversation, from which I judge I learned a lot.[2] The whole conversation is three hours long, so please feel free to dip in and out of it, and if you are intrigued go and look at Larry’s original works (and other critiques of Larry). I provide some links at the end to his books and some commentary from others (Tyler Cowen, reviewers).

Link to podcast here: https://www.thendobetter.com/arts/2022/7/24/larry-temkin-transitivity-critiques-of-effective-altruism-international-aid-pluralism-podcast


This is one section. I hope to provide other blog posts as part of this series.

An over-commitment to expected utility can lead us off the rails

Temkin’s critique of longtermism’s use of expected utility, illustrated with a toy example (quoted below):

…a toy example involving some numbers. In doing this, I want to emphasize that the numbers I have chosen are purely for illustrative purposes. I am not claiming that they are realistic. Whether they are, or not, is irrelevant to my point.  

Suppose that we have some funds with which we want to do good. We have two ways of spending those funds: option one, Poverty Elimination; or option two, Asteroid Deflector.

Suppose that if we do Poverty Elimination, we will, with certainty, significantly improve the quality of life of one hundred million people in desperate need.

More particularly, we would transform their lives from ones that would be short, and filled with great misery, into ones that would be long, and very well off. So, for the sake of the example, let us say that if we don’t do Poverty Elimination, each person will live only 15 years, at level −50 (utiles, meaning that their lives would be significantly below the level at which life ceases to be worth living), while if we do Poverty Elimination, each person will live for 75 years at level 100 (utiles, meaning that their lives would be well worth living).

We can calculate the expected value of Poverty Elimination as 825 billion years of utility. ((50(utiles)*15(years)*100 million(people)) + (100(utiles)*75(years)*100 million(people)).) Undeniably, that is a lot of expected good.  

In the real world, of course, there may be all sorts of further consequences from lifting 100 million people out of poverty, and keeping them alive for 75 years, rather than having them all die before their fifteenth birthday. Many of these effects will be positive, others may be negative. However, for the purposes of keeping this example as clean, and clear, as possible, let us suppose that there are no other effects, positive or negative, from adopting Poverty Elimination, other than those described above. So, as stated, in this example, the total amount of expected value of Poverty Elimination is 825 billion years of utility.  

Next, suppose that scientists have calculated that in any given year, there is a one in 50 million chance that all life on Earth will be wiped out due to collision with a giant asteroid. This means that there is a one in 250,000 chance that life will be wiped out by an asteroid collision sometime in the next 200 years. If one spends one’s funds on Asteroid Deflector, there is only a one in a thousand chance that it will work. Is it worth spending money on Asteroid Deflector, which would only reduce the odds of human extinction due to an asteroid collision from one in 250,000 sometime in the next 200 years (a probability of 0.000004) to one in 250,250 sometime in the next 200 years (a probability of ~0.000003996)? That is, is spending the money on Asteroid Deflector worth increasing the odds of our surviving for the next 200 years by a mere four billionths (0.000000004)?

The answer to the preceding question depends on all sorts of factors that are difficult to accurately estimate. However, if we assume that the likelihood of our going extinct in the next 200 years due to an asteroid collision is not connected to the likelihood of our going extinct from other causes, and also assume, more controversially, that reducing the chance of extinction from any given type of extinction event by n percent reduces the total chance of extinction by n percent, then one might think about this issue along the following lines.  

It is reasonable to suppose that if we manage to survive the next 200 years, there will be an explosion of developments in knowledge, creativity, morality, politics, technology, medicine, and so on. This explosion will enable humans to live vastly longer lives and, as importantly, vastly better lives. It will also allow humans to explore other planets, and possibly other galaxies, and hence to live in vastly greater numbers. It will also allow humans to effectively anticipate and avoid future existential risks, so members of our species will be able to continue to exist, perhaps spread throughout the solar system or galaxies, for a very long time into the future. Given how bright the future might be for us as a species, it might seem extremely important to try to ensure that we manage to attain that future.

How important?  

Well, for the sake of the example, let us suppose that the following were true. If we manage to avoid destroying ourselves in the next 200 years, or being destroyed by a virus, asteroid, or some other cataclysmic disaster, then there is at least a 90 percent probability that the future will include at least 100 billion humans, living an average of 200 years each, at a very high quality of, say, 200 utiles throughout their lives, for a total of at least 500 million years. The expected value of that outcome would be at least 9,000 billion billion years of utility (0.9(the probability of the desirable future)*100 billion(people who would be living in that future at any given time)*200(utiles, the quality of each person’s life at each moment of their life)*500 million(years of existence such humans would live)). That is an extraordinary amount of utility which, by hypothesis, hangs in the balance, depending on whether we survive the next 200 years. So, even if funding Asteroid Deflector only increases the chances of our attaining that golden future by 0.000000004, that would still have an expected value of 36 million million years of utility, or 36,000 billion years of utility. That, of course, is considerably more than the expected value of Poverty Elimination, which we calculated at 825 billion people years of utility. In fact, given these figures, the expected value of Asteroid Deflector would be over 43 times greater than the expected value of Poverty Elimination. Therefore, in accordance with Expected Utility Theory, we should fund Asteroid Deflector.

Suppose, next, that further funds become available that we could use to help the one billion people with terrible life prospects. However, we also learn that there is a one in 20 chance that a crucial component in Asteroid Deflector will fail, and that if we fund Backup Machine One we can protect against this failure. By the lights of Expected Utility Theory, we should fund Backup Machine One, rather than Poverty Elimination, since the expected value of funding the former will be 1,800 billion years of utility (0.05 (the chance that we will need Backup Machine One)*36,000 billion years of utility(the amount of good that is at stake if Asteroid Deflector fails and we don’t have Backup Machine One)), while the expected utility of funding the latter will still “only” be 825 billion years of utility.  

Next, suppose that we once again find ourselves with additional resources that would enable us to fund Poverty Elimination. However, we now learn that there is a 50 percent chance that Backup Machine One will fail, and that we could build Backup Machine Two to protect against this. By the lights of Expected Utility Theory, we should then fund Backup Machine Two, rather than Poverty Elimination, since the expected value of funding the former will be 900 billion years of utility (0.5(the chance that we will need Backup Machine Two)*1,800 billion years of utility(the amount of good that is at stake if both Asteroid Deflector fails and Backup Machine One fail, and we don’t have Backup Machine Two)), while the expected utility of funding the latter will still “only” be 825 billion people years of utility.  

This toy example helps illustrate how considerations of vast numbers of possible future people, all of whom might live much better, longer, lives than anyone alive today, can utterly swamp the urgent claims of very large numbers of actual people who are suffering terribly. This can happen if one allows one’s judgments about which endeavors to support to be driven by the “do the most (expected) good” approach of Effective Altruism, and if one follows Expected Utility Theory in determining the expected good (or value, or utility) of one’s actions.  

Effective Altruists who believe that we should focus on existential risk may quibble with the details of my example, or the way that I have put some of my claims, but they should not, I think, object to the gist of my claims. To the contrary, they should insist that the sorts of considerations I have offered here are no objection to their views, but rather an explanation, and vindication, of their views. With all due respect, however—and I really do respect the people who advocate the position in question—I can’t, myself, accept such a view. I do believe, and have argued in print, that it is very important to try to ensure that high quality life exist as far into the future as possible. In addition, I readily accept that in thinking about such issues, there is much to be said for considering the expected value of one’s different alternatives. Even so, in my judgment, Expected Utility Theory is a tool, nothing more, which needs to be employed carefully. It can generate a mathematical truth about the relevant amounts of an artificial construct—expected utility—that we can attach to different options. But it cannot tell us how much weight, if any, to attach to that mathematical truth in deciding what we ought to do.

To be clear, and fair, real-world Effective Altruists who believe we should focus on existential risk are not arguing that we should be spending money on projects like Asteroid Deflector. Indeed, they would argue, as Nick Beckstead pointed out in correspondence, that Asteroid Deflector “may be among the least effective of existential risk interventions, and that the Effective Altruist community thinks that it’s possible to get thousands or millions of times better [expected] returns by focusing on nuclear weapons policy, artificial intelligence, and threats to liberal democracy.” Indeed, it is worth recalling, as previously noted in Section 1.7, that Will MacAskill believes that thoughtful efforts to reduce global catastrophic risks “have a better cost-effectiveness than organizations like AMF [Against Malaria Foundation, long rated as one of the top global aid charities by GiveWell and The Life You Can Save], even if we just consider the potential deaths of people over the next 100 years (emphasis added).” If true, this is important. However, it does not undermine the point of this example.

I am not disputing whether there is good reason to devote significant resources to reducing existential risks. There is. Nor am I disputing whether considerations of Expected Utility Theory have an important role to play in our thinking about how to distribute our resources. They do. However, none of this changes the fact that if we let our thinking about such issues be driven by Expected Utility Theory, rather than merely partly guided by it, then even after we have addressed all of the vastly more cost-effective approaches to reducing existential risk, we should still be spending money on projects like Asteroid Deflector, rather than Poverty Elimination, because the former has a greater expected utility than the latter. At this point, I believe, we have, in the name of Expected Utility Theory, effectively abandoned the needy, for the sake of a far-flung future filled with well-off individuals; a future whose contours may or may not look anything like we now hope it will, and whose realization may or may not turn on the decisions we now make.  

My own view is that adherence to Expected Utility Theory will drive us off the rails if, based on it, we choose to fund Asteroid Deflector rather than Poverty Elimination. Moreover, in my judgment, we go further off the rails if we later fund Backup Machine One, and still further off the rails if we subsequently fund Backup Machine Two. Faced with the choice of substantially helping one hundred million people who are desperately in need now, or supporting a one in a thousand chance to prevent an occurrence of something which would itself only have a one in 250,000 chance of occurring, so as to thereby ever-so-slightly increase the possibility of lots of distantly future people living incredibly great lives, I prioritize the former over the latter. And if I did postpone meeting the urgent needs of some of the world’s worst-off, to ever-so-slightly increase the possibility of a future, lengthy, Eden for our descendants, I certainly wouldn’t do this a second time, let alone a third time.  

This is not, I believe, because I fail to understand how the math works out. I simply fail to accept that what we ought to do, in such cases, is determined by the math. There is, as I argue throughout this book, so much more to being good in a world of need, than merely “doing the most expected good” that one can.


This ends the quoted section of this part.
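For readers who want to check the arithmetic in the quoted example, here is a minimal sketch in Python (mine, not Temkin’s; it simply recomputes his purely illustrative numbers step by step).

```python
# Reproducing the arithmetic of Temkin's toy example (illustrative numbers only).

# --- Poverty Elimination ---
people = 100_000_000                      # people helped, with certainty
# Without the programme: 15 years at -50 utiles; with it: 75 years at +100 utiles.
gain_per_person = (50 * 15) + (100 * 75)  # utile-years gained per person
ev_poverty = gain_per_person * people     # 825 billion utile-years
print(f"Poverty Elimination EV:    {ev_poverty:.3e}")            # ~8.25e11

# --- Asteroid risk over the next 200 years ---
p_extinction_200y = 200 * (1 / 50_000_000)               # ~1 in 250,000
p_deflector_works = 1 / 1_000
risk_reduction = p_extinction_200y * p_deflector_works   # ~4e-9, "four billionths"
print(f"Reduction in 200-year extinction risk: {risk_reduction:.2e}")

# --- Value of the 'golden future' and of Asteroid Deflector ---
ev_golden_future = 0.9 * 100_000_000_000 * 200 * 500_000_000  # 9e21, "9,000 billion billion"
ev_deflector = risk_reduction * ev_golden_future              # ~3.6e13, "36,000 billion"
print(f"Asteroid Deflector EV:     {ev_deflector:.3e}")
print(f"Deflector / Poverty ratio: {ev_deflector / ev_poverty:.1f}x")  # ~43.6x

# --- Backup machines ---
ev_backup_one = 0.05 * ev_deflector   # 1,800 billion utile-years
ev_backup_two = 0.5 * ev_backup_one   # 900 billion utile-years
print(f"Backup Machine One EV:     {ev_backup_one:.3e}")
print(f"Backup Machine Two EV:     {ev_backup_two:.3e}")
```

On these numbers, each successive backup machine still beats Poverty Elimination’s 825 billion, which is exactly the sequence of choices Temkin finds troubling.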

More overall thoughts are in the podcast, linked here: https://www.thendobetter.com/arts/2022/7/24/larry-temkin-transitivity-critiques-of-effective-altruism-international-aid-pluralism-podcast

  1. ^

    In a future post, I will add a personal view on which areas EA projects might explore (arts/culture, governance, systems; messy areas like politics, cf. SBF; institutions) and my pluralist view on impact, given that I donate 10% of my income a year and am sympathetic to many EA ideas and tools. I distinguish between “Singer-style utilitarian EA” and “EA as a project to increase (human) welfare”. The tentative actions might be: I would uprate co-ordination and systems work with respect to disaster relief (disaster relief is not normally considered effective by EA standards); I would caveat the use of Expected Utility a little more; I would caveat treating the goodness of outcomes as a simple additive function; I would uprate some systems work, such as governance and institutions; I would consider other human virtues, such as fairness and justice, as worth embracing to a degree; and I would uprate direct giving to poor people whom you know somewhat and who live in better-governed nations. More tentatively, I wonder about the place of art and education and their second-order impacts on humanity and welfare.

    Short personal reflection

    I’m mostly an EA outsider but have engaged deeply enough with parts of the EA community to feel partly aligned with some of its ideas (e.g. let’s think about how to do good, better). I considered the movement and its people important enough that I went to EAG and interviewed two people I viewed as having interesting perspectives (Nadia Asparouhova and Larry Temkin). Previously I had interviewed Alec Stapp and Leopold Aschenbrenner, both of whom have been influenced by EA ideas.

    I’ve concentrated on Larry Temkin’s ideas, in part because he has thought deeply about these topics for over 40 years, has offered a good-faith critique, and has written an in-depth book on the subject. Temkin has also changed his mind, which I view as important and noteworthy.


    To supplement this, there is a podcast in two parts. The second part concentrates on EA, but the first part is important for understanding why Temkin worries about the strong use of expected utility, and it adds detail on how consequentialists, non-consequentialists and pluralists might differ over a range of topics.

    I’ve been late to understanding how many types of thinking fall under the EA banner. There are different “wings” or “styles”. For newer readers I think understanding these differences is an important unlock. Perhaps, like a religion (and Leopold Aschenbrenner in passing characterises parts of EA as a possibly defective religion), it can cover many thinkers. I will mention three wings as I’ve come to understand them: (1) Peter Singer-style EA, rooted in utilitarian thinking (more recently “hedonic style”); (1a) Singer-style EA with higher regard for non-human welfare; (2) longtermist EA, with a high regard for the mass of future humans; (3) EA as a project to think about the question of how to (tentatively) improve [mostly human] welfare, effectively.

    A newcomer to EA may want to explore those wings as they come to read the critiques.

    Lastly, in this personal reflection: I’ve now come across a number of stories from EA or former EA people about how it has not worked out for them, whether from burnout, the high bars they set themselves, or other factors. There may, in part, be a challenge of the “hedonic paradox” here.

    My answer to this is “balance”. This is what Larry Temkin articulates in the last part of the second podcast, and in a way what Benjamin Todd alludes to in a 9 July 2022 tweet thread. Find (and refine) a balance, or a mix, that works for you, and don’t become so stressed or narrow as to become ineffective or, in essence, really unhappy. That’s not a major finding on the ideas of EA, but a personal synthesis from observing those who have left EA or become unhappy.

  2. ^

    I now know that much of this thinking and these critiques are discussed back and forth in academic philosophy circles, so they would not be a surprise to, e.g., Nick Beckstead and similar thinkers.