Blog update: Reflective altruism

1. Introduction

My blog, Reflective Altruism, aims to use academic research to drive positive change within and around the effective altruism movement. Part of that mission involves engagement with the effective altruism community. For this reason, I try to give periodic updates on blog content and future directions (previous updates: here and here).

In today’s post, I want to say a bit about new content published in 2024 (Sections 2-3) and give an overview of other content published so far (Section 4). I’ll also say a bit about upcoming content (Section 5) as well as my broader academic work (Section 6) and talks (Section 7) related to longtermism. Section 8 concludes with a few notes about other changes to the blog.

I would be keen to hear reactions to existing content or suggestions for new content. Thanks for reading.

2. New series this year

I’ve begun five new series since last December.

  1. Against the singularity hypothesis: One of the most prominent arguments for existential risk from artificial agents rests on the singularity hypothesis. The singularity hypothesis holds, roughly, that self-improving artificial agents will grow at an accelerating rate until they are orders of magnitude more intelligent than the average human. I think that the singularity hypothesis is not on as firm ground as many advocates believe. My paper, “Against the singularity hypothesis,” makes the case for this conclusion. I’ve written a six-part series, Against the singularity hypothesis, summarizing this paper. Part 1 introduces the singularity hypothesis. Part 2 and Part 3 together give five preliminary reasons for doubt. The next two posts examine defenses of the singularity hypothesis by Dave Chalmers (Part 4) and Nick Bostrom (Part 5). Part 6 draws lessons from this discussion.

  2. Harms: Existential risk mitigation efforts have important benefits but also identifiable harms. This series discusses some of the most important harms of existential risk mitigation efforts. Part 1 discusses distraction from other pressing issues. Part 2 discusses surveillance. Part 3 discusses delayed technological development. Part 4 discusses inequality.

  3. Human biodiversity: Human biodiversity (HBD) is the latest iteration of modern race science. This series discusses the impact of HBD on effective altruism and adjacent communities, as well as the harms done by debating and propounding race science. Part 1 introduces the series. Part 2 focuses on events at Manifest 2023 and Manifest 2024. Part 3 discusses Richard Hanania. Part 4 discusses Scott Alexander.

  4. Papers I learned from: This series highlights papers that have informed my own thinking and draws attention to what might follow from them. Part 1 discusses a paper on temporal discounting by Harry Lloyd. Part 2 discusses a paper by Richard Pettigrew drawing out a surprising consequence for longtermism of some leading approaches to risk-averse decision-making. Part 3 discusses a reply to Pettigrew by Nikhil Venkatesh and Kacper Kowalczyk.

  5. The scope of longtermism: Longtermists hold that in a large class of decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which this is true? This series discusses my paper on the scope of longtermism. I argue that the scope of longtermism may be narrower than many longtermists suppose. Part 1 introduces the series. Part 2 introduces one concern: rapid diminution. Part 3 discusses a second concern: washing out. The full paper contains a third concern and further discussion, both of which will be reflected in later posts in this series.

3. Continuations of old series

In addition to these new series, I have also added to three existing series this year.

  1. Billionaire philanthropy: Effective altruism relies increasingly on a few billionaire donors to sustain its operations. In this series, I ask what drives billionaire philanthropists, how they are taxed and regulated, and what sorts of influence they should be allowed to wield within a democratic society. An initial series of posts from late 2022 to early 2023 discussed the relationship between philanthropy and democracy (Part 2), patient philanthropy (Part 3), philanthropic motivations (Part 4), sources of wealth (Part 5), and extravagant spending (Part 6). After a long hiatus, I recently added the first of two posts on the role of donor discretion (Part 7).

  2. Epistemics: Effective altruists use the term ‘epistemics’ to describe practices that shape knowledge, belief and opinion within a community. This series focuses on areas in which community epistemics could be productively improved. This year, I added new posts discussing the role of legitimate authority within the effective altruism movement and a distinction between two types of decoupling.

  3. Exaggerating the risks: One of the core series of this blog, Exaggerating the risks, argues that many existential risk claims have been exaggerated. Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk. Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI. Parts 9-17 (sub-series: “Biorisk”) looked at biorisk. Since last December, I have rounded out the discussion of biorisk by responding to arguments by Toby Ord (Part 13) and Will MacAskill (Part 14), as well as to claimed biorisks from LLMs (Part 15 and Part 16). I concluded the sub-series and drew lessons in Part 17.

4. Additional series

Not every series was expanded in 2024. In two cases, this happened because the series was complete.

  1. Existential risk pessimism and the time of perils: This series is based on my paper “Existential risk pessimism and the time of perils” (published here; note title change). The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension. Part 1 introduces the tension. Part 2 reviews failed solutions. Part 3 reviews a better solution: the Time of Perils hypothesis. Parts 4-6 review arguments for the Time of Perils hypothesis and argue that they do not succeed. Part 4 discusses space settlement. Part 5 discusses the existential risk Kuznets curve. Part 6 discusses wisdom growth. Part 7 discusses an application to the moral mathematics of existential risk, pursued further in my series on mistakes in the moral mathematics of existential risk. Part 8 discusses further implications. Part 9 discusses objections and replies. Special thanks are due to Toby Ord, who introduced a substantial fraction of the models used in this paper, as well as to Tom Adamczewski, who wrongly refuses credit for advancing these models.

  2. Mistakes in the moral mathematics of existential risk: This series is based on my paper “Mistakes in the moral mathematics of existential risk”. The paper discusses three mistakes in the way that existential risk mitigation efforts are often valued. Correcting these mistakes reduces the expected value of existential risk mitigation efforts and also reveals important areas for future work. Part 1 discusses the first mistake: confusing cumulative existential risk with risk during a smaller time period. Part 2 discusses a second mistake, ignoring background risk, raised in my earlier paper on existential risk pessimism. Part 3 discusses the final mistake: ignoring population dynamics. Part 4 generalizes this discussion to more optimistic assumptions about population dynamics. Part 5 discusses implications.

Several other series are on an extended pause.

  1. Academics review What we owe the future: In this series, I draw lessons from some of my favorite academic reviews of What we owe the future. Part 1 discusses Kieran Setiya’s review, focusing on the intuition of neutrality in population ethics. Part 2 discusses Richard Chappell’s review, which found many things to like in the book. Part 3 discusses Regina Rini’s review, focusing on the demandingness of longtermism, cluelessness, and the inscrutability of existential risk. I think that this series will remain paused for two reasons. First, several years have passed since the publication of What we owe the future. Second, I have not been overly impressed with the quality of many recent reviews.

  2. Belonging: This series discusses questions of inclusion and belonging within and around the effective altruism movement, with a focus on identifying avenues for positive and lasting change. Part 1, Part 2 and Part 3 discuss the Bostrom email and subsequent controversy. Part 4 discusses the TIME magazine investigation into sexual harassment and abuse within the effective altruism movement. I was very happy to see that this series did not grow in 2024, which was in many ways a calmer year than 2023. I hope that fewer posts will be necessary in this series going forward.

  3. The good it promises: This series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume. Part 1 introduces the book. Part 2 discusses Simone de Lima’s work on colonialism in vegan advocacy. Part 3 discusses Carol J. Adams’ work on lessons from feminist care ethics. Part 4 discusses Lori Gruen’s work on systematic change. Part 5 discusses work by Andrew deCoriolis and colleagues on strategies for animal advocacy. Part 6 and Part 7 discuss Alice Crary’s critique of effective altruism. I think that I have probably written all that I care to write about this book, though I may continue this series if there is interest in examining any specific chapters that have not yet been discussed.

5. Upcoming content

In 2025, I hope to add to existing series as well as to introduce several new series. Here are three series that I would like to introduce, if possible, in 2025:

  1. Beyond longtermism: When I came to Oxford in 2020, it was widely thought that a traditional package of views including consequentialism, totalism, and fanaticism led almost inevitably to longtermism. The primary aim of my research on longtermism has been to show how someone who likes science, mathematics, decision theory, and the elements of a traditional consequentialist package may nonetheless not be convinced by longtermism. I am writing a book, Beyond longtermism, to weave together a series of challenges that may jointly amount to a case against longtermism compelling even to those with normative views similar to my own. I hope to start a blog series on this book in 2025, with the caveat that the series may need to be delayed to appease potential publishers.

  2. Getting it right: There are many things I would like to change about the effective altruism movement. But there are also important things that effective altruists get right. I spoke briefly in 2022 about ten things that I think the effective altruism movement gets right. I hope to develop many of these points at post length next year.

  3. What power-seeking theorems do not show: A leading argument for existential risk from artificial intelligence is the argument from power-seeking. This argument claims that a wide variety of artificial agents would find power conducive to their goals and hence would pursue it, leading to the permanent and existentially catastrophic disempowerment of humanity. A spate of recent power-seeking theorems aims to show formally that a wide variety of artificial agents would seek power in this way. I don’t think that these theorems succeed. My paper, “What power-seeking theorems do not show,” explains why I am not convinced. This series would discuss the paper together with some content, such as coverage of an early power-seeking theorem out of MIRI, that will likely be cut by journal referees.

I also hope to add to existing series in 2025. Here are some examples of the additions that I hope to make:

  1. Harms: I hope to continue my discussion of potential harms from existential risk mitigation efforts. Potential harms that I would like to discuss include regulatory capture and opportunity cost.

  2. Human biodiversity: There is a lot more to cover in this series. My plan is to continue by discussing rationalist-adjacent venues, focusing for several posts on the communities surrounding Astral Codex Ten and LessWrong. Then I will discuss the role of HBD in more traditional effective altruist venues, such as the EA Forum. Thankfully, HBD plays a smaller role in many corners of the effective altruist community than it does in the rationalist community. However, I think it is worth noting places where HBD still plays an important role.

  3. Papers I learned from: I would like to continue this series by allowing the authors of papers to speak for themselves, as in Part 3 of the series. I have tentatively secured a guest post by Dmitri Gallow on his paper about instrumental convergence. After that, I have no firm plans, though I am open to suggestions.

  4. The scope of longtermism: Parts 1-3 of this series have been written. These posts cover approximately the first half of my paper. I hope to wrap up the second half of this series in 2025.

  5. Billionaire philanthropy: At the very least, I hope to round out my discussion of donor discretion from Part 7. I may write further posts in this series, but it is not my highest priority at the moment.

  6. Epistemics: The next post in this series will discuss the practice of ironic authenticity, in which authors use irony to convey politically sensitive messages that they likely endorse while preserving a degree of plausible deniability. After that, I hope to write something about the declining role of global priorities research in many corners of the effective altruism movement.

  7. Exaggerating the risks: I certainly hope to add many posts to this series in 2025. I am not entirely sure which posts I would like to add. The natural continuation of this series would be a sub-series on AI risk. However, my existing academic papers cover most of the topics about AI risk on which I am comfortable speculating. I think it may be helpful to write a sub-series on lessons from previous discussions of existential risk, such as past overestimates of risk from self-replicating nanobots, the history of technology panics, and an argument by Petra Kosonen for an optimistic meta-induction from the failure of past risk predictions to the likely failure of current risk predictions.

6. Academic papers

I am, for the most part, a traditional academic. The core of my work is my academic research, expressed in academic papers and sometimes in scholarly monographs. I think that much of my academic work is of significantly higher quality than my blogging, and I try when possible to make the contents of my academic papers accessible, for example by writing blog series about them.

So far, I have written five academic papers about longtermism, four of which have been published. Here are the papers and their abstracts. I would encourage interested readers to have a look at the papers. I’ve linked to penultimate drafts of published papers so that they will not be behind a paywall (except when the published paper is open access, in which case I’ve linked to it directly).

  1. Against the singularity hypothesis: The singularity hypothesis is a hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on undersupported growth assumptions. I show how leading philosophical defenses of the singularity hypothesis fail to overcome the case for skepticism. I conclude by drawing out philosophical and policy implications of this discussion.

  2. High risk, low reward: A challenge to the astronomical value of existential risk mitigation: Many philosophers defend two claims: the astronomical value thesis that it is astronomically important to mitigate existential risks to humanity, and existential risk pessimism, the claim that humanity faces high levels of existential risk. It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true. Across a range of assumptions, existential risk pessimism significantly reduces the value of existential risk mitigation, so much so that pessimism threatens to falsify the astronomical value thesis. I argue that the best way to reconcile existential risk pessimism with the astronomical value thesis relies on a questionable empirical assumption. I conclude by drawing out philosophical implications of this discussion, including a transformed understanding of the demandingness objection to consequentialism, reduced prospects for ethical longtermism, and a diminished moral importance of existential risk mitigation. (A worked sketch of the central tension appears after this list.)

  3. Mistakes in the moral mathematics of existential risk: Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (A toy illustration of the first mistake appears after this list.)

  4. The scope of longtermism: Longtermism is the thesis that in a large class of decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which this is true? In this paper, I suggest that the scope of longtermism may be narrower than many longtermists suppose. I identify a restricted version of longtermism: swamping axiological strong longtermism (swamping ASL). I identify three scope-limiting factors, probabilistic and decision-theoretic phenomena which, when present, tend to reduce the prospects for swamping ASL. I argue that these scope-limiting factors are often present in human decision problems, then use two case studies from recent discussions of longtermism to show how the scope-limiting factors lead to a restricted, if perhaps nonempty, scope for swamping ASL.

  5. What power-seeking theorems do not show: Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.
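
To give a feel for the central tension in “High risk, low reward,” here is a minimal worked sketch in the spirit of the paper’s simplest model (the notation is mine; the paper develops many richer variants). Suppose humanity gains value v for each century it survives, faces a constant existential risk r per century, and an intervention reduces first-century risk by the fraction f:

```latex
\begin{align*}
\mathbb{E}[V]   &= \sum_{i=1}^{\infty} v\,(1-r)^{i} = \frac{v\,(1-r)}{r}
  && \text{expected value of the future} \\
\mathbb{E}[V_f] &= \sum_{i=1}^{\infty} v\,\bigl(1-(1-f)\,r\bigr)(1-r)^{i-1}
                 = \frac{v\,\bigl(1-(1-f)\,r\bigr)}{r}
  && \text{first-century risk cut to } (1-f)\,r \\
\mathbb{E}[V_f] - \mathbb{E}[V] &= \frac{v\,\bigl[\bigl(1-(1-f)\,r\bigr)-(1-r)\bigr]}{r} = f\,v
  && \text{value of mitigation}
\end{align*}
```

On this toy model the value of mitigation is capped at a single century’s value v however pessimistic we are, and raising r only shrinks the expected value of the future. This is the sense in which pessimism works against, rather than for, the astronomical value thesis.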
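And here is a toy illustration of the first mistake discussed in “Mistakes in the moral mathematics of existential risk,” confusing cumulative risk with risk during a smaller period (the numbers are mine, chosen for arithmetic convenience). With a per-century risk of 0.2 over ten centuries, halving risk in the first century alone moves cumulative risk much less than a naive reading suggests:

```latex
\begin{align*}
\text{cumulative risk over ten centuries} &= 1 - (1-0.2)^{10} \approx 0.893 \\
\text{after halving first-century risk}   &= 1 - (1-0.1)(1-0.2)^{9} \approx 0.879
\end{align*}
```

The intervention buys roughly a 1.3 percentage point reduction in cumulative risk, not the 10 point reduction one would ascribe to it by conflating the first-century cut with a cut to cumulative risk, and that conflation inflates the value of mitigation accordingly.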

Finally, a quick thanks to the Survival and Flourishing Fund for funding my work on “What power-seeking theorems do not show.”

7. Talks and podcasts

Not everyone likes reading long papers or blog posts. I know that my writing is not especially accessible, so over the past several years I have tried to convey the main ideas of my papers through recorded talks and podcasts. Here are some recent talks I have given on my papers:

  1. Against the singularity hypothesis: ANU; AI Safety Reading Group; EAGxVirtual 2024.

  2. High risk, low reward: A challenge to the astronomical value of existential risk mitigation: EAGxCambridge (full); EAGxOxford (short).

  3. Mistakes in the moral mathematics of existential risk: Self-recorded talk

  4. The scope of longtermism: 7th Oxford Workshop on Global Priorities Research

I’ve also done podcasts at The Gradient and Critiques of EA. I’ll be recording an episode of Bio(un)ethical with Leah Pierson and Sophie Gibert in January 2025.

8. Other blog changes

There have been two main changes to the blog this year. First, I’ve decreased my posting frequency from weekly to once every two weeks. I did this because I am committed to providing the highest-quality content that I can, and after starting a new job I was no longer confident that I could continue to provide high-quality content on a weekly basis. I think that I will probably continue posting every two weeks in the future.

Second, I’ve finished migrating the site to its new domain, reflectivealtruism.com. This completes my effort to bring the branding of the blog in line with its mission of driving positive change within and around the effective altruism movement. Thanks to the many of you who contributed to the renaming and transition process.