[Question] Why hasn’t EA reached agreement on patient vs. urgent longtermism yet?

(Note: I only recently dived into the discussion about patient longtermism. Maybe there actually is more agreement than it seems to me, but even if there is, I think there is still a point to be made: people who are new to the discussion should be able to better understand the terminology used and the current views of the EA community.)

It seems to me that there is still significant disagreement about whether patient or urgent longtermism is correct, which I think roughly breaks down into disagreement about the following questions:

  1. Are we living in the most influential time in history?

    1. And is the most influential time a time we can reasonably influence, or is it so far into the future that we don’t have a good chance of influencing it anyway? (What is the most important thing we should focus on now?)

  2. How urgent is AI safety? And how sure are we about that? (Considering arguments like Ben Garfinkel’s, and ideally producing a probability estimate.)

  3. How much (if at all) should EA invest in financial resources in order to give more later? (How much does the effectiveness of spending money decrease over time, and how does that compare to how much the stock market increases? See the toy model sketched after this list.)

    1. How does the effectiveness of investing in financial resources compare to investing in non-financial resources (like knowledge through priorities research, movement growth, building career capital)?
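
To make question 3 a bit more concrete, here is a toy model (my own illustrative framing with made-up numbers, not something taken from the patient-longtermism literature). Suppose invested money grows at an annual rate $r$ and the cost-effectiveness of the best available giving opportunities declines at an annual rate $d$ (because the most effective opportunities get taken, the world gets richer, etc.). Then, very roughly, waiting a year to give is better than giving now when

$$(1 + r)(1 - d) > 1.$$

For example, with $r = 5\%$ and $d = 3\%$ we get $1.05 \cdot 0.97 \approx 1.02 > 1$, so investing looks better; with $d = 7\%$ we get $1.05 \cdot 0.93 \approx 0.98 < 1$, and giving now looks better. Of course, the real disagreement is largely about what $r$ and $d$ actually are (and whether a simple exponential decline is even the right shape), and this toy model ignores non-financial investments entirely, which is what question 3.1 is about.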

Those seem like really important questions to answer in order to figure out how to do the most good, and it surprises me that there is still quite significant disagreement about them.

It seems to me that if we take all the evidence we have into account and weigh it appropriately, we should at least be able to find reasonable probability estimates for the first two questions, estimates that are probably significantly better than the current average of community members’ views.

Answering the first two questions would help us answer the third, though I think that to answer it we also need to become clearer on questions like “How should EA look in the future?” and “At what rate, and to what size, can EA sustainably grow?”, which seem like hard but also extremely important questions to answer.

(And just to make it clear: I think patient and urgent longtermists have pretty much the same goals, just different views of the world. E.g. I don’t think that (many) urgent longtermists discount doing good later relative to doing good now. So I think we should strive to resolve these disagreements and unite patient and urgent longtermism.)


I think finding the truth about what is best to do is obviously extremely important for doing the most good. Therefore, understanding how we (the EA community) sometimes fail to get a better picture of the truth seems extremely important as well.

So if you have an idea of why EA hasn’t found agreement about patient longtermism yet, and how EA could get better at finding the truth, please share it! (Maybe take 5 minutes and think about it now.)

Here are some of my ideas, but I am uncertain about many points, and the ideas for how we could improve are far from actionable:

1: Terminology was, and still is, used quite inconsistently, which makes the discussion less productive.

  • I think some people take “patient longtermism” to be about investing in financial resources, whereas many now rather think it means investing in financial and non-financial resources.

    • I was personally confused by that, and some friends I asked also explicitly associated patient longtermism with financial investments.

  • For the short term, I think it is helpful to point out how you are using the term “patient longtermism” (or anything else with “patient” or “urgent”) to avoid confusion. In the long term, we should strive to establish consistent terminology.

    • (Note that for this blogpost, it doesn’t really matter what exactly you take “patient longtermism” to mean, which is why I haven’t explicitly defined it.)

2: (Epistemic status: uncertain) It seems to me that most blogposts presented rather one-sided views on patient and urgent longtermism. I have not found any blogpost that recursively breaks the question “How should EA use its money?” down into many subquestions and then identifies the cruxes that cause the disagreement between patient and urgent longtermists. I think such an article, outlining the key disagreements and the arguments for and against patient longtermism, would have been (and would still be) really helpful for me. → EAs should find cruxes in disagreements.

3: (Epistemic status: uncertain) Maybe some people don’t truly, intuitively realize that patient and urgent longtermists actually have the same goals, so they don’t have a strong feeling that they might be wrong. I think there should actually be sirens going off in our heads like “OH NO, MANY SMART LONGTERMISTS DISAGREE ABOUT WHAT IS MOST EFFECTIVE, SO WE DON’T REALLY KNOW WHAT WE SHOULD DO! WE SHOULD GO NOW AND FIND OUT WHAT IS RIGHT!”, but sadly the human brain often isn’t very good at detecting possibly wrong opinions.

  • To improve this, EAs could ask themselves more often “Where could I be wrong?”, though I don’t think that alone would have solved the problem.

4: (Epistemic status: very uncertain; I don’t yet have a good idea of what priorities research is doing.) Maybe there is a bad incentive structure in priorities research that causes people to only rarely work on clarifying uncertainties or answering rather fundamental questions that would, for example, help answer the three questions written down above. To me, clarifying why we think our fundamental beliefs are correct seems very important, partly because future research can probably build on top of it quite well. For example, I think Holden did an excellent job in the “most important century” blogpost series, and it would be great to see more like that.
