Convergence thesis between longtermism and neartermism

Epistemic status: some vague musings.

Introduction

Here is a post I wrote fairly quickly a little while back on the question: would it be surprising if the best things to do as a longtermist and the best things to do as a non-longtermist converged and turned out to be the same?

It takes a many weak arguments approach to the question. I do not make the case that longtermism and neartermism will converge, only that it would not in fact be “surprising and suspicious” if they did converge [1]. Making the stronger case that they do converge would require rather more time and empirical data gathering, and this is mostly a thought exercise so far.

Longtermism is (roughly) defined here as the moral view that future wellbeing matters significantly and that, given the future might be extremely big, it is extremely morally important to make sure it goes well. Neartermism is defined as not-longtermism.

Top 10 weak arguments

SAME PROBLEMS

1. Neglectedness – the world is chronically short-term

We should focus on issues that are high in scale, neglected, and tractable. The world is chronically short-term. Like extremely so. At least anecdotally, I can say that the politicians I have worked with have not put much thought into anything beyond their expected 2-year tenure in office. The UK National Risk Register looks only 2 years ahead. Here is a whole bunch of literature pointing out how short-termist politicians are. For some reason this even applies to politicians and society planning their own futures – as can be seen by the neglect of improving care for elderly people with mental illness (source).

Even just considering people alive today, most of society is not planning for 95% of our future. This is due both to presentism bias and to the nature of our modern democracies. Given this high neglectedness, it should not be surprising that focusing on making sure the future goes well matters a lot even if you ignore the very long term, and so it should not be surprising if longtermist suggestions for how to do good are the same as suggestions from non-longtermists.

2. The same problems are relevant for both – e.g. X-risk estimates are high

Consider Toby Ord’s estimates of existential risk. If these estimates are accurate and roughly constant over the next 100 years, then your chance of dying in any given year from an existential catastrophe is roughly 1 in 600, or 0.167% (there are some reasons to think it is higher or lower, but let’s call that a reasonable estimate). Compare that to, say, your chance of dying from malaria at 0.00518% (source) or in a traffic accident if you live in the UK at 0.0026% (source and source). Your chance of dying in a global catastrophe would be over an order of magnitude higher than your chance of dying from any of the other big killers that can be listed.
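
To make the comparison concrete, here is a rough back-of-the-envelope sketch of the arithmetic, using only the figures quoted above (the per-year figure simply divides the century-level estimate by 100, which is itself an approximation):

```python
# Back-of-the-envelope comparison of annual risks of death, using the figures
# quoted above. These are rough illustrative numbers, not precise data.
existential_risk_per_century = 1 / 6             # Toby Ord's ballpark estimate for the next 100 years
p_xrisk_per_year = existential_risk_per_century / 100  # ~0.00167, i.e. roughly 1 in 600

p_malaria_per_year = 0.0000518     # 0.00518% per year, as quoted above
p_traffic_uk_per_year = 0.000026   # 0.0026% per year (UK), as quoted above

print(f"Existential catastrophe: {p_xrisk_per_year:.4%} per year (~1 in {round(1 / p_xrisk_per_year)})")
print(f"Malaria:                 {p_malaria_per_year:.4%} per year")
print(f"UK traffic accident:     {p_traffic_uk_per_year:.4%} per year")
print(f"x-risk vs malaria:       ~{p_xrisk_per_year / p_malaria_per_year:.0f}x higher")
```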

Many non-longtermists are also interested in topics such as preventing pandemics. We are already seeing convergent problems and should not be surprised to see more such convergence.

SAME SOLUTIONS

3. The power-law distribution of impact could apply to approaches to doing good

Our ability to have impact is heavy-tailed (source), maybe power-law distributed, with some ways of having impact being far more powerful than others. This means some ways of doing good are vastly superior to other ways of doing good. If we think this applies to the tools we use to impact the world, then we should not be surprised if there are some super-powered approaches to doing good, and if the same really high impact tool is great for both the long term and the short term. We might a priori expect that, on examination, some tool – becoming prime minister, promoting growth, fixing science, or something else – turns out to be many times more impactful than other ways of doing good, and that its positive effect on both the short term and the long term is huge.
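
As a toy illustration of what a heavy-tailed distribution of impact would imply, here is a small simulation; the Pareto shape parameter and the number of hypothetical approaches are arbitrary assumptions chosen purely to show how power-law distributions behave, not estimates of anything real:

```python
import numpy as np

# Toy simulation: if the impact of different approaches to doing good were
# Pareto (power-law) distributed, a small fraction of approaches would account
# for most of the total impact. Shape parameter and sample size are arbitrary.
rng = np.random.default_rng(seed=0)
impacts = 1 + rng.pareto(a=1.2, size=10_000)  # impact scores for 10,000 hypothetical approaches

impacts_sorted = np.sort(impacts)[::-1]
top_1_percent_share = impacts_sorted[:100].sum() / impacts_sorted.sum()
print(f"Share of total impact from the top 1% of approaches: {top_1_percent_share:.0%}")
```

Under assumptions like these the very best tools dwarf typical ones, which is what makes a single super-powered approach – good for both the short term and the long term – plausible a priori.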

4. We already see the same super-powered tools suggested for both, like growth or EA community building

Tyler Cowen on the 80K podcast suggests that our top moral priority should be preserving and improving humanity’s long-term future and that the way to do that is to maximise the rate of sustainable economic growth (source). Meanwhile Hauke Hillebrandt on the EA forum argues that the best way to improve global development is to focus on economic growth (source).

EA meta work (such as EA community building or practical exploratory cause prioritisation research) could be the best thing for both the near term and the long term. So could moral circle expansion or promoting animal welfare (before humans spread wild animal suffering across the galaxy).

We are already seeing convergent solutions and should not be surprised to see more such convergence.

5. Last dollar spending

The EA community has billions of dollars, some of which is committed to be spent in the next few decades. When considering impact we should think not just about the best place for the marginal additional dollar to be spent on existing projects, but about where the very last dollar will go. Longtermist EAs are already suggesting they are short of projects to fund, that technical AI safety work is well funded, and that perhaps they should focus on carbon offsets or on generally making the world go well in the short term. We should not be surprised if the last dollar of longtermist spending goes to something that is broadly good in the short term; in fact this is already happening.

EMPIRICISM & LONG-TERM PLANNING

6. We need short feedback loops, so the things that are most measurable may be best for both

Humans are very bad at doing good. Like horrendous. Arguably most social programs don’t work (source, source) and most people trying to do good are both failing to have an impact and convinced that they are having one. International development has taken a century of trying to have any impact – trying and failing to drive economic growth, then developing the tools needed to focus on things that can be changed, like health – and decades and decades more to reach the point where we know how to do that well.

In short, the history of people trying to do good shows that if you do not set up short feedback loops to demonstrate you are having an impact, it is probably best to assume you are not having much of an impact.

Given how challenging it is to set up good feedback loops and prove impact, we should not be too surprised if the most impactful set of actions falls within a very small range of actions whose impact is easy to demonstrate.

For example, at one extreme, maybe the only things we can be sure are positively affecting the world are actions that can be measured with RCTs or similar studies. At a less extreme end, perhaps we should be building up EA’s capacity to do good by working out how to influence policy towards positive social outcomes, and by tracking and judging the impact of that across a range of topics, perhaps starting with things that are more measurable and easier to track (this seems to be a bit of what OpenPhil is trying to do). Perhaps if we do this we will find there are limits to what can be demonstrated to be impactful (pushing us towards a more short-term focus), or perhaps we will learn to demonstrate impact even in speculative areas (pushing us towards a more long-term focus).

7. Long term plans naturally converge

Looking forward beyond a certain point in time has no appreciable impact on the shape of long term plans. Try it.

Try making the best plan you can accounting for all the souls in the next trillion years, but no longer. Done that? Great! Now make the best plan taking into account only the next hundred billion years, ignoring 90% of the previously considered future. Done? Does it look any different? Likely it is exactly the same plan. Now try a billion years, then a million years. How different does that look? What about the best plan for 100,000 years? What about 1,000 years, or 100 years? At what point does it look different?

Plans that consider orders of magnitude more of the future clearly converge with one another. It is not clear how much of the future you can ignore before plans start to look different, but convergence between a 100-year plan and a trillion-year plan should not be surprising.

8. Long term planning involves setting 10-25 year goals, and this is significantly shorter than the lives of existing people today.

In practice, long-term planning rarely goes into detail beyond a roughly 10-25 year time window. This is not to say that long-term planners do not care beyond that window, just that the best way to improve the world (or whatever the plan is about) beyond 25 years is to have a really clear vision of what the world in 25 years’ time should look like, such that the actors at that 25-year point are left in a very strong position to prepare for the next 25 years, and so on. Trying to shape the long term by making longer-term detailed plans just appears not to work very well (see here). As such, making the world in 10 or 25 years’ time as strong as possible to deal with the challenges beyond that point is likely the best way to plan for the long term.

Now, it is notable that 10-25 years is significantly shorter than even the lifespan of existing humans (70-80 years). If a neartermist only cared about existing humans, they might note that perhaps 66% of the value lies beyond the point that it is easy to plan directly for. As such, they might also put significant weight on long-term plans that enable future actors to make good decisions. It would not be shocking if our imaginary neartermist’s attempt to capture that 66% of human value looked very similar to a longtermist’s attempt to capture the 99.9+% of human value they feel lies outside the length of time humans can easily plan for directly.
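
As a minimal sketch of where a figure like 66% could come from, assuming a 75-year lifespan (a midpoint of the 70-80 year range above) and a 25-year planning horizon:

```python
# Toy calculation: share of an existing person's life that falls beyond a
# 25-year planning horizon, assuming a 75-year lifespan (an assumed midpoint).
lifespan_years = 75
planning_horizon_years = 25
share_beyond_plan = (lifespan_years - planning_horizon_years) / lifespan_years
print(f"Share of a 75-year life beyond a 25-year plan: {share_beyond_plan:.0%}")  # ~67%
```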

This similarity might be even stronger for individuals working in areas where planning timescales are short, like policy-making.

9. Experts and common sense suggest that it is plausible that the best thing you can do for the long term is to make the short term go well

It is not unusual to hear people say that the best thing you can do for the long term is to make the short term good. This seems a reasonable common sense view.

Even people who are trusted and considered experts within the EA community express this view. For example, Peter Singer suggests that “If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway” (source).

DOING BOTH IS BEST

10. Why not both?

There is also a case for taking a mixed strategy and doing both. Arguments for this include:

  • Worldview diversification – OpenPhil makes a strong case for worldview diversification: “putting significant resources behind each worldview that we find highly plausible” (source). This prevents diminishing marginal returns and has a range of other benefits, such as:

  • Managing uncertainty – If we are highly uncertain about what has impact, we should take a robust strategy and do at least some things in all domains.

  • Maximise learning – Exploration of how to do good has high value, so we should be doing a bit of everything right now rather than focusing on one thing. In the future we might have a better view of how to do good than we do now.

  • Epistemic modesty – There is a difference of views within the community. Given epistemic modesty (source) we should be distributing resources in line with the views of our informed peers, which means across cause areas.

  • Moral trades – There is a difference of views within the community. We should be willing to make moral trades if we are going to maximise our comparative advantages, and this means doing some of both where it is advantageous for us to do so (source).

  • Appeal to experts – Trusted, thoughtful actors, in particular OpenPhil, support doing both.

My views

Despite making the case that convergence is plausible, this does still feel a bit contrived. I am sure that if you put effort into it you could use a many weak arguments approach to show that neartermist and longtermist approaches to doing good will diverge.

Also, I think we should not expect the same answer for all people. Maybe a programmer considering the EA question gets divergence (e.g. between doing AI safety research and earning to give, depending on whether they are longtermist or neartermist), but maybe a policymaker gets convergence (e.g. in both cases they should improve the institutions they have the power to influence).

In short, I don’t think we yet know how to do the most good, and there is a case for much more exploratory research (source, and see my view on this here). Practical exploratory research seems hugely neglected right now, and given the amount of money EA causes might have access to, this is perhaps even more of a priority.

Possible implications

I would like to see near-term EAs like GiveWell looking more at the long-term implications of the interventions they recommend, and at more speculative but potentially higher-return interventions such as policy change or preventing disasters (I believe they have mostly put off doing this again). I do think there are better ways of doing good than bednets from a near-term point of view, if we actually start looking for them.

I would like to see long-term EAs do more work to set mid-term goals and make practical plans for things they can do to generally build a resilient world (see my views on how to do this here). I do think we need to find ways of doing good beyond AI safety from a long-term point of view, and we can do that if we actually start looking for them.

I would like to see more focus on the kinds of interventions that look great across all domains: meta-science, improving institutional decision making, economic growth, (safe) research and development of biotech, moral circle expansion, etc. Honestly I would not at all be surprised if any of these areas becomes a key focus for EA in a few years.

I would like to see less of a community split between neartermists and longtermists. It may be that for some folk this is not the deciding factor in how they can best go and do good in the world. For example, it seems odd to me that the OpenPhil neartermist team has a different skill set from the longtermist team (economists and philosophers respectively), and I am sure they could, and do, learn from each other.

I genuinely love how this community brings together people working across a diverse range of cause areas to collaborate, share resources, and focus on doing the most good. I worry about organisations splitting into longtermist and neartermist camps, and I think there is so much we can do together.

I am excited to see where we can go from here.

Thank you to Charlotte Siegman, Adam Bales and David Thorstad for input and feedback.

FOOTNOTE

[1] The “surprising and suspicious convergence” terminology is from Beware surprising and suspicious convergence – Gregory Lewis.

There is some implication that convergence between longtermism and neartermism would be surprising and suspicious on p5 of The Case for Strong Longtermism—GPI Working Paper June 2021. If convergence is true (in some cases, for some people) then the case for Strong Longtermism would be trivially true, and as such would perhaps not be a particularly useful decision heuristic in the world today (for those cases or those people).