[Question] Where are the long-termist theories of victory / impact?

Full question:

Where are the long-termist theories of victory / impact? If there are some lurking behind private Google Docs but not shared more widely, what’s driving this?

Why I’m asking this:

  • I think it’s one thing to say “the long-run matters”, quite another to say “the long-run matters, and we can probably affect it”, and another again to say “the long-run matters, we have worked up ways to impact it and, more importantly, to know if we’re failing so we can divest / divert energy elsewhere”.

  • I feel the above sums up a lot of the tension between long-termists and those more sceptical; they’re driven by different assumptions about how easily the long-term can be positively affected and how to affect it, and by different viewpoints on moral uncertainty in both the near- and long-term. (And other things too; these are just top-of-the-head examples.) For example:

    • a long-termist might be more sceptical about the durability of many near-term interventions, thereby opting for much longer-term planning and action;

    • whereas a near-term advocate will believe that near-term interventions and strategies both cash out and can last long enough to affect the long-term.

  • I’m a fence-sitter on the near- vs. long-termism question, and I think this is epistemically the most sensible place for me to be. Why? I’m frustrated by the number of near-term interventions, particularly in global health and development, which prove less durable in the long run, and therefore think there’s value in taking a longer time horizon. But I also think there are many epistemic and accountability risks endemic to long-termism, such as how easy it is to pitch for results you’ll (probably) never be around to see or be accountable for; I notice this thinking flaw in myself when I think about long-term interventions.

  • I think it’s even more morally incumbent on advocates of long-termism to put forward more concrete theories of impact / victory, for a few reasons:

    • Near-term advocates’ work will be tested / scrutinised by others simply by coming to fruition within their lifetimes / careers; therefore there is a feedback loop, a holding to account, and, where necessary, epistemic updating and disinvestment in things that don’t work.

    • With long-termism, there’s no small risk of leaving value on the table: not just value right now, but value that could endure. In fact, I know this makes many long-termists question whether they’ve chosen the right path, and this is good!

    • I think if you’re advocating for long-termism (or any cause area), you kind of owe it to people who are asked to change their careers / donations to give them weightier reasons, and mechanisms for assessing whether / when they should change their minds.

      • I agree that criticisms that we have a culture of deference within EA are fair, particularly when contrasted with the rationalists, where there’s more emphasis on developing intellectual skills so that you can understand and question what you’re hearing from others in the tribe. And let’s be honest: there are benefits to splitting responsibilities between those who set direction and those who row there. But I do think the rowers deserve a bit more assurance they ain’t headed for the doldrums of negative utility.

        • I notice when the deference pull happens to me. I was accepting theoretical arguments for long-termism based on some notable (but self-selected) examples, such as how cultural values around meat consumption have shaped animal suffering for thousands of years. But I was still not listening to the part of my brain saying “but how easy is it to apply these lessons in reality to make the purported impact?”

      • I can’t be the only one feeling this tension, wanting to scrutinise things in more detail but not having the time outside of work and personal obligations to do so. It feels like the community is drowning in chat of “this matters so much and we can do things about it, so you should do something about it”, but there’s a lot less of “here’s how testable / scrutinisable these interventions are, so you can make informed decisions”. Maybe this will change when Will’s book is pumping on the airwaves a bit less, idk...

  • What could these ToI / ToV look like?

    • As someone who’s done lots of ToI / ToV work before, I’d say it seems sensible to start with slim, narrow causes and build out from there, ideally selecting causes with some structural similarities to others and some existing real-world evidence, such as X-risk pandemic preparedness. But I’d likely choose an even smaller sliver within that and work on it in detail; something like 1-3 specific biosecurity interventions which could be implemented now or in the near term.

    • I think these ToI / ToV could be for narrow and broad long-termism, and for individual long-termish cause areas within those, such as improving long-term institutional design and decision-making.

    • Arguably, broad long-termism should have even more considered ToV / ToI, given how fuzzy an idea it is and how liable our world is to unintended consequences / counterproductive backlash when it comes to things like inculcating cultural values.

  • Would I be willing to do some of the thinking on this?

    • Eh, maybe… I ain’t a full-time advocate, so I’m less likely to be the person putting forward long-termist ToI / ToV, but I could be a red-teamer.

What we already have:

We’ve seen some theories of impact / victory written up in varying levels of detail:

  • on farmed animal welfare here

  • improving institutional decision-making here

  • AI governance here, and here, arguably with more of a near-term perspective in terms of when the proof of impact is expected to arrive

  • EA community building / meta-EA here

  • general ToV building across many cause areas, or inviting others to do the same here

  • and advice about worldview investigations, reading a little like a ‘how-to’ guide, over here

  • “What We Owe the Future” (or at least the pre-published versions I read of it) is notably not about putting forward ToVs, but rather about arguing that the long run can be affected, and putting forward certain plausible mechanisms at the macro level (e.g. influencing future values) and at the meso level (e.g. citizens’ assemblies and scaling up participatory democracy, as done in Taiwan).