[Question] What is the new EA question?

The question used to be: “what is the most cost-effective way to help people?”

Tl;dr: There’s a lot of money in longtermism, so people seem not to find that a useful question anymore. What’s the new animating question that keeps us focused and accountable to the mission?

This is a call to Babble and throw out ideas and see what comes up.

The full version:

The question used to be: “what is the most cost-effective way to help people?”

But now there’s all this money, and a stronger orientation to questions with more speculative, less empirical answers, both of which make the original question much less useful.

Now:

  • EA can focus on maximum impact rather than cost-effectiveness (I’ve seen this argument somewhere on the forum but can’t find it, and when I tried to write it out I realized I don’t actually understand it. My understanding is that if project X does 10 units of good for $100 but can’t be scaled up, while Y can do 100 units of good for $2,000, then maybe you want to do Y. But if you’re not thinking about portfolio expansion or diversification, you fund all of X first, and then Y is the most cost-effective thing around anyway?)

  • The argument for hits-based giving gets even stronger: try lots of things, fund lots of projects, be extremely speculative, be willing to try weird things, and just fund interesting, smart people to do whatever, because who knows what will work, we don’t have a better idea, and maybe you need to be in extreme Explore mode right now so as not to think too narrowly or optimize too early.

As a result:

  • I don’t know what to think when I encounter people spending money in a way that seems silly to me; “cost-effectiveness” feels like it’s been taken off the table as a metric, leaving me thinking that there’s no accountability anywhere, like the epistemic forcing function disappeared when the money bottleneck did.

  • What keeps us morally and epistemically honest? Feedback loops are already weak. It’s fine that we’re trying to field-build and skill-build and try things that don’t give obvious successes right away, but then what keeps us from being a very earnest group of people who get nothing done? Or from getting a lot done that doesn’t matter, through frenetic effort and wasted motion?

  • And even worse: what keeps us from sucking up tons of talent and interest because we’re where the interesting ideas and the money are, and everything comes through us, and then we don’t save the world, and the people who could have don’t either, because we made all our mistakes correlate?

These are big, thorny questions, and I have some rudimentary answers to some of them, but what I’m looking for here is, what’s the new question?

What replaces “is that the most cost-effective way to help people?” in a way that reflects the position longtermism and some other parts of EA are in, but keeps us accountable and keeps our eye on the prize?

What should EAs be asking themselves when they reflect on their work, or when they look out at the world and assess? What should come up in conversation? What should people ask themselves when they bounce against EA?

I think I would personally find it helpful, as a north star, to have a new question, to orient and guide and push myself.

My extremely preliminary ideas + some from Nicole Ross (thank you Nicole!):

  • “What is the most cost-effective way to do good?” might be just fine. I’m still mulling over what a longtermist who finds current human and animal suffering extremely emotionally affecting told me about a donation: “I think this money could save four lives and I think you should do this instead.”

    Cost-effectiveness is still real: if we thought we could do better in expectation by throwing all the money at bednets, we should still do that. We aren’t doing that because we think we have a better idea, which is the only reason not to.

    • NB: This is really hard to apply to individual cases.

  • The deeper original question: “How do you do the most good?” It’s not very actionable, but it still might serve.

  • “What’s my story for how this saves the world / massively improves humanity?”, which I like for its built-in call for epistemics and iteration: telling the story out loud, noticing its flaws, improving it, and figuring out if it’s even true.

    • Related: “Does this seem like the kind of thing that’s part of a story where we win?”

  • “In twenty years, will I be happy I had a policy of spending money this way?”—activating our hindsight-in-advance, premortem-type thinking, but still caring about the overall policy and not getting caught up in each minute decision.

  • “How is this spending leading to big x-risk reduction wins in the real world?”

  • “How might this money otherwise be spent? What are some things that might be better? What are some things that might be roughly equal to this opportunity in terms of doing good?”

  • “Will this seem like an obviously bad decision in hindsight?”

  • “Does this pass the red face test? Will you be happy to defend this decision even if the bet doesn’t pay off?”