Benefits of EA engaging with mainstream (addressed) cause areas

With this post I want to ask the EA community a fairly basic question that I’ve been scratching my head over.

Is Effective Altruism undervaluing the net impact of repairing traditional problem areas (e.g. global development) relative to focusing on new or unaddressed ones?

I think that this forum in general could use more imagery/graphics, and so I’ll attempt to make my point with some graphs.

Consider first this graph, with ‘Amount of Capital Distributed’ on the Y-axis and ‘Efficiency of Impact’ on the X-axis:

This is how I imagine some might view the social sector, which is to say every organization or cause addressing every impact area, placed on a spectrum. At the start of the curve, down and to the left, a smaller amount of capital circulates through approaches that aren’t very effective. In the middle of the curve sits the bulk of approaches, with moderate impact and the largest amount of capital in play. Finally, to the right, we start to see approaches that fall under the banner of Effective Altruism: they wield less capital than traditional sources of impact, but are quite impactful with what they have.

The logic behind the slope of this curve is that there is a certain Overton window of altruism. Approaches that are too regressive start to leave the window and receive less capital. Approaches at the peak of society’s attention receive the most support. Those at the bleeding edge (EA) are only perceptible to a small subset of the population and receive smaller levels of support.
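To make the shape concrete, here is a minimal Python/matplotlib sketch of the kind of curve I have in mind. The bell shape and every number in it are illustrative assumptions of mine, not real data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Notional 'Overton window of altruism': mid-impact approaches hold the most
# capital, while the bleeding-edge (EA) tail holds relatively little.
# Every number here is an illustrative assumption, not real data.
impact = np.linspace(0, 1, 200)   # 0 = least efficient, 1 = bleeding edge
capital = np.exp(-((impact - 0.45) ** 2) / (2 * 0.18 ** 2))  # arbitrary hump

plt.plot(impact, capital)
plt.xlabel("Efficiency of Impact")
plt.ylabel("Amount of Capital Distributed (arbitrary units)")
plt.title("Notional distribution of capital across the social sector")
plt.annotate("EA", xy=(0.9, capital[180]))  # label the right-hand tail
plt.show()
```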

Once this basic curve is established we can look at what we actually know about the impact landscape and start to refine the graph.

This next graph ditches the curve and instead uses a bar chart. The same basic comparison of capital vs. impact still applies; the main difference is that approaches no longer sit on a spectrum but are instead discrete.

This might seem like a minor distinction, but it reveals an important point about how causes are funded. If anything, Effective Altruism shows us that any action can have varying degrees of impact, in many different ways and across different categories. These relationships are incredibly messy. At the same time, capital, especially philanthropic capital, is rarely distributed in proportion to impact and agnostic of problem areas. In fact, the opposite is probably true. First, donors commonly pick a problem area and a set of organizations that they are personally swayed by; then they make isolated donations within this category in the hope of achieving impact. Even foundations such as the Rockefeller Foundation, devoted to broad goals like “promoting the well-being of humanity throughout the world,” have focus areas and pet issues that they prefer to fund over others.

So ultimately a better way to think about the relationship between impact and capital is probably not as a nice smooth curve, but as specific chunks of capital tied to cause or problem areas (even if in reality it doesn’t work quite this neatly).

Furthermore, the key benefit of viewing altruistic capital markets as chunks rather than as a continuous impact curve is that you begin to see the orders of magnitude that separate different categories:

Here we see several categories of impact, charted by their annual expenditure levels and a loose ranking of their QALY/$ levels. Even though we can’t make accurate estimates of QALY/$, the difference in magnitude between these categories’ expenditures is hopefully clear. Even taking the most generous estimate of the annual expenditures of explicitly-EA causes (~$500M), this is a drop in the bucket compared to the >$100 billion that the UN System and the large NGO BRAC alone spend each year.
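As a quick back-of-the-envelope check on that gap (using only the rough figures cited above, with variable names of my own), the ratio works out to at least a couple of hundred to one:

```python
# Back-of-the-envelope comparison using only the rough figures cited above.
ea_annual_spend = 0.5e9       # ~$500M: generous estimate for explicitly-EA causes
un_plus_brac_spend = 100e9    # >$100B: UN System plus BRAC alone

ratio = un_plus_brac_spend / ea_annual_spend
print(f"Traditional aid capital is at least {ratio:.0f}x the explicitly-EA total")
# -> Traditional aid capital is at least 200x the explicitly-EA total
```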

This brings me to my central point and question for the EA community. Is there an argument for focusing more effort on retooling these large sources of capital toward positions that would be more EA-aligned?

I would imagine some objections to this argument might be:

- The whole idea of x-risks is that pouring even just a little attention and money into them can help mitigate catastrophic risks that would otherwise happen under business as usual. This is true even if there are more superficially pressing problems to deal with in the world like poverty.

- Focusing effort on already-addressed problem areas doesn’t automatically yield clear impact, and could actually prove futile.

- EA-aligned organizations like Evidence Action and the various U.S. Policy Reform projects are in fact already addressing ‘traditional’ impact areas, just the ones with the highest upside potential.

I think all of these points are valid, but I want to raise some counterpoints that I think still make the broad argument worth exploring.

1. Even addressed problems can be addressed inefficiently

A common line of thinking when evaluating EA-friendly causes is to determine which causes receive the least attention. Beyond the potential biases that come from going about the world looking for problems, I worry about this approach’s emphasis on novelty.

It seems like there’s not enough emphasis on the quality of funding and attention being placed on an issue, compared to the quantity of funding and attention.

For climate change, I think the EA justification for not spending time and resources on the problem makes sense. Even if the problem carries catastrophic consequences, there is quite a lot of fairly high-quality research and development being done here, from both for-profit and non-profit angles.

For global development broadly, and for sub-categories like global health, most of EA’s engagement seems to center on a set of interventions with exceptional QALY/$ ratios, like affordable early-life healthcare. Beyond this, though, I get the impression that other sub-categories of aid are written off as not worth attention because they are already being addressed. This is understandable, since the chart above shows a large amount of capital already going towards humanitarian causes.

But despite the hundreds of billions of dollars that flow through aid each year, it’s unclear how impactful this aid is. Obviously an argument can be made for the short-term effectiveness of providing services during truly acute humanitarian crises. But long-term perspectives like those in Dead Aid argue that aid is fundamentally harmful. More moderate positions hold, at a minimum, that there need to be better linkages between interventions and their long-term impact.

EAs have shown a slight interest, via orgs like Evidence Action, in improving the effectiveness of traditional aid approaches, but I think this is a problem worthy of at least as much attention as reforming political institutions. If it is in fact the case that there are glaring inefficiencies in this sector, and that trillions of dollars are locked up pursuing this inefficient work, fixing these problems could have massive upside. First and foremost, though, it seems imperative to at least get a better sense of how effective these grandfathered chunks of capital are.

2. There are numerous advantages to better integrating the EA community with the rest of the social sector

Another upside of working to improve causes that might otherwise be viewed as being already addressed is that it forces greater interaction between the EA community and the rest of the social sector.

Before learning about Effective Altruism I was working for a social enterprise that partnered with household-name foundations on a variety of causes. Even at its relatively early stage of growth several years ago, I was surprised to see that such a robust community was forming around doing the most good possible. But what was most surprising about the EA community wasn’t just how active it was; it was how separate it was from the world I was working in, despite having essentially the same goals.

Moreover, I was increasingly seeing a movement in Foundation World towards better frameworks for understanding and reporting on net impact. While EA takes this idea to an extreme, I didn’t understand why this community needed to be so removed from the conversations (and access to capital) that were simultaneously happening in other parts of the social sector.

Besides avoiding duplication of effort, I think there are valuable lessons that the EA community and the other impact-chasers could learn from one another. For example, EAs are uniquely good at understanding the role of technology in the future, which is a notorious weakness of traditional social sector folks. On the other hand, I think social sector folks could teach EAs a thing or two about how programs work ‘in the field’ and what philanthropy looks like outside the ivory tower that EAs can sometimes sit in.

Finally, I recently read a post somewhere on this forum arguing that EA is a set of beliefs and approaches and shouldn’t aspire to be a group or movement (I can’t find the post). I agree with this sentiment, but at this point Effective Altruism as a movement is a runaway train.

Part of embracing this reality means better understanding the role of optics, and how public perception affects EA’s overarching goals. Maybe at the moment the EA philosophy isn’t quite ‘mainstream,’ and maybe that monolithic status is a naive goal to reach for. But speaking practically, the more people who operate under the banner of EA, the more good can be done in the world. This process entails both attracting new members to what EA stands for today and being more integrative with communities that wouldn’t traditionally align themselves with EA. Wanting to do the most good possible is truly an agnostic trait. EA as a movement should be equally agnostic not just about what causes it considers, but about what tribes it aligns itself with.

3. A vast amount of philanthropic capital in the world is and will always be distributed ‘irrationally’; EA has much to gain by embracing and working around this.

As discussed in #1 and #2, there is no shortage of problem areas being approached imperfectly, at least relative to the benchmarks of Effective Altruism. A large part of this, no doubt, is that global impact is not usually a product of the pure (rational, selfless) definition of altruism. Among other things, people donate to causes they personally feel attached to. There is a deep psychological (likely evolutionary) mechanism that underpins this, one that probably won’t be changing any time soon.

In the eyes of EAs, these imperfect causes don’t always seem to have tangible connections to impact, and as a result this community doesn’t engage with them. This disengagement makes sense for some ‘warm glow’ forms of altruism that have structural barriers in place preventing them from ever becoming more efficient. But for other forms of impact, just because they are inefficient now doesn’t mean they can’t improve.

Engaging with these causes further (once again, a good example being global development) is a way not only to create impact, but to embrace the irrationality of giving and effectively expand effective altruism into larger capital markets.

Conclusion

Even if they come across as not traditionally aligned with EA values, there are plenty of problem areas, most notably global development, that could benefit from an increase in analytical rigor.

Conversely, EA could benefit from tapping into these larger capital pools and potentially converting them into higher-impact brackets:

Currently, Open Phil lists no focus areas in global health and development; it only recommends that individuals make high-impact donations to charities like those pursuing deworming and anti-malaria interventions.

I think there is potential for a problem area to be loosely built around the meta-effectiveness of the development sector.

This isn’t a novel concept, and there is already a nascent movement in this sector towards leaner operating strategies.

Engaging with this space could not only reveal further high-impact problems to work on, but would also come with numerous strategic side benefits, such as helping to reframe the narratives that EAs aren’t interested in systemic change and that they exist in an elitist bubble.


Edit 1: Changed title of post from “Doing Repairs vs. Buying New” to “Benefits of EA engaging with mainstream (addressed) cause areas”

Note: This is my first post in the EA forum. I attempted to the best of my ability to research the points that were made here to make sure I wasn’t saying anything too redundant. Apologies in advance if this is the case.

I’m interested in talking with people here more about formalizing this issue. I’m also looking for some volunteer projects. I have a background in design, marketing, and strategy, and experience in the tech and philanthropy/foundation spaces. Please reach out if we can work with each other!

@bryanlehrer www.bryanlehrer.com