Illegible impact is still impact

In EA we focus a lot on legible impact. At a tactical level, it's often the thing that separates EA from other altruistic efforts. Unfortunately, I think this focus on impact legibility, when taken to extremes and applied in situations where it doesn't adequately account for value, leads to bad outcomes for EA and for the world as a whole.

Legibility is the idea that only what can easily be explained and measured within a model matters. Anything that doesn’t fit neatly in the model is therefore illegible.

In the case of impact, legible impact is that which can be measured easily in ways that a model predicts are correlated with outcomes. Examples of legible impact measures for altruistic efforts include counterfactual lives saved, QALYs, DALYs, and money donated; examples of legible impact measures for altruistic individuals include the preceding plus things like academic citations and degrees, jobs at EA organizations, and EA Forum karma.

Some impact is semi-legible, like social status among EAs, claims of research progress, and social media engagement. Semi-legible impact either involves fuzzy measurement procedures or low-confidence models of how the measure correlates with real-world outcomes.

Illegible impact is, by comparison, invisible: helping a friend who, without your help, might have been too depressed to get a better job and donate more money to effective charities, or filling a seat at an EA Global talk so that the speaker feels marginally more rewarded for the work they are talking about and is marginally more incentivized to do more of it. Illegible impact is either hard or impossible to measure, or there's no agreed-upon model suggesting the action is correlated with impact. And the examples I gave are not maximally illegible, because they had to be legible enough for me to explain them to you; the really invisible stuff is like dark matter: we can see signs of its existence (good stuff happens in the world) but can't say much about what it is (no model of how the good stuff happened).

The alluring trap is thinking that illegible impact is not impact and that legible impact is the only thing that matters. If that doesn’t resonate, I recommend checking out the links above on legibility to see when and how focusing on the legible to the exclusion of the illegible can lead to failure.

One place we risk failing to adequately appreciate illegible impact is in work on far future concerns and existential risk. This comes with the territory: it’s hard to validate our models of what will happen in the far future, and the feedback cycle is so long that it may be thousands or millions of lifetimes before we get data back that lets us know if an intervention, organization, or person had positive impact, let alone if that impact was effectively generated.

Another place impact risks being illegible is in dealing with non-humans, since there remains great uncertainty in many people's minds about how to value the experiences of animals, plants, and non-living dynamic systems like AI. Yes, people who care about non-humans are often legible to each other, because they share enough assumptions that they can share models and believe measures stated in terms of those models, but outside these groups interventions to help non-humans can seem broadly illegible, up to the point of interpreting the interventions, like those addressing wild animal suffering, as silly or incoherent rather than potentially positively impactful.

Beyond these two examples, there's one place where I think the problems of illegible impact are especially neglected and easily tractable if we bother to acknowledge them. It's one EAs are already familiar with, though likely not framed in this way. And it's one that I perceive as having a lot of potential energy built up in it, ready to be unleashed to solve the problem, if only we realize it's there. That problem is the illegibility of aspects of an individual's impact.

By itself, having some aspects of an individual's impact be illegible is not a problem, especially if they have many legible aspects of impact that provide feedback and indicators of their ability to improve the world. But in cases where most of a person's impact is illegible, it can create a positive feedback loop that destroys the possibility of future positive impact: an EA believes they aren't on track to produce as much positive impact as they could, so they downregulate the effort they put into producing positive impact because that effort appears ineffective, and as a result they have less positive impact than they could. Some key evidence I see for the existence of this self-fulfilling prophecy includes:

  • Nate Soares’s Replacing Guilt series, which appears to have been largely motivated by his interactions with people who feel guilty that they aren’t doing enough and should be doing more, while their guilt simultaneously works against them being more effective;

  • Ozy’s Defeating Scrupulosity series, which deals with shame a bit more generally and the way shame over failing to live up to one’s own ideals makes it harder to live up to those ideals;

  • anecdotally, plenty of EAs living with mental health issues find these issues are exacerbated by feelings of inadequacy, and since mental health issues often reduce productivity, this creates a self-reinforcing “death spiral” away from impact;

  • and the common experience of trying and failing to secure a job in EA (additional context), which can lead to feeling that one isn’t good enough or isn’t doing as much as one should be.

When I’ve talked to EAs in the throes of this positive feedback loop away from impact, or reflected on my own limited experience of it in years past, a common pattern emerges: by the sort of methods we apply in EA for measuring the impact of interventions, these people are not so much actively getting evidence that they are having no impact or negative impact as they are getting little to no evidence of impact at all, and (rationally) taking this absence of evidence as evidence of absence of impact. This is often made worse the harder they try to make progress on work they believe to be tractable and impactful: they work ever harder and get no clear signals that it’s amounting to anything, reinforcing the notion that they aren’t capable of positive impact. As you can imagine, this can be very demotivating, so much so that it can even lead to burnout (also). It’s hard to know how many dedicated EAs have dropped out because they tried, saw no signs they were making headway, and (reasonably) gave up, but I’m confident the number is greater than zero.

Now it’s possible that this situation is unfortunate but correct, viz. the people going through this impact death spiral are correctly moving away from EA efforts because they are not having positive impact, and the system is filtering them out to where they can have greater positive impact by not contributing at all. I suspect this is not the case, given that I can think of people who, by luck or the help of friends, managed to break out of an anti-impact feedback loop and went on to do legibly and positively impactful things. So, given that the impact death spiral is in fact a problem, what might we do about it?

To me the first step is acknowledging that illegible impact is still impact. For example, all of the following activities seem positively impactful to EA: if we didn’t have enough of them going on, the EA movement would be less effective and less impactful, and if we had more of them going on, EA would be more effective and more impactful. Yet all of them produce impact of low legibility, especially for the person performing the action:

  • Reading the EA Forum, LessWrong, the Alignment Forum, EA Facebook, EA Reddit, EA Twitter, EA Tumblr, etc.

  • Voting on, liking, and sharing content on and from those places

  • Helping a depressed/anxious/etc. (EA) friend

  • Going to EA meetups, conferences, etc. to be a member in the audience

  • Talking to others about EA

  • Simply being counted as part of the EA community

You’ll notice some of these produce legible impact, but importantly not much of it is legible to the person producing it. For example, being the 14,637th person counted among the ranks of EA doesn’t feel very impactful, but building the EA movement and bringing in more people who have more impact, some of whom will produce more legible impact, only happens through the small marginal contributions of lots of people.

Another tractable step to addressing the problems caused by illegible individual impact is creating places that support illegible impact. I don’t know whether this is still, or ever really was, part of the mission of the EA Hotel (now CEEALAR), but one of the things I really appreciated about it from my fortnight stay there was that it provided a space for EA-aligned folks to work on things without the pressure to produce legible results. This seems extremely valuable to me for two reasons. First, I believe many types of impact are quantized, such that no impact is legible until a lot of things fall into place and you get a “windfall” of impact all at once. Second, I believe there is a large amount of illegible, “dark” impact being made in EA that goes largely unacknowledged but without which EA would not be as successful.

To some extent I think valuing illegible impact is convergent with efforts to strengthen the EA community, not only in legible ways like starting local groups and bringing in people, but also via the illegible work that weaves strong communities together. Maybe we can figure out ways to make more aspects of building a strong community legible, but I don’t think we should wait for the models and measures before doing the work, because I expect that if we wait we will fail. Thus we are put in the awkward situation of needing to do and acknowledge illegible impact in order to generate more legible impact more effectively.

All of this is complicated by our desire as effective altruists to do the most good. Somewhere down the slippery slope of praising illegible impact lies throwing money at ineffective charities and giving money to rich people to buy nicer positional goods. I think we are smart enough and strong enough as a community to figure out how to avoid that without giving up the many things we risk losing by focusing too much on legible impact. I already see plenty to make me believe we will not Goodhart ourselves into becoming QALY monsters, but I also think we need to better appreciate illegible impact: in a world where we did this enough, I don’t think we would see people suffer the impact death spiral.