Some benefits and risks of failure transparency
Key takeaways
The goal of this post is to discuss some heuristics on when it makes sense to discuss our own and others' failures and mistakes in the EA community. I also recap the state of failure discourse in EA, including the recent interest and activity around red-teaming.
Benefits of failure transparency
Building a more accurate map of the world, so that resources are better allocated, we can identify and highlight trends in mistakes, and we can help the broader world outside of EA improve.
Creating a strong and trustworthy signal of our values
Creating a stronger community
Risks and Costs
Opportunity costs
Reputational costs
If we engage in discourse less responsibly, there is a risk of harming existing discourse norms, which could result in less failure discourse in the long run.
Transparency being used against you or unfairly advantaging others
When to engage in discourse
When considering our own mistakes, keep in mind the relevance of the mistake, the value of information you could gain from reflecting further on it or getting external opinions, whether others would find it valuable, and whether you're ready to confront your mistakes. If you're higher status in the community, it may make sense to have a lower bar for sharing mistakes. If you're in a competitive field outside of EA, it may be worth having a higher bar.
When considering others' mistakes, consider what you are hoping to achieve with the feedback, and how likely the other actor is to be open to it. Also consider whether the appropriate forum of engagement is private or public. It may also be worth factoring in the status of the actor you're criticising. If you choose to communicate publicly, be careful when expressing your level of confidence and when extrapolating.
A few links that might be more concretely helpful when you're actually trying to talk about failure
Linguistic note: I refer to organisations and individuals collectively as “actors”. I also use the terms “failure” and “mistakes” somewhat interchangeably throughout this post. But I try to stick to “failure” for the most part.
Thanks to Arjun Khandelwal for extensive copyediting and review, and Adam Gleave for brainstorming on whiteboards. Thanks to Nathan Young, Abi Olvera, Ben Millwood, Arjun & Adam for many (many) helpful suggestions and comments. I’ve tried to cite in the footnotes where possible but I’m sure I’ve forgotten a bunch!
Introduction
The goal of this post is to discuss some heuristics on when it makes sense to discuss our own and others' failures and mistakes in the EA community. I hope it provides a balanced and fair view of the considerations.
Failures are pretty complex and can be caused by a lot of different things. Some are caused by mistakes that were knowable in advance, others by things that could not have been predicted or were out of someone's control. Some failures are related to not living up to one's values and causing harm to other people, while other mistakes negatively impact one's own productivity or growth without causing direct harm to anyone else.[1] Discussing all of the above seems valuable, depending on who is reading. People external to EA might particularly care about value-related failures, while those working at EA organisations might benefit from knowing how other organisations have made mistakes that harmed their productivity.
Failure discourse in EA currently
This section might be most useful to those new to the community or interested in a summary of failure discourse in the EA community.
The EA community is generally keen to learn from mistakes and tends not to punish people unduly for making them, which can be rare.[2] A highlight is the presence of several public “Mistakes” pages where organisations discuss substantive ways they've made mistakes. This is partly due to the presence and influence of GiveWell, which has been a champion of transparency in general, including failure transparency, in the nonprofit world and has maintained a prominent and substantive Mistakes page since its inception. Some other prominent EA orgs like CEA, 80,000 Hours, Giving What We Can, and Animal Charity Evaluators also maintain substantive Mistakes pages,[3] which is likely very unusual in the broader nonprofit world.
These pages are a costly signal of an intention to learn from mistakes and, to some extent, be publicly accountable for them. While some of the mistakes have long been common knowledge among parts of the EA community, putting them out in the open for anyone to see allows outsiders to form a much more accurate understanding of EA organisations without having to undergo a potentially costly process of becoming socially connected within the community. Having public documentation instead of relying on social connections and word of mouth is more robust for community insiders as well.
Discussing mistakes at public events (e.g. EA conferences) is much less common. The only instance I’m aware of is the Celebrating Failed Projects Panel at EAG SF 2017.
In the past 6 months or so, there has been a lot of interest in having more critiques, and specifically red-teaming. Cremer and Kemp's Democratising Risk post sparked some conversation about how open EA is to critiques, and many EA funders and leaders commented in support of more criticism. A notable recent post from Jan Kulveit and Gavin Leech discusses the EA and rationalist communities' failures in response to COVID-19.[4] Training for Good is running a red-teaming workshop, and there are multiple contests which support critiques: the Effective Ideas blog contest, and, just this week, a pre-announced contest for critiques and red-teaming.[5]
It seems like now might be a good time to reflect on some of the benefits and risks associated with failure discourse.
Benefits and risks of failure discourse
What are the potential benefits?
Building a more accurate map of the world
With more information we can help the community build a more accurate map of the world together and update our decision-making accordingly. EA is young and new, so it stands to reason that our map of the world has a lot of room for improvement.
Building a more accurate map may result in resources being better allocated, i.e. directed away from actors who are consistently underperforming or making mistakes. This seems generally net good for the world, even though it can often be bad for individual actors. In some cases, it may even be a useful signal to those actors to find other ways to contribute, resulting in a better allocation of talent. [6]
A more accurate understanding of the world would enable us to identify and highlight trends in mistakes. Especially if done well/robustly, this should help us improve as a community. [7]
Sharing our mistakes can also help the broader do-gooding world outside of EA improve (not just from a community building perspective). [8]
A strong and trustworthy signal of our values
When we talk about our mistakes publicly (which is not always advisable, see below), we are creating a very strong signal of our values. This benefits the community internally by encouraging and codifying our norms of transparency. Hopefully, this could create an environment that is more welcoming of criticism (both internally and externally) and makes us less susceptible to things like motivated reasoning. It also sends a strong signal to the outside world about our credibility. The higher the status of the person who talks about their own mistakes, the stronger the signal. Finally, it could nudge the outside world towards normalising this kind of transparency and discourse more.
A stronger community
Demonstrating vulnerability and honesty can help communities grow stronger and closer together. Here are two examples from within the EA community:
Howie's 80,000 Hours podcast on mental health goes into a lot of detail on mistakes that he made during his time at Open Philanthropy and his experiences with mental illness. This episode has become 80K's "most popular episode ever (both in terms of feedback and listening time)."
The EA student group PISE has had some success with being open and vulnerable in their community.
What are the potential risks and/or downsides?
Opportunity Cost
If an actor chooses to engage (by sharing their own or others’ mistakes, or engaging in the discourse that follows), they are effectively taking time away from “direct” work they could be doing. [6]
It can also cost others resources to consume the information in a way that’s useful to them, though this is more a reduction in benefit than a real cost since people can just not read it. [9]
Transparency in general can be very costly. Open Philanthropy changed their stance on transparency in 2016 after reflecting that it wasn’t directly aiding their goals (unlike GiveWell, whose goal is to make recommendations to the public). Charity Entrepreneurship decided to continue to publish their work in 2020 and accept the increased cost of spending more time polishing reports, but said they would evaluate in 2021. As of writing, their 2021 annual report has not yet been published.
Reputational Costs
Sometimes, writing about your mistakes could result in an actual loss of status or reputation. How much it affects your reputation is likely a function of how long ago the mistake was, how understandable it was, how much you've changed since then, whether other people already know about it, and other factors.
If you demonstrate that you've learnt from your mistake, and have taken steps to improve, then sharing your mistakes could have a neutral or even positive effect. There are also instances where not sharing can be harmful to your reputation. If the mistake you are not sharing is something that might come up anyway (say, if a potential employer speaks to one of your references), then not owning your mistake may be perceived as a lack of self-awareness or reflection, or at worst, as an attempt to hide it.
Harming discourse
The way we react when actors share their own mistakes, or when we raise other actors' mistakes, can harm discourse norms going forward. Jeff Kaufman writes, “If people react critically and harshly to … failure, it makes organizations much less likely to be willing to be so transparent in the future. And not just this organization [or actor]: others will also see that sharing negative information doesn’t go well.” This is less true “if the norm is very strong, then the pressure is not going to keep people from sharing similar things in the future, and it also means that seeing a failure from this organization but not from others is informative. On the other hand, if the norm is weaker we need to be careful to nourish it, not pushing it harder than it can stand.” It seems that EA does have somewhat strong norms around sharing failure, but it's not clear to me how strong.
Another way that discourse norms can be harmed is that having very high standards for criticism, extending to things like tone, can make it less likely that criticisms are raised at all. It's possible that some people would rather not make the criticism than invest resources into trying to say it perfectly.[10] Some people I discussed Democratising Risk with thought that it was tonally inappropriate: that the authors should have expressed their negative experience separately from announcing their paper and/or expressed it more objectively. I can understand where this is coming from, but personally I'd rather live in a world where something is said imperfectly than a world where it isn't said at all.
Career risks
If you’re in a competitive or reputation-heavy field such as politics, transparency could be used against you and affect your future prospects.
When there is an imbalance in transparency, organisations that are more transparent could be evaluated unfairly. If some actors in a space are much more transparent than others, and they seem to be doing less well than other, less transparent orgs, they would be evaluated against different standards. This could set up unhelpful incentive structures, which could encourage less transparency. [9]
When might we engage in discourse?
Although I personally think failure discourse is usually quite valuable, and that the community discusses failures less than it could, it may not always be worth it for individual actors to engage in it. The following are some decision-making heuristics that might help you make this decision. The examples are somewhat focused on whether you'd want to share your thoughts publicly, but I think they could also be helpful for deciding whether to engage in more private reflection too (e.g. sharing your thoughts with a small group of people).
Our own failures
Are you in a position where you can accept your mistakes?
Opening oneself up to criticism and reflecting on mistakes can be (really) difficult. It’s okay to not be ready to do that immediately. Sometimes a little bit of distance can help with this.
How is it relevant to your ability to have impact or your work? How important is the mistake?
A small accounting error which underestimates your organisation’s spending by 1% is probably not very important, but a mistake which understates it by 10% could be.
How much information do you stand to gain from investing resources into reflecting on this?
Sharing our mistakes can help us to reflect in a more rigorous and focused manner. It’s possible that making something publishable (even if you don’t publish it) could encourage this more so than doing it independently. [9]
Do you think others could help you learn? Are you worried about motivated reasoning?
It could be that others have useful insights on your mistakes and you can learn from sharing them. Maybe putting too much faith in internal or self-criticism leads to neglecting external criticism.
How many other people could find your insights valuable to learn about?
You might have observed that others are thinking in similar ways to you, or working in the same field. If your failure was particularly surprising or unusual to you, it could be valuable to spend time reflecting—perhaps you could identify a community blindspot. If you’re a first actor in a field or space, then your insights seem especially important.
What is your status within the EA community?
If you are a high-status individual, it seems valuable to have a lower bar for sharing your mistakes and failures because of the positive impact it can have on establishing and codifying community norms.
How competitive is your industry or profession—could this mistake be used against you?
This mostly applies to people working in non-EA paths such as politics. A handful of policy EAs have shared insights on the Forum anonymously to avoid such issues.
Others’ failures
What is the appropriate forum of engagement?
Not all feedback and discussion of mistakes needs to be public to achieve its goal, and sometimes providing private critiques and feedback can be more effective. However, there does seem to be a lot of value in being able to have difficult public discussions as a community, even if it does sometimes cost us. Often it's very valuable to provide feedback privately to an organisation first, and let them respond.
Sometimes, there are good reasons to share criticisms publicly. It can be useful to know whether others share your views and whether there is broader support for the feedback you're sharing. If the org is unresponsive to your private feedback and you still feel it's worth raising, that may be another reason to go public.
If you do think there is a good reason to make criticism public, it’s good practice to share a draft with the organisation before publicly posting to give them a chance to respond or make corrections. This doesn’t mean you need to make all the changes they suggest—but it could lead to a more productive conversation (e.g. this critique of Giving Green).
What is the positive impact you think your feedback will have? What is your theory of impact or change?
Try to think about whether your feedback is actually something the organisation could take action on. Suggesting that a small or capacity-strained org focus on 10 different projects might not cause any change, but suggesting they clearly communicate what they aren't doing, and how they plan to coordinate with other actors, could be really helpful.
A sub-question: How open are they to feedback? How likely are they to change their actions? Some actors are more resistant to feedback than others. If you've already seen others make the same critiques as you and haven't seen much positive change, it may not be as valuable to invest a lot of time in the critique (it may be more resource efficient to just signal boost or engage with existing critiques).
What is the status or position of the actor you are criticising?
When an actor has outsized influence in the EA community (e.g. a major grantmaker or public intellectual) then the benefit of scrutinising their work may be high and the cost to the actor is low as they are already in an established position.
In general it seems good to be cautious about directly criticising the work of individuals in a public space because it is hard to have sufficient context on an individual and this risks creating a hostile environment and harming discourse (see above).
Do you have time to be careful when expressing your level of confidence and extrapolating?
Because some discourse could reduce transparency, it might be worth spending some extra time thinking about how to express your thoughts. If you're talking about anecdotal data, it might be worth asking in the post whether others resonate with what you are saying. (This shouldn't prevent you from sharing your own experiences; quite the contrary, many personal experiences have sparked valuable community-wide discussions.)
Further reading
If you’re interested in actually doing this, some relevant reading might be a post I wrote on suggestions for online discourse norms. There are also lots of interesting posts in the discussion norms tag. I also found Giving and receiving feedback by Max Daniel very actionable—he suggests many specific questions and gives lots of examples, and other resources are linked in the comments of that post.
- ^
See GiveWell’s current and past mistakes pages for examples of both.
- ^
Here are some mostly non-EA examples. The EA-adjacent development charity Evidence Action shut down one of its programs a few years ago after publicly saying it wasn't effective enough. In academia, there is a strong norm against transparency around failure, as highlighted by economist Johannes Haushofer's CV of failures, which went viral in 2016 (and this earlier 2010 call in Nature for A CV of failures). One place where failures are discussed a lot, and sometimes embraced, is the startup world: leading thinkers in this space such as Eric Ries (The Lean Startup) and Paul Graham talk a lot about failure, and VCs sometimes look favourably on founders who have a failed startup behind them, if they can demonstrate they've learnt things. Ramit Sethi, a popular blogger who writes about careers, has a failure file where he aims to have 4 failures a month.
- ^
For those curious, the following organisations do not (to the best of my knowledge) have mistakes pages: FTX Future Fund, Open Philanthropy, One for the World, Founders’ Pledge, Rethink Priorities, Charity Entrepreneurship, Global Priorities Institute, Future of Humanity Institute. Some of these organisations do write about their mistakes in their annual reports. Other orgs that have mistakes pages: Sentience Institute.
- ^
Some other posts include the Good Technology Project's postmortem and a postmortem of a mental health app by Michael Plant; organisations also discuss their learnings in retrospectives (e.g. Fish Welfare Initiative) or in posts announcing decisions to shut down (e.g. Students for High Impact Charities). In the Rationalist community, there was the Arbital Postmortem. You can see more examples on the Forum postmortems and retrospectives tag, and examples from the LessWrong community in their analogous tag.
- ^
H/T Arjun Khandelwal for most of this section!
- ^
H/T Aaron Gertler & Adam Gleave
- ^
H/T Abi Olvera
- ^
H/T Arjun Khandelwal
- ^
H/T Ben Millwood
- ^
Chris Leong raised this concern on the pre-announcement for the contest for critiques and red teaming
Writing as a comment because it didn’t feel very central to the post. I want to share some thoughts on my motivation to write this post and how it evolved. Initially, I was going to write a slightly different post which emphasised the following points:
We’re relatively overreporting successes and underreporting mistakes, failures, or lessons learned. (pretty strong claim)
It’s difficult to quantify what is “enough” of the “right type” of discourse. This feels inherently fuzzy. It’s not just about the quantity of posts—I think the “right” level of discourse would make me feel like I’m getting exposed to lots of peoples’ internal models or hypotheses on how they expected things to go, and how they went wrong, and compare their models to mine to try and figure out if I could avoid making the mistakes they are making.
Other community members could find it valuable to read about those experiences (medium strength claim)
It would help them develop better models and thus make better decisions
It could help them feel less alone
It’s worth the cost for actors to invest more time into such reflection (somewhat confident claim)
I was motivated to write this for three reasons:
Despite having strong values of transparency in EA, there still feel like strong disincentives to write about our failures. The whole set-up of EA is kind of like “other people made mistakes, and we’re going to do better”. That sets a pretty high bar for trying new things, or sharing “dumb mistakes” (or “dumb questions”). Of course, if the bar is set too low, you might have much less impact than you could otherwise. But if it’s too high it could disincentivize people from trying things.
The recent influx of FTX funding, as well as the uptick in interest around EA entrepreneurship and people starting more, and more ambitious, projects, means that we are likely to see more failed early-stage EA projects. I wasn't sure how we'd deal with this.
I felt there isn't enough transparency around failure in meta work, despite the amount of uncertainty. It may be that we just don't make that many mistakes, but that seems very unlikely. When I reflect on my own time in community building, I've changed my mind many times (and continue to do so), and most major meta organisations have as well.
While writing this post the feedback I got made me realise that rather than just advocating for “more of X” it would be more valuable to help people by providing heuristics for making decisions on how and when to engage in failure discourse.
Thanks for sharing your motivations! Personally, I would have liked to read your original post, even if it was more one-sided, and got the other side elsewhere. Being helped with heuristics for making decisions is not really what I was looking for in this post; it feels paternalistic and contrived to me, and I'd enjoy you advocating earnestly for more of something you think is good.
Some things I like about this post:
- I like the topic; I am interested in failure, and places where failure and mistake-making are discussed openly feel more growthy.
- I liked that you gave lots of examples.
Some things I didn’t like about this post:
- Sometimes I couldn't see the full connections you were making, or I could but had to leap to them based on my own preconceptions; maybe they could be explained more? For example, a benefit was a stronger community, but you didn't explain the mechanism by which failure transparency leads to a stronger community. I don't think the Howie podcast supports the point; a lot of people liked the podcast, but how is that indicative of a stronger community exactly?
Things I disagree with in this post:
- I don’t think the Opportunity Cost point was well argued. In particular, you discussed transparency in general, with examples of publishing annual reports and so on, which take a lot of time. However, this post is about being transparent about mistakes and failure, not transparency in general. I think the Opportunity Cost is much lower for just publishing big mistakes, even though it takes some time to word it properly, and then there is the stress of it. But you can choose simply not to look at reactions on social media. Same as people can choose not to engage in lengthy threads about it.
- I think your Reputational Cost point would fit better on the other side, as some of the reasons you give would put it there. Also, I just think this is somewhat a normative cultural question rather than one about facts in the world. If my reputation will be destroyed in an area for publishing a mistake, either that is a good thing, or the person judging is undervaluing the growth/learning part and overvaluing a fixed view of people. I basically don't think someone who would incorrectly judge me negatively for publishing a mistake is worth me caring about the opinion of. Again, this is normative, not a fact about reality; it's about what kind of culture we want to create.
- Similar arguments to Reputational Cost apply to the Harming Discourse point—this is a normative culture question, we get to choose how we respond and whether we reward or disincentivise it! I would put it not as a risk/downside but in another category called cultural equilibrium or something, along with the reputation point.
- I don’t think the Career Risk point is different to the Reputational Cost point in any meaningful way. You can also take more ownership as an organisation rather than an individual, where appropriate.
I recognise that the things I disagree with are all in the downsides/risks section, and that is because I am biased and uninterested in critiquing the other side. I feel somewhat entitled to do this because I’m under the impression that you added this section in after feedback to make it more balanced, so it’s partially because I’m being mischievous and unfair (you made this easier), as well as not wanting to feel pressure myself to give a balanced comment and wanting to protest against feeling constrained in that way.