Invisible impact loss (and why we can be too error-averse)
TL;DR: It’s hard to really feel sad about impact that doesn’t happen, but it’s easy to feel sad about mistakes in existing work.
This is bad because we sometimes have a choice between making a good-but-small thing or a big-but-slightly-worse thing, and the mismatch in our emotional reactions to the downsides of the two options turns into a cognitive bias that harms our decision-making. This can make us too error-averse and err too much on the side of just not doing things at all.
For instance, if I’m choosing between making five imperfect cookies or one perfect cookie, I might notice the imperfections more than the four missing cookies, whereas it’s often better to just make more imperfect cookies.
In some cases, we should be quite wary of mistakes. I outline where I think we’re too error-averse and where error-aversion is appropriate below.
Summary of my concrete suggestions:
To counteract fear of criticism, foster a culture of celebrating successes and exciting work.
To notice invisible impact loss, keep the phrase “invisible impact loss” rattling around in your head, and try to quantify the options in decisions that are prone to this error.
Fight perfectionism; lean into agile project management, start half-assing, and try to determine what your real goals are.
Preamble: What sparked this thought
When I was on the Events Team at CEA, we had a conversation about whether to approximately double EA Global capacity at the last minute (a few weeks before the actual event) in order to admit more people who met the admissions bar. (This was the most recent conference for which capacity was a problem.)
I remember starting off skeptical about the idea. After all,
There would be more logistics issues; lunch lines would be long, etc.
We’d have to move to two venues, which is awkward. People would have to walk from one venue to another to get to a different meeting or session.
Doubling would put a lot of strain on the team, which would mean we’d be worse at responding to people, communicating things like the schedule or important COVID information, catching awesome ways to improve the conference…
And in general, the experience of the average attendee would probably be worse.
But someone else on the team made me realize a big thing I wasn’t fully tracking:
About 500 more people who met the admissions bar would get to experience the conference. However much impact the default conference would produce — sparked collaborations, connections for years to come, inspirations for new projects, positive career changes — there would be about double that.
Although the whole conversation was sparked by the idea that we had way more people we wanted to admit than slots for attendees, I wasn’t internalizing the benefit we would miss out on if we didn’t double. I couldn’t viscerally feel the loss of impact from a smaller but near-perfect conference the way I could feel my aversion to the various issues I could imagine with the bigger conference. The impact loss was invisible to me.
I now actively try to notice invisible impact loss. I’ve noticed discussions on the Forum that miss this, and worry that it’s stunting work that could be extremely valuable, so I decided to write this post.[1]
Note: EA Global conferences have changed since then. You can see some discussion of this here. I should also note that the conference didn’t end up facing many of the issues I worried about while we were deciding whether to double.
What causes this phenomenon? How can we push back? (Suggestions)
Here are some factors I think are at play, and ideas for what we can do about them. (Please add to this in the comments!) See the summary at the top.
1. Fear of criticism[2]
We’re much, much more likely to be criticized for doing something imperfectly than for not doing something.[3] (In part because it’s unclear who to criticize for not doing something.) This makes people more averse to doing anything, as they’re worried about the potential for criticism.
What can we do about fear of criticism?
It’s great that we’re a community that is critical, but I think we could celebrate successes and exciting work more than we currently do. This could give people an incentive to do stuff, as they might expect to be appreciated for that work. I also think self-celebration (bragging) should be normalized more.[4]
Concrete examples: more threads like this one, more generic comments of appreciation.
Relatedly, we can be more generous in our criticism (relevant resource, and a post on supportive scepticism). I think people in effective altruism already are more generous than is usual on the internet, but I’d guess that we can up our standards here.
(And we can individually reframe our attitude towards criticism. We can actively train ourselves to be receptive to criticism and to process it productively. The standard advice is to view every criticism as an opportunity to grow on the object level (improve that type of work), but I also think we can grow on a meta level — view it as an opportunity to build our criticism-response muscles.)
2. Invisible impact loss is hard to notice (it’s “invisible”!), but noticing flaws is easy
My suggestions:
When making decisions like whether to double a conference, just try hard to notice the invisible impact loss. Have the phrase “invisible impact loss” in the back of your mind.
Fermi estimates can be a really good grounding: just try to quantify the impact of each option for every significant decision. Then you can account for the cost of errors and imperfections while also noticing the value that would be lost to invisible impact loss.
Note: you don’t need a stats degree for this!
Two examples:
For the conference, we could have done something really rough; most of the value is in the attendees’ experiences, and most of the cost is their time. So let’s just consider the net benefit of the conference to/via every attendee (without considering e.g. the cost of the additional venue). Then, as long as the net value per attendee stays positive and doesn’t fall to half (or less) of what it would have been, doubling is worth it.
In the (unrealistic) cookie case, suppose that most of what I care about is how happy people are eating the cookies. If an imperfect cookie gives people 10 units of happiness while a perfect cookie gives people 12 units of happiness, then I’m comparing a total of 50 units of happiness (5 × 10) to 12 units of happiness (1 × 12).
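To make the arithmetic in these two examples concrete, here’s a minimal Python sketch. The attendee count, happiness units, and per-attendee values are all illustrative assumptions, not real estimates.

```python
# A rough back-of-the-envelope sketch of the two comparisons above.
# Every number here is an illustrative assumption, not a real estimate.

def total_value(units, value_per_unit):
    """Total value is just quantity times per-unit value."""
    return units * value_per_unit

# Cookie example: five imperfect cookies vs. one perfect cookie.
five_imperfect = total_value(units=5, value_per_unit=10)  # 50 units of happiness
one_perfect = total_value(units=1, value_per_unit=12)     # 12 units of happiness
print(f"Five imperfect cookies: {five_imperfect}, one perfect cookie: {one_perfect}")

# Conference example: doubling is worth it as long as the per-attendee net
# value of the bigger conference stays positive and above half of the
# per-attendee value of the smaller one.
small_attendees = 500            # assumed original capacity (illustrative)
value_per_attendee_small = 1.0   # normalise the small conference's per-attendee value to 1
big_attendees = 2 * small_attendees
break_even = total_value(small_attendees, value_per_attendee_small) / big_attendees
print(f"Doubling wins if per-attendee value stays above {break_even} (i.e. half the original)")
```

The point is only that both comparisons boil down to quantity times per-unit value on each side, so the break-even threshold falls out immediately.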
Resources for Fermi estimates: intro to Fermi estimation, notes from my workshop on Fermi estimation, and the relevant Forum Wiki entry.
3. Perfectionism
If you’re anything like me, the idea of kicking off something with known flaws (or something that you know has flaws that you haven’t identified yet) is aversive.
A few things help me:
A ship-fast-and-iterate mentality that I’m developing mostly by working with people who are better than I am at this. The rough idea is that you should deliver something useful fast, even if it has flaws, which allows you to get feedback and identify and fix more flaws or otherwise improve the product. I think this is closely related to “agile” approaches (e.g. in software development) and to “lean” principles, although I don’t know much about the theory.
Half-assing it with everything you’ve got and the rest of the Replacing Guilt series.
Deadlines and accountability mechanisms. The best way for me to make sure that I actually post something that I’ve written is to create a deadline for it.
When this reasoning doesn’t quite apply — reasons to be error-averse (and some suggestions)
You can find a related discussion here, and a talk on accidental negative impacts here.
Here are some cases when, given a decision between “a good project” and “an OK project that’s bigger or more ambitious,” it might be better to err on the side of the better or more polished (and less ambitious) project (do less, but better):
When the downside risks are high
Or the other costs of a low-quality version of the project are high
When you might prevent someone else from doing a better job (or when the unilateralist’s curse applies)
When this type of project often has long tails of success (that depend on quality)
When a key goal of the project is your own development and training
In general, I think that a broad way to mitigate this kind of risk is to get external feedback and seriously listen to it.
Note that sometimes more than one of these applies. Let’s run through these cases one by one.
(1) The downside risks are high
Example: Say you’re baking cookies, and one of your ingredients is poisonous if not prepared correctly.[5] Then a more efficient and less cautious method of preparation might not be appropriate — even if it doubles your output. The downside risks here are high! You might kill someone. Better make one safe cookie than 10 potentially lethal cookies.
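To see why the fast method loses even though it produces ten times as many cookies, here’s a toy expected-value calculation. The harm size and the probability of a bad batch are made-up numbers chosen only to illustrate the shape of the argument.

```python
# Toy expected-value comparison for the risky-cookie example.
# All numbers are made up purely to illustrate the shape of the argument.

value_per_safe_cookie = 1.0      # benefit of one well-prepared cookie
harm_if_poisonous = -10_000.0    # (assumed) cost of seriously harming someone
p_poisonous = 0.01               # (assumed) chance the fast method goes wrong, per cookie

ev_one_safe = 1 * value_per_safe_cookie

# Each "fast" cookie is usually fine, but occasionally catastrophic.
ev_per_fast_cookie = (1 - p_poisonous) * value_per_safe_cookie + p_poisonous * harm_if_poisonous
ev_ten_fast = 10 * ev_per_fast_cookie

print(f"EV of 1 safe cookie:   {ev_one_safe:+.1f}")
print(f"EV of 10 fast cookies: {ev_ten_fast:+.1f}")  # strongly negative under these assumptions
```

Under these assumptions, even a 1% chance of a catastrophic outcome per cookie makes the “efficient” batch strongly negative in expectation.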
Downside risks are sometimes high in EA projects.[6] Some reasons the downside risks of a project might be high:
There are information hazards in this area
The area is inherently dangerous; there are other serious risks from this kind of work, like poisoning someone
In EA, this might apply if your project is physically dangerous, if you’re interacting with vulnerable people (like minors), or if the work is risky for some other reason.
You might significantly harm the reputation of something that relies on its reputation to succeed
E.g. if you’re establishing a new kind of publication, and you publish a bunch of articles with basic grammar mistakes, the mistakes themselves may not matter much, but people might write the publication off as amateurish and unprofessional.
The project might suck up a bunch of valuable resources
E.g. you know that your project will be appealing to people, you’re not sure it’s that valuable, and it’ll be costly either in time or money.
A note: I’m not sure we should spend too much time worrying about this; I think we have OK systems for not dumping valuable resources into low-expected-impact projects, or at least we generally shouldn’t expect low-expected-impact projects to outcompete high-expected-impact ones.
This is the first project of its kind, and you might lock in a bad norm or pattern (this is related to (2) — preventing someone else from doing a better job).
Mitigation: check whether you’re operating in a risky area, or if any of the above applies. Also, get feedback from others (although act carefully around information hazards). If you’re not sure why no one has done a thing yet, take the possibility of risks or a unilateralist’s curse seriously.
(2) You might prevent someone else from doing a better job
Example: Say you’ll be talking to 10 important state officials in a variety of offices. You’re an expert on biosecurity, so you could focus on carefully explaining risks from pandemics to the two relevant officials, or you could prepare 10 pitches to the 10 officials on a range of issues that matter from an EA perspective, but you might do the latter poorly; you just don’t understand global health or AI safety very well. You might want to focus on the biosecurity pitches to avoid saying something incorrect about global health or AI that makes the relevant officials dismiss the concern as that of an amateur.
Another example: You’re starting the first EA group in a given area, and you don’t have much time (or expertise) to do it well. Someone else might start one soon, but won’t do it if you’ve already started it, and the group you start might be worse than their version.
Mitigation: In this case, it might be worth spending more time understanding the counterfactual; if you don’t do (a bigger version of) the project, will someone else? It might be better to coordinate.
(3) This type of project often has long tails of success (that depend on quality)
Example: You’re working on a book. Book success is long-tailed; most books don’t do very well, and some become wildly popular. [Disclaimer: this is something I’m pretty sure I remember reading about in a source I thought was trustworthy, but don’t have a quick source for.] You could write two ~OK books or one great book.[7] You might want to just write the great book, as it’s likely to get more than double the readers!
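As a rough illustration of how long tails interact with quality, here’s a toy Monte Carlo sketch. The lognormal readership distribution, the “quality” parameters, and the size of the quality gap are all invented assumptions, not estimates of real book sales.

```python
# Toy model of long-tailed outcomes: readership is drawn from a lognormal
# distribution whose location depends on "quality". The distribution and all
# parameter values are invented purely for illustration.
import random

def average_readers(quality, n_sims=100_000):
    """Monte Carlo estimate of expected readership for one book of a given quality."""
    total = 0.0
    for _ in range(n_sims):
        # Higher quality shifts the whole long-tailed distribution upwards,
        # which mostly shows up in the size of the rare big successes.
        total += random.lognormvariate(quality, 2.0)
    return total / n_sims

random.seed(0)
ok_book = average_readers(quality=1.0)      # an ~OK book (arbitrary units)
great_book = average_readers(quality=2.5)   # a great book (assumed quality gap)

print(f"Two OK books:   ~{2 * ok_book:,.0f} expected readers")
print(f"One great book: ~{great_book:,.0f} expected readers")
```

Under these made-up parameters the single great book ends up with more readers than the two OK books combined, but the conclusion is sensitive to how big the quality gap really is, which is why the mitigation below matters.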
Mitigation: seriously check that you have something that might be wildly successful,[8] and if you notice that you do, go ahead with that (rather than spending energy on volume of projects).
(4) When a key goal of the project is your own development or training
Example: Say you’re working on one of your first research papers. Your basic goal is to get it published, and you think you can do that by half-assing. However, you also want to test your fit for research and learn great research practices. You won’t be able to properly learn and test your fit if you half-ass. Then it might make sense to go all-in and really focus on trying to make something excellent, even if you think the impact from this one paper is not that big.
Mitigation: Notice if you expect that most of the value of a given project is via your own development. In those cases, it might make sense to focus on making something excellent.
Closing thoughts
The summary of this post is at the top — in very brief, notice invisible impact. Please argue with me and add your own examples in the comments!
Related links (most of which were mentioned)
Why we should err in both directions
And related: Terminate deliberation based on resilience, not certainty
Half-assing it with everything you’ve got
Celebrations and gratitude thread
Accidental harm
EA and the current funding situation
[Edit] Some related content (will try to add as I see more):
“Pandemic Ethics and Status Quo Risk” (Richard Y Chappell)
Thanks to everyone who gave me feedback on drafts of this post!
I’m not the first person to talk about this, by a long shot. For instance, in a recent post, William MacAskill writes:
It seems to me to be more likely that we’ll fail by not being ambitious enough; by failing to take advantage of the situation we’re in, and simply not being able to use the resources we have for good ends.
It’s hard to internalise, intuitively, the loss from failing to do good things; the loss of value if, say, EA continued at its current giving levels, even though it ought to have scaled up more. For global health and development, the loss is clear and visceral: every year, people suffer and lives are lost. It’s harder to imagine for those concerned by existential risks. But one way to make the situation more vivid is to imagine you were in an “end of the world” movie with a clear and visible threat, like the incoming asteroid in Don’t Look Up. How would you act? For sure, you’d worry about doing the wrong thing. But the risk of failure by being unresponsive and simply not doing enough would probably weigh on you even harder.
MacAskill again:
… there are asymmetric costs to trying to do big things versus being cautious. Compare: (i) How many times can you think of an organisation being criticised for not being effective enough? and (ii) How many times can you think of someone being criticised for not-founding an organisation that should have existed? (Or, suppose I hadn’t given a talk on earning to give at MIT in 2012, would anyone be berating me?) In general, you get public criticism for doing things and making mistakes, not for failing to do anything at all.
Related: Action/inaction distinction (“doing vs allowing harm”)
“Bragging” feels very unnatural to me, but I do think it can be useful for the reasons I list.
I don’t really know why you’d make cookies with a poisonous ingredient, but let’s go with the example anyway. Here’s an example of such an ingredient.
You can find more discussion about this here.
Although note that even with books, you might want to test some ideas first and see if you can hit on product-market fit. A source I’ve been told is useful on this topic is Write Useful Books.
Related: How to choose an aptitude by Holden Karnofsky