I’ve had a sense for a while that EA is too risk averse, and should be focused more on a broader class of projects most of which it expects to fail. As part of that, I’ve been trying to collect existing arguments related to either side of this debate (in a broader sense, but especially within the EA community), to both update my own views as well as make sure I address any important arguments on either side.
I would appreciate it if people could link me to other important sources. I’m especially interested in arguments for more experimentation, since I mostly found arguments for the opposite.
Kelsey Piper’s “On ‘Fringe’ Ideas” makes a pro-risk argument in a certain sense (that we should be kind and tolerant to people whose ideas seem strange and wasteful).
I’m not sure whether this is written up anywhere, but one simple argument is that many current EA projects were risky when they started. GiveWell’s two co-founders had no formal experience in global health when they began evaluating global health charities, and the organization nearly collapsed in scandal within its first year. 80,000 Hours took on an impossibly broad task with a small staff (I don’t know whether any of them had formal career-advising experience). And yet, despite various setbacks, both projects wound up prospering without doing permanent damage to the EA brand (maybe a few scrapes in the case of 80K and earning to give, but that seems to be more about where the media’s attention was directed than about what 80K really believed).
1. 80,000 Hours’ piece on accidental harm: https://80000hours.org/articles/accidental-harm/#you-take-on-a-challenging-project-and-make-a-mistake-through-lack-of-experience-or-poor-judgment
2. How to avoid accidentally having a negative impact with your project, by Max Dalton and Jonas Vollmer: https://www.youtube.com/watch?v=RU168E9fLIM&t=519s
3. Steelmanning the case against unquantifiable interventions, by David Manheim: https://forum.effectivealtruism.org/posts/cyj8f5mWbF3hqGKjd/steelmanning-the-case-against-unquantifiable-interventions
4. EA is Vetting Constrained: https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained
5. How X-Risk Projects are different from Startups, by Jan Kulveit: https://forum.effectivealtruism.org/posts/wHyy9fuATeFPkHSDk/how-x-risk-projects-are-different-from-startups