How x-risk projects are different from startups
Sometimes new EA projects are compared to startups, and the EA project ecosystem to the startup ecosystem. While I often make such comparisons myself, it is important to highlight one crucial difference.
Causing harm is relatively easy, as explained in:
- How to avoid accidentally having a negative impact with your project, a talk by Max Dalton and Jonas Vollmer from EA Global 2018, and
- Ways people trying to do good accidentally make things worse, a post by 80,000 Hours.
Also, most startups fail.
However, when startups fail, they usually can’t go “much below zero”. In contrast, projects aimed at influencing the long-term future can have negative impacts many orders of magnitude larger than the size of the project. It is possible for small teams, or even small funders, to cause large harm.
While investors in the usual startup ecosystem are looking for unicorns, they do not have to worry about anti-unicorns: ventures whose harm is as outsized as a unicorn’s success.
Also, the most harmful long-term-oriented projects may be the ones that succeed in a narrow project sense: they have competent teams, produce outputs, and actually change the world, but in ways whose harm is not obvious.
Implications
It is, in my opinion, wrong to directly apply models like “we can just try many different things” or “we can just evaluate the ability of teams to execute” to EA projects aiming to influence the long-term future or decrease existential risk.
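To make the asymmetry concrete, here is a toy Monte Carlo sketch of my own (not from the original post; all probabilities and payoffs are made-up assumptions, chosen only to show the shape of the argument). It compares a startup-like portfolio, where the downside is roughly capped at losing the investment, with a long-term-future portfolio that adds a small chance of an “anti-unicorn” loss orders of magnitude larger than the project.

```python
import random

random.seed(0)  # reproducible illustration

def startup_outcome():
    """One startup-like project, in multiples of the initial investment.
    Downside is capped near -1 (you lose the investment); upside includes
    rare unicorns. All numbers are made up for illustration."""
    r = random.random()
    if r < 0.01:          # ~1% unicorns
        return 1000.0
    elif r < 0.40:        # modest successes
        return 2.0
    else:                 # most startups fail, but can't go much below zero
        return -1.0

def xrisk_project_outcome():
    """One long-term-future project: same upside profile, plus a small
    chance of an 'anti-unicorn' whose harm is orders of magnitude larger
    than the project itself. Again, made-up numbers."""
    r = random.random()
    if r < 0.01:
        return 1000.0
    elif r < 0.40:
        return 2.0
    elif r < 0.99:
        return -1.0
    else:                 # ~1% anti-unicorns
        return -100_000.0

N = 100_000
startup_mean = sum(startup_outcome() for _ in range(N)) / N
xrisk_mean = sum(xrisk_project_outcome() for _ in range(N)) / N

# With a capped downside, trying many things pays off on average;
# with a heavy left tail, the rare large harms dominate the expectation.
print(f"mean startup outcome:        {startup_mean:9.1f}")
print(f"mean x-risk project outcome: {xrisk_mean:9.1f}")
```

Under these made-up numbers the startup portfolio comes out clearly positive in expectation, while the long-term-future portfolio is dominated by the rare large losses. This is why evaluating only the team’s ability to execute, or simply launching many experiments, is not enough in this space.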
For this reason too, I believe projects aiming to influence the long-term future, decrease existential risk, or do something ambitious in the meta or outreach space are often genuinely vetting-constrained, and that grant-making in this space is harder than in many other areas.
Note: I want to emphasise that this post should not be rounded off to “do not start projects” or “do not fund projects”.