Often (in EA in particular) the largest cost of a project that is started and then fails isn’t to you, but is a hard-to-see counterfactual impact.
Imagine I believe that building a synth bio safety field is incredibly important. Without a real background in synth bio, I go about building the field, but because I lack context and subtle field knowledge, I screw it up after having reached out to almost all the key players. They would now be conditioned to think that synth bio safety is something pursued by naive outsiders who don’t understand synth bio. This makes it harder for future efforts to proceed. It makes it harder for them to raise funds. It makes it harder for them to build a team.
The worst case is that you start a project, fail, but don’t quit. This can block the space and stop better projects from entering it.
These risks can be worked around, but it seems that many of your assumptions are conditional on not having these sorts of large negative counterfactual impacts. While that may work out, it seems overconfident to assume a 0% chance of this, especially if the career-capital-building steps would actually be building relevant domain knowledge.
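To make that concrete, here is a minimal back-of-envelope expected-value sketch (all numbers are hypothetical placeholders, not estimates from this discussion) showing how even a modest chance of blocking the space can flip the sign of a project’s expected value:

```python
# Back-of-envelope expected-value sketch. All numbers are hypothetical
# placeholders, not estimates from this discussion.

value_if_success = 100        # value created if the project succeeds (arbitrary units)
p_success = 0.3               # chance the project succeeds
p_blocks_space = 0.15         # chance a failed attempt "poisons the well" for later efforts
value_of_future_effort = 300  # value a better-run future effort would have created

ev_direct = p_success * value_if_success
ev_counterfactual_harm = (1 - p_success) * p_blocks_space * value_of_future_effort
ev_net = ev_direct - ev_counterfactual_harm

print(f"Direct EV:           {ev_direct:.1f}")            # 30.0
print(f"Counterfactual harm: {ev_counterfactual_harm:.1f}")  # 31.5
print(f"Net EV:              {ev_net:.1f}")                # -1.5
```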
Agreed. This updates my view.
Would asking people on the street if they’d be willing to donate money to effective charities such as AMF (or similar marketing efforts that try to raise money quickly rather than focus on high-quality movement building) have this negative counterfactual impact?
In general, what is a good way to evaluate the risk of a negative counterfactual impact for candidate projects one is considering launching?