How x-risk projects are different from startups

Sometimes new EA projects are compared to startups, and the EA project ecosystem to the startup ecosystem. While I often make such comparisons myself, it is important to highlight one crucial difference.

Causing harm is relatively easy, as explained in

Also, most startups fail.

However, when startups fail, they usually can't go "much below zero". In contrast, projects aimed at influencing the long-term future can have negative impacts many orders of magnitude larger than the size of the project. It is possible for small teams, or even small funders, to cause large harm.

While investors in the usual startup ecosystem are looking for unicorns, they do not have to worry about anti-unicorns.

Moreover, the most harmful long-term-oriented projects could be the ones that succeed in a narrow project sense: they have competent teams, produce outcomes, and actually change the world, but in a way that is wrong without being obviously so.


It is, in my opinion, wrong to directly apply models like "we can just try many different things" or "we can just evaluate the ability of teams to execute" to EA projects aiming to influence the long-term future or decrease existential risk.

For this reason too, I believe that projects aiming to influence the long-term future, decrease existential risk, or do something ambitious in the meta or outreach space are often genuinely vetting-constrained, and that grant-making in this space is harder than in many other areas.

Note: I want to emphasise that this post should not be rounded off to "do not start projects" or "do not fund projects".