I’m curious if you’ve considered the conjunction fallacy.
From what I see, there are seven distinct events that could go wrong, each for a different reason:

* We will never develop the resolve to colonize space
* We cannot fit everything we need to build a civilization into a spaceship
* We cannot get the spaceship going fast enough
* We cannot keep enough civilization-building materials intact during the voyage
* We cannot slow the spaceship down when we’re close to the target
* We cannot build the civilization after arriving at the target, for some reason
* Some unknown unknown will go wrong
As you know, even if each of the seven individual events is unlikely (say, 10% each), the probability that at least one of them goes wrong is about 52%.
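To spell out the arithmetic, assuming the seven failure modes are independent and each has a 10% chance:

P(at least one goes wrong) = 1 − (1 − 0.1)^7 ≈ 1 − 0.478 ≈ 0.52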
Thoughts?
-
Also, another idea I wanted to ask whether you’ve considered is space cities: rather than making the long journey to a far-flung habitable planet, we just continue to exist in constructed facilities in space, using non-habitable planets for construction materials. Though I haven’t thought about it that much...
I’m really glad the Global Priorities Project exists and I look forward to seeing more research. I also think this piece was particularly well written, in an accessible yet academic voice.
That being said, I’m not sure what the intention of this piece is, but it feels neither novel nor thorough. I’m excited that my calculator is linked in this piece, but to clarify, I no longer hold the view that those cost-effectiveness estimates should be taken as the end-all of impact, and I don’t think any EAs still do.
Furthermore, many people now argue that the impact of working on animal causes comes from producing a long-term gestalt shift in values, so that we help not only humans but future animals as well. Ending factory farming, for example, would have a large compounding effect on all future animals who would otherwise have been factory farmed, and attitude change is the only way to make that happen.
Likewise, some people (though I’m unsure) think that spreading anti-speciesism might be a critical gateway toward helping people expand their moral concern to wild animals or computer programs (e.g., suffering subroutines) in the far future too.
It’s not just that this piece doesn’t address these considerations; it seems to ignore the possibility entirely by focusing (somewhat dogmatically) on humans.