I think you raise some really interesting points here, and I am inclined to agree with your skepticism of longtermism.
I just have one comment on your "tractability" section. In my understanding, longtermists advocate that we should prioritize reducing existential risk in the near term, and say very little about reducing it in the long term. I don't think I have seen longtermists advocating for your claim (8b) (although correct me if I'm wrong!). I think you're right that the tractability objection would make this claim seem very far-fetched.
The "long term" bit of longtermism is relevant only to how longtermists assess the value of reducing near-term existential risk, as you explain in your introduction. Longtermists believe that reducing near-term existential risk is overwhelmingly important, in a way that other people don't (although, as you also point out, most people would still agree it is extremely important!).
I think the crucial point for longtermists is that reducing near-term existential risk is one of the only plausibly tractable ways of having a very large positive influence on the far future. We "only" have to become convinced that the future has astronomically large positive expected value, and this then automatically implies that reducing near-term existential risk has an astronomically large positive expected impact. And reducing near-term extinction risk is something it feels like we have a chance of succeeding at, in a way that reducing extinction risk in 5,000 years' time doesn't.
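To put that expected-value step in symbols (a toy calculation, with purely illustrative numbers rather than figures any particular longtermist endorses): if the future conditional on survival has expected value $V$, then reducing near-term extinction probability by $\delta$ raises the expected value of the world by roughly $\delta \cdot V$, which is astronomical whenever $V$ is, even for tiny $\delta$:

$$
\Delta \mathbb{E}[\text{value}] \approx \delta \cdot V, \qquad \text{e.g. } \delta = 10^{-9},\; V = 10^{35}\ \text{lives} \;\Rightarrow\; \Delta \mathbb{E}[\text{value}] \approx 10^{26}\ \text{lives}.
$$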
If anything, not only do longtermists not focus on reducing existential risk thousands of years from now, you can also argue that their worldview depends on the assumption that this future existential risk is already astronomically low. If it isn't, and there is a non-negligible probability per year of humanity being wiped out that persists indefinitely, then our expected future can't be that big. This is the "hinge of history"/"precipice" assumption: existential risk is quite big right now (so it's a problem we should worry about!), but if we can get through the next few centuries then it won't be very big after that (so the expected value of the future is astronomical).
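Here's a rough sketch of why, using a deliberately simplified constant-hazard model (my illustration, not something from your post): if humanity faces a constant per-year extinction probability $p$, the probability of surviving at least $t$ more years is $(1-p)^t$, so the expected number of future years is just a geometric series:

$$
\mathbb{E}[T] \;=\; \sum_{t=1}^{\infty} \Pr(T \ge t) \;=\; \sum_{t=1}^{\infty} (1-p)^t \;=\; \frac{1-p}{p} \;\approx\; \frac{1}{p}.
$$

With $p = 10^{-3}$, that's an expected future of only about a thousand years. Getting an astronomically valuable future out of this model requires $p$ to drop to essentially zero at some point, which is exactly the precipice assumption.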