What we need is an argument for why it would be good in expectation, compared to all these other cause areas.
Yeah, the strong longtermism paper lays out this argument, and I also provide a short sketch of it here. At its core is the expected vastness of the future, which is what allows longtermism to beat other cause areas in expectation. The argument for “normal” longtermism, i.e. not “strong”, has pretty much the same structure.
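To make the structure concrete, here is a minimal expected-value sketch. The numbers are purely illustrative assumptions for exposition, not estimates from the paper:

\[
\underbrace{\Delta p}_{\text{reduction in extinction risk}} \times \underbrace{N}_{\text{expected future lives}} = 10^{-6} \times 10^{16} = 10^{10} \text{ lives saved in expectation,}
\]

which dwarfs a near-term intervention that saves, say, \(10^{4}\) lives with certainty. The vastness of \(N\) is doing all the work: even a tiny, uncertain reduction in risk dominates once it is multiplied by the size of the future.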
Future well-being does matter, but focusing on existential risk doesn’t necessarily lead to greater future well-being. It leads to humans being alive. If the future is filled with human suffering, then a focus on existential risk could be one of the worst focus areas.
Yes, that’s true. Again, we’re dealing with expectations, and most people expect the future to be good if we manage not to go extinct. But it’s also worth noting that reducing extinction risk is just one class of reducing existential risk. If you think the future will be bad, you can work to improve the future conditional on us surviving or, in theory, you can work to make us go extinct (though this is of course a bit out there). Improving the future conditional on our survival might involve tackling climate change, improving institutions, or aligning AI.
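One way to see why both strategies count as longtermist is a simple decomposition of the expected value of the future (this is my framing for illustration, not notation from the paper):

\[
\mathbb{E}[V] = p_{\text{survive}} \times \mathbb{E}[V \mid \text{survive}].
\]

Reducing extinction risk raises the first factor, while trajectory-improving work like the examples above raises the second. And if you think \(\mathbb{E}[V \mid \text{survive}]\) is negative, only the second lever (or, in the extreme case mentioned, lowering the first factor) looks attractive.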
And, to reiterate, while we focus on these areas to some extent now, I don’t think we focus on them as much as we would in a world where society at large accepts longtermism.