The value of the future conditional on civilization surviving seems positive to me, but not robustly so. I think the main argument for its being positive is theoretical (e.g., "spreading happiness to the stars seems little harder than just spreading"), but the historical/contemporary record is ambiguous.
The value of improving the future seems more robustly positive, if it is tractable, and I suspect it is not that much less tractable than extinction risk work. I think a lot of AI risk work serves this goal as well as the x-risk goal, for reasons Will MacAskill gives in What We Owe the Future. Understanding digital minds, developing direct interventions for them, and designing political processes that account for them seem like plausible candidates. Some work on how to design democratic institutions in the age of AI also seems plausibly tractable enough to compete with extinction risk work.