The challenge of conceptualizing and estimating costs and benefits for such a large and diffuse stakeholder group, the vast majority of whom can’t speak for themselves, is daunting to say the least. Longtermists have partially gotten around that challenge, however, by focusing on “existential risks”, meaning risks of events that would permanently, drastically reduce the potential for value in the future. If the future could contain vast numbers of morally relevant beings with flourishing lives, and those existential risks could irreversibly prevent them from existing or worsen their lives, it may be reasonable to simply focus on the proxy goal of existential risk reduction.
Also, I don’t think reducing existential risk has intrinsic value; rather, its value comes from the increase in WELLBYs it produces, in expectation. In that way, the two kinds of measures (near-term vs. long-term) and their outcomes are also ‘in a common currency’ and thus, in principle, comparable.
I think the following part has some problems, which I recently addressed here: “Future people might not exist”