From a “current generations” perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today
I think reducing GCRs seems pretty likely to wildly outcompete other traditional approaches[1] if we use a slightly broad notion of current generations (e.g. currently existing people), due to the potential for a techno-utopian world which makes the lives of currently existing people >1,000x better (this heavily depends on diminishing returns and other considerations). E.g., immortality, making them wildly smarter, letting them run many copies in parallel, giving them insanely good experiences, etc. I don’t think BOTECs will be a crux for this unless we start discounting things rather sharply.
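As a rough illustration of that last claim, here is a minimal toy BOTEC sketch (all numbers are my own illustrative assumptions, not estimates from anyone's actual model) of why a large post-transition welfare multiplier tends to dominate near-term interventions unless future welfare is discounted quite sharply:

```python
# Toy BOTEC: value of reducing catastrophic risk for currently existing people,
# assuming a possible "techno-utopian" transition that multiplies welfare.
# Every number below is an illustrative assumption, not a real estimate.

baseline_welfare = 1.0          # annual welfare of an existing person today (normalized)
utopia_multiplier = 1000        # how much better life could get post-transition (the ">1,000x" claim)
p_transition = 0.5              # assumed probability the transition happens at all
years_until_transition = 30     # assumed time until the transition
annual_discount = 0.0           # try 0.0 vs e.g. 0.05 or 0.15 to see how much discounting matters

def discounted(value: float, years: int, rate: float) -> float:
    """Present value of a future welfare payoff under exponential discounting."""
    return value / ((1 + rate) ** years)

# Expected (per-person) value of surviving to and through the transition:
post_transition_value = discounted(
    baseline_welfare * utopia_multiplier * p_transition,
    years_until_transition,
    annual_discount,
)

# Value of a direct near-term improvement (e.g. doubling one person's welfare for a decade):
near_term_value = baseline_welfare * 1.0 * 10

print(f"post-transition value per person: {post_transition_value:.1f}")
print(f"near-term improvement value:      {near_term_value:.1f}")
# With annual_discount = 0.0 the post-transition term dominates by ~50x here;
# under these toy numbers it takes a discount rate of roughly 14%/year or more
# to flip the comparison.
```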
If reducing GCRs actually is more cost-effective under a “current generations” worldview, then I question why EAs would donate to global health / animal charities (since this is no longer a question of “worldview diversification”, just raw cost-effectiveness)
IMO, the main axis of variation for EA-related cause prio is “how far down the crazy train do we go”, not “person-affecting (current generations) vs otherwise” (though views like person-affecting ethics might be downstream of which crazy train stop you get off at).
Mildly against the Longtermism --> GCR shift
Idk what I think about the Longtermism --> GCR shift, but I do think we shouldn’t lose “the future might be totally insane” and “this might be the most important century in some longer view”. And I could imagine a focus on GCRs killing that broader view of history.
That said, if we literally just care about experiences which are somewhat continuous with current experiences, it’s plausible that speeding up AI outcompetes reducing GCRs/AI risk. And it’s plausible that there are more crazy-sounding interventions which look even better (e.g. extremely low-cost cryonics). Minimally, the overall situation gets dominated by “have people survive until techno utopia and ensure that techno utopia happens”. And the relative tradeoffs between having people survive until techno utopia and ensuring that techno utopia happens seem unclear and will depend on some more complicated moral view. Minimally, animal suffering looks relatively worse to focus on.
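One way to see why those tradeoffs are unclear is to decompose the expected value for a currently existing person multiplicatively. The sketch below is a toy decomposition under my own illustrative probabilities (not figures from anything above): the value of a marginal improvement on either lever scales with where the other factor currently sits.

```python
# Toy decomposition under the "survive until techno utopia AND techno utopia happens" framing.
# All probabilities and values are illustrative assumptions.

p_survive = 0.7      # assumed chance an existing person survives until the transition
p_utopia  = 0.4      # assumed chance the transition goes well (techno utopia happens)
v_utopia  = 1000.0   # assumed value of utopia to that person (arbitrary units)

def expected_value(p_s: float, p_u: float, v: float) -> float:
    """Expected value is multiplicative: both survival and a good transition are needed."""
    return p_s * p_u * v

baseline = expected_value(p_survive, p_utopia, v_utopia)

# Marginal value of a 1-percentage-point improvement on each factor:
dv_survival = expected_value(p_survive + 0.01, p_utopia, v_utopia) - baseline
dv_utopia   = expected_value(p_survive, p_utopia + 0.01, v_utopia) - baseline

print(f"baseline EV:                        {baseline:.1f}")
print(f"+1pp survival (e.g. cryonics):      {dv_survival:.1f}")
print(f"+1pp good transition (AI risk):     {dv_utopia:.1f}")
# The marginal value of each lever is proportional to the current level of the *other*
# factor, so which intervention wins depends on where you think the probabilities sit --
# which is why the tradeoff ends up resting on a more complicated moral and empirical view.
```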