Additional arguments to explore:
Replaceability: it might be better on the margin to focus on increasing fertility rather than on slowing down ageing, on grounds of cost-effectiveness, tractability, and generating comparable (or greater) expected happy life. This would also limit the problem of immortal dictators and entrenched bad ideas.
It is hard to argue that reducing ageing beats X-risks, unless one thinks slowing down ageing is a good intervention for reducing X-risks. We might care more about the long-term future if we live longer, but human psychology is such that we might discount our future selves anyway, and X-risks are not actually that much of a “long term” issue: AI X-risks, nuclear war, and other pandemics have a decent chance of happening within a couple of decades (so most people alive today have a decent chance of living through one of them anyway).
Human-aligned AI will help us with anti-ageing research: prioritise aligning AI first, and ageing will then be easier to solve.
Replaceability with happy AIs: biological organisms might not be a cost-effective way to create a lot of happy life in the world. Resources might be better allocated to building happy simulations (in the form of brain emulations or human-aligned AIs) that can populate the world more effectively and live arbitrarily long lives in which they can self-modify (as long as they stay aligned).
Existential risks and AI considerations aside:
Ageing generates a significant amount of suffering (old bodies in particular tend to be painful for a long while) and might be one of the dominant burdens on healthcare systems. I wonder, for example, how ageing compares with standard global health and development issues; it could plausibly match or even exceed them in terms of scale (I would like to see a cost-benefit analysis of that).
One key issue is that we very likely do not know enough about what utopia means or how to achieve it. We also don’t know enough about the current expected value of the long-run future (even conditional on survival). And we likely won’t make much progress on these difficult questions before AI or other X-risks arrive. Reducing P(extinction) seems to be a necessary condition for being in a position to use safe AI to make progress on the important fields we need to figure out in order to understand what utopia means and how to increase P(Utopia) (while avoiding downside risks such as S-risks in the process).
Examples of fields that could be particularly important to point our safe and aligned AI towards:
- Moral philosophy (in particular, to check whether total utilitarianism is correct or whether we can update to better alternatives)
- Governance mechanisms and economics to implement our extrapolated ideal moral system in the world
It might be preferable to focus on reducing P(doom) AND reducing the risk of a premature, irreversible race to the universe, to give us ample time to use our safe and aligned AI to solve other important problems and make substantial progress in the natural sciences, social sciences, and philosophy (a “long reflection” with AI that does not need to be long on astronomical timescales).