I think dying is bad.
Also, I’m not sure why “no life extension” and “no AGI” have to be linked. We could do life extension without AGI; it would just be harder.
I think dying is bad too, and that’s why I want to abolish AI. It’s an existential risk to humanity, to other sentient species on Earth, and to any life close enough to be reached via interstellar travel at any point in the future.
“No life extension” and “no AGI” aren’t inherently linked, but they are practically linked in a few important ways:
1. Human intelligence may not be enough to solve the hard problems of aging and cancer, meaning we may never develop meaningful life extension tech.
2. Humanity may not have enough time or cultural stability to commit to a megaproject like this (which will likely take centuries at purely human scales of research) before climate change, economic inequality, and other x- and s-risks greatly weaken our species.
3. Both AGI and life extension come from the same philosophical place: the rejection of our natural limits as biological animals (put there by Nature via billions of years of natural selection). I think this is extremely dangerous, because it encourages us to take on new x-risks in pursuit of a solution to our mortality (AGI being the most obvious, but large-scale gene editing and brain augmentation also carry a high extinction risk).
Basically, my argument is that any attempt to escape our mortality is likely to cause more death and suffering than it prevents. In light of this, we should accept our mortality and try to optimize society for 70-80 years of individual life and health. We should train future generations to continue human progress. In other words, we should stick to the model we’ve used for all of human history before 2020.