On the one hand, I am more concerned with learning AI safety than organizing my own cryoperservation or chemical brain preservation. This is mostly because I think it is more important to avoid the worst possible futures before planning to be alive in the future. On the other hand, I think life extension should receive much more funding (as long as you do not use AI with dangerous capabilities), I am against anti-natalism and I think that brain preservation should be available for those who wish. There is a certain tension between both statements. I do not know how to resolve that tension, but thank you for pointing out the problem with my worldview.
Frank_R
To answer your question I have to describe some scenarios how a non-aligned AI would act. This is slightly cringe since we do not know what an unaligned AI would do and this sounds very sci-fi like. In the case of something like a robot uprising or a nuclear war started by an AI many people would die under circumstances such that uploading is impossible, but brain banks could be still intact. If an unaligned AI really has the aim to upload and torture everyone, there will probably be better ways. [Insert something with nanobots here.]
In my personal, very subjective opinion there is a 10% chance of extinction by AI and a 1% chance for s-risks or something like Roko’s basilisk. You may have different subjective probabilities and even if we agree on the possibilities, it depends very much on your preferred ethical theory what to do.
The difference is that if you are biologically dead, there is nothing you can do to prevent a malevolant actor to upload your mind. If you are terminally ill and are pessimistic about the future, you can at least choose cremation.
I am not saying that there should be no funding for brain preservation, but personally I am not very enthusiastic since there is the danger that we will not solve the alignment problem.
This may sound pessimistic, but the value of brain preservation depends also on your views about the long term future. If you think that there is a non-negligible chance that the future is ruled by a non-aligned AI or that it will be easily possible to create suffering copies of you, then it would be better to erase the information that is necessary to reconstruct you after your biological death.
Unfortunately, I have not found time to listen to the whole podcast; so maybe I am writing stuff that you have already said. The reason why everyone assumes that utility can be measured by a real number is the von Neumann-Morgenstern utility theorem. If you have a relation of the kind “outcome x is worse than outcome y” that satisfies certain axioms, you can construct a utility function. One of the axioms is called continuity:
“If x is worse than y and y is worse than z, then there exists a probability p, such that a lottery where you receive x with a probability of p and z with a probability of (1-p), has the same preference as y.”
If x is a state of extreme suffering and you believe in suffering focused ethics, you might disagree with the above axiom and thus there may be no utility function. A loophole could be to replace the real numbers by another ordered field that contains infinite numbers. Then you could assign to x a utility of -Omega, where Omega is infinitely large.
Unfortunately, I do not have time for a long answer, but I can understand very well how you feel. Stuff that I find helpful is practising mindfulness and/or stoicism and taking breaks from internet. You said that you find it difficult to make future plans. In my experience, it can calm you down to focus on your career / family / retirement even if it is possible that AI timelines are short. If it turns out that fear of AI is the same as fear of grey goo in the 90s, making future plans is better anyway.
You may find this list of mental health suggestions helpful:
Be not afraid to seek help if you get serious mental health issues.
I have switched from academia to software development and I can confirm most that you have written from my own experience. Although I am not very involved in the AI alignment community, I think that there may be similar problems as in academia; mostly because the people interested in AI alignment are geographically scattered and there are too few senior researchers to advise all the new people entering the field.
In my opinion, it is not clear if space colonization increases or decreases x-risk. See “Dark skies” from Daniel Deudney or the article “Space colonization and suffering risks: Reassessing the ‘maxipok rule’” by Torres for a negative view. Therefore, it is hard to say if SpaceX or Bezos Blue Origin are net-positive or negative.
Moreover, Google founded the life extension company Calico and Bezos invested in Unity Biotechnology. Although life extension is not a classical cause area of EA, it would be strange if the moral value of indefinite life extension was only a small positive or negative number.
I want to add that sleep training is a hot-button issue among parents. There is some evidence that starting to sleep-train your baby too early can be traumatic. My advice is simply to gather evidence from different sources before making a choice.
Otherwise, I agree with Geoffrey Millers reply. Your working hours as a parent are usually shorter, but you learn how to set priorities and work more effectively.
Thank you for writing this post. I agree with many of your arguments and criticisms like yours deserve to get more attention. Nevertheless, I still call myself a longtermist; mainly for the following reasons:
There exist longtermist interventions that are good with respect to a broad range of ethical theories and views about the far future, e.g. searching the waste water for unknown pathogens.
Sometimes it is possible to gather further evidence for counter-intuitive claims. For example, you could experiment with existing large language models and search for signs of misaligned behaviour.
There may exist unknown longtermist interventions that satisfy all of our criteria. Therefore, a certain amount of speculative thinking is OK as long as you keep in mind that most speculative theories will die.
All in all, you should keep the balance between too conservative and too speculative thinking.
In my opinion, the philosophy that you have outlined should not be simply dismissed since it contains several important points. Many people in EA, including me, want to avoid the repugnant conclusion and do not think that wireheading is a valueable thing. Moreover, more holistic ethical theories may also lead to important insights. Sometimes an entity has emergent properties that are not shared by its parts.
I agree that it is hard to reconcile animal suffering with a Nietzschian world view. Whats even worse is that it may lead to opinions like “It does not matter if there is a global catastrophe as long as the elite survives”.
It could be possible to develop a more balanced philosophy with help of moral uncertainty or if you simple state that avoiding suffering and excellence are both important values. Finally, you could point out that it is not plausible that humankind is able to flourish although many humans suffer. After all, you cannot be healthy if most of your organs are sick.
I have thought about similar issues as in your article and I my conclusions are broadly the same. Unfortunately, I have not written anything down since thinking about longtermism is something I do beside my job and family. I have some quick remarks:
Your conclusions in Section 6 are in my opinion pretty robust, even if you use a more general mathematical framework.
It is very unclear if space colonization increases or decreases existential risk. The main reason is that it is probably technologically feasible to send advanced weapons across astronomical distances, while building trust across such distances is hard.
Solving the AI alignment problem helps, but you need an exceptionally well aligned AI to realize the “time of perils”-scenario. If an AI does not “kill everyone immediately”, it is not clear if it is able to stick to positive human values for several million years and can coordinate with AIs in space colonies which may have different values.
Since I have seen so many positive reactions to your article, I am wondering if it has some impact if I try to find time to write more about my thoughts.
In my opinion there is a probability of >10% that you are right, which means AGI will be developed soon and you have to solve some of the hard problems mentioned above. Do you have any reading suggestions for people who want to find out if they are able to make progress on these questions? On the MIRI website there is a lot of material. Something like “You should read this first.”, “This is intermediate important stuff.” and “This is cutting edge research.” would be nice.
Thank you for the link to the paper. I find Alexander Vilenkins theoretical work very interesting.
Let us assume that a typical large but finite volume contains happy simulations of you and suffering copies of you, maybe Boltzmann brains or simulations made by a malevolent agent. If the universe is infinite, you have infinitely many happy and infinitely suffering copies of you and it is hard how to interpret this result.
I see two problems with your proposal:
It is not clear if a simulation of you in a patch of spacetime that is not causally connected to our part of the universe is the same as you. If you care only about the total amount of happy experiences, this would not matter, but if you care about personal identity, it becomes a non-trivial problem.
You probably assume that the multiverse is infinite. If this is the case, you can simply assume that for every copy of you that lives for N years another copy of you that lives for N+1 years appears somewhere by chance. In that case there would be no need to perform any action.
I am not against your ideas, but I am afraid that there are many conceptual and physical problems that have to solved before. What is even worse is that there is no universally accepted method how to resolve this issues. So a lot of further research is necessary.
Thank you for your answers. With better brain preservation and a more detailed understanding of the mind it may be possible to resurrect recently deceased persons. I am more skeptical about the possibility to resurrect a peasant from the middle ages by simulating the universe backwards, but of course these are different issues.
Could you elaborate why we have to make choices before space colonisation if we want to survive beyond the end of the last stars? Until now, my opinion is that we can can “start solving heat death” a billion years in the future while we have to solve AI alignment in the next 50 − 1000 years.
Another thought of mine is that it is probably impossible to resurrect the dead by computing how the state of each neuron of a deceased person was at the time of her/his death. I think, you need to measure the state of each particle in the present with a very high precision and/or the computational requirements for a backward simulation are much too high. Unfortunately, I cannot provide a detailed mathematical argument. This would be an interesting research project; even if the only outcome is that a small group of people should change their cause area.
It should be mentioned that all (or at least most) ideas to survive the heat death of the universe involve speculative physics. Moreover, you have to deal with infinities. If everyone is suffering but there is one sentient being that experiences a happy moment every million years, does this mean that there is an infinite amount of suffering and an infite amount of happiness and the future is of neutral value? If any future with an infinite amount of suffering is bad, does this mean that it is good if sentient life does not exists forever? There is no obvious answer to these questions.
Just a detail: The name of the President of the European Commission is “von der Leyen” with “e” instead of “a”.
I am also curious if you have any donation recommendations if the decision on PEPFAR will be final. Obvious candidates would be Give Well or the Global Development Fund of Effektiv Spenden if you live in Germany. But maybe you have other suggestions.