Is there a hedonistic utilitarian case for Cryonics? (Discuss)
Cryonics is a popular topic among the rationalist community but not the utilitarian community. My impression is that most people who promote Cryonics are generally not utilitarians and most utilitarians do not promote Cryonics.
This seems to be one area where the rationalist and EA communities diverge significantly. My take is that those excited about Cryonics are typically in it for somewhat selfish (not in a bad way, just different from utilitarian) reasons, and that there haven’t been many attempts to justify it on utilitarian grounds because that was never the original intention.
I can imagine some interesting arguments for Cryonics as an effective intervention, but I haven’t heard many others make them, and I’m reluctant to steelman a cause for reasons its believers don’t care about.
I wanted to open this up for discussion. I would hope we can roughly come to a consensus on which of the following is true:
1) There is a strong case for cryonics being an effective monetary intervention, and the math has been done to support this.
2) Cryonics can be an effective career intervention for someone with a large amount of career capital in the field, but not for others.
3) There is very little case for cryonics as an effective utilitarian intervention, though it could make sense for other philosophical systems or people with moral uncertainty.
Other questions:
1) If there is no Hedonistic Utilitarian case for Cryonics, are there any strong Effective Altruist cases for it?
2) How much of the above applies to life extension research?
Some folks argue that cryonics is or may be justified on EA grounds. Among these people, some go ahead and pay for a cryonics subscription. However, I have yet to find a single person in that group who has paid for someone else’s subscription, rather than his or her own. If there were indeed an EA justification for cryonics, this would be an extraordinary coincidence. The hypothesis that these decisions were motivated by self-interest and later rationalized as justified on EA grounds seems much more plausible.
Really agree with this style of reasoning.
It’s worth pointing out that your case is weakened by the examples of Kim Suozzi and Aaron Drake, both of whom had their suspensions paid for by the community within the last few years.
It’s also worth pointing out that there has been at least one attempt to give away an Alcor membership to a random person (chosen by lottery). The person who won it ended up not going through with the sign-up process. This was discussed on Mike Darwin’s blog (I can’t easily find the link right now, but lmk if you’re curious).
Also, some in the cryonics/brain preservation community have donated to research and logistical investments that would certainly not benefit themselves only.
ETA: Another point here is that because of the tricky informed-consent issues and possible negative outcomes following brain preservation, it’s much more difficult to choose to have other people preserved than to choose to preserve oneself.
I intend to pay for someone else’s subscription!
Did you ever act on this intention?
I am also interested in the outcome of this.
I’ve been musing about a Suspension for Historically Significant Minds movement. I don’t particularly care whether I personally get suspended; I don’t think I’m important, we can only save so many of these living biographies, and others are more important. I think it’s a tragedy that the most interesting biographies are currently being burned.
I’m not sure it’s reasonable to expect a fund like this to be able to act very often, though! The figures who won’t pay for their own suspension usually aren’t going to be willing to accept suspension.
The people I’d want to nominate would tend to have a deep attachment to some community of the present; they would rarely think of the far future. Most of them, on receiving their invitation, would think about it for 20 minutes and then trash it, out of a sense of humility, and out of a sense that accepting such a thing would look from the outside like an abandonment of their community. I would want to say to them, “No, you were selected because you are the largest portion of that community that we’re able to save.” I’m not sure whether they’d hear it.
Maybe it would help to give them additional nominations to allocate to others, so it wouldn’t just be them. A lot of them wouldn’t want to deal with the political consequences of having to make a decision like that. It would just make things messier. The dirty work of triage.
It seems to me (as a utilitarian) that allowing people to effectively die and then be brought back to life later is approximately morally equivalent to allowing people to die and then creating entirely new people later. (Although maybe having people around who have already lived ~80 years has advantages.) I might cryonicize myself if I expect that I will add substantially more utility to the future than the average person, which is not implausible; but I wouldn’t take cryonics out of my charity budget.
Could you please flesh out your reasoning for this a little bit more?
It seems to me that there is a large difference between your two scenarios, with much larger utility going to extending existing people’s life rather than creating new ones.
This is because an extremely large cause of disutility for current people is the fact that they will inevitably die. This prevents them from making long-term investments in their own happiness, their local communities, and the world at large. Evidence for this abounds, and includes strong rationalizing behavior towards death. Atul Gawande also discusses it in his book Being Mortal.
Are you saying that cryonics could improve current people’s ability to make long-term investments in the world? I don’t see that that’s true.
That’s actually not what I am saying; rather, I am questioning your claim that human life is exchangeable. Because most people intensely dislike the prospect of death and it makes life a lot more difficult in many ways, it seems much better (to me) to have one person live for 100 years than to have two people live for 50, given the same healthspan.
Separately, I expect that any increased expected chance of an increased lifespan, including cryonics, would increase the average person’s propensity to make long-term investments in themselves and in their communities.
One advantage of life extension is that it might prompt people to think in a more long-term-focused way, which might be nice for solving coordination problems and x-risks.
Some people think that they are able to think more clearly about the future as a result of being signed up for cryonics, because they aren’t as scared of death and don’t need to rationalize that, e.g., the Singularity will happen in their lifetimes.
In cryonicists’ defense, I’ve never heard them say that they buy cryonics from their EA budget; it seems to be a personal spending thing.
I’m totally ok with people agreeing to spend money on it, but not from their EA budget, and acknowledging that.
Agreed, it definitely has some long-term advantages; I’m curious how we can estimate those.
I find the argument “I’m so afraid of dying and believe in cryonics so much that signing up for cryonics would end many of my worries and let me be far more productive” kind of humorous, though I imagine it could be true for a very small set of people.
Hey Ozzie, could you explain why you find it humorous? Full disclosure: I’m in the cryo camp and I’d like to learn how to explain my beliefs to others in future.
(Note: I found this old thread after Eliezer recently shared this Wait But Why post on his Facebook: Why Cryonics Makes Sense)
I don’t find this argument humorous, but I do see it as perhaps the most plausible argument defending cryonics from an EA perspective.
That said, I don’t think the argument succeeds for myself or (I would presume) a large majority of other people.
(The exceptions that may exist would tend to be people who are very high producers, such that even a very small percentage increase in their good-production would outweigh the cost of their signing up for cryonics. They are less likely to be people who are so exceptionally afraid of death, and so enamoured of the idea of possibly waking up in the distant future, that not signing up would be debilitating, e.g. by leaving them depressed and unable to concentrate on EA work, to a degree that outweighs the cost of signing up.)
So I don’t see cryonics as being very defensible from an EA perspective.
This has also been discussed in Effective Altruism and Cryonics on LessWrong.
Thanks! I’ve never looked into the Brain Preservation Foundation, but since RomeoStevens’ essay, which is linked to in the post you linked to above, mentions it as being potentially a better target of funding than SENS, I’ll have to look into it sometime.
This was my entry to the LessWrong essay contest on whether a utilitarian should sign up for cryonics:
Arguing for cryonics as EA seems like bottom-line reasoning to me.
I can imagine exceptions. For instance: 1) Mr E.A. is an effective altruist who is super productive, gets most of his enjoyment out of working, and rests by working even more. Expecting with high likelihood to become an emulation, Mr E.A. decided on cryopreservation to give himself a chance of joining an emulation coalition that would control large fractions of the em economy and use those resources for the EA cause on which society has settled after long and careful thought.
2) Rey Cortzvail is a prominent technologist who fears death more than anything else. He has received medals from many presidents and invented many of the great technologies of the last few decades. To continue working well, it is essential for him to believe he may get a chance to live a long life, so he signs up for cryonics to purchase peace of mind.
3) Neith Sorez wants to help fix the world’s mess, and he wants it badly. He also is not on board with the whole people-die thing, and, to be fair, much of his perception of how screwed up things are comes from the clarity with which he envisions the grim reaper and the awfulness brought by it. He’s convinced that AI matters and is pushing to help make AI less dangerous, and quite possibly to help most people get a chance to live long. He wears the necklace, though, as a reminder to himself and others, and as a signal of allegiance to the many others who can see the scope of the horror that lies behind us and the need to stop it from happening in the future.
4) Miss E.A. entered the EA community by going to cryonics meetings, noticing that these EA people seemed pretty freaking awesome, and figuring there was no reason not to join the team and learn all about EA. Within a year she is very convinced of some particular set of actions and is an active and central EA. Since all of this came through the cryonics community and her friends there, she decides not to drop out of cryonics, for historical, signalling, allegiance, and friendship reasons.
The scenarios above don’t seem to me to be bottom-line arguments.
But arguments of the form “the best way to help the far future is to self-preserve through cryonics” should definitely be dominated by “pay for cryonics for whoever you think is doing the best job and isn’t yet a cryonicist”.
At the risk of necro’ing an old thread, I think we may want to reassess in light of the latest Wait But Why article.
Clearly, at this point cryonics is not an effective way to save a life: costs of up to $250k per life are a far cry from the effectiveness of, for example, AMF. The moral implications are also not clear: is cryopreserving people morally equivalent to saving people from treatable diseases, for instance? Do we care about aggregate happiness, or suffering? How does cryonics compare to XRisk causes?
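To make that effectiveness gap concrete, here is a minimal back-of-envelope sketch. The $250k cost is the figure quoted above; the revival probability and the AMF-style cost-per-life figure are illustrative assumptions, not established numbers:

```python
# Back-of-envelope comparison of cryonics vs. a conventional life-saving
# benchmark. The probability and benchmark figures are illustrative
# assumptions, not established estimates.

cryonics_cost = 250_000          # upper-bound cost per suspension (from the comment above)
p_revival = 0.05                 # hypothetical probability of successful revival
benchmark_cost_per_life = 4_500  # hypothetical AMF-style cost per life saved

expected_cost_per_revival = cryonics_cost / p_revival
ratio = expected_cost_per_revival / benchmark_cost_per_life

print(f"Expected cost per revival: ${expected_cost_per_revival:,.0f}")
print(f"Roughly {ratio:,.0f}x the benchmark cost per life saved")
```

On these assumptions the expected cost per life is in the millions, and the conclusion is sensitive mainly to the revival probability, which is the one number nobody actually knows.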
These are all very difficult questions, and certainly some of them will have very personalised answers. Still, even for those who consider a successful resuscitation through cryonics to be of equal or greater value than a life saved by other means (perhaps even if that life then reaches the longevity escape point), it remains a matter of basic effectiveness.
I propose that the value of cryonics for EA may lie in something beyond its immediate effectiveness: its economy of scale, which relates to its neglectedness. Currently cryonics is a heavily neglected cause; the total number of people signed up worldwide doesn’t even reach the tens of thousands. This is unfortunate, because cryonics is an intervention that benefits greatly from improvements in scale, both by reducing costs and by improving techniques (thereby increasing the probability of success). This article by Alcor suggests that a world-scale cryonics network could make saving a life cheaper than even AMF. Of course, these numbers are likely fairly biased in favour of cryonics; nonetheless, we should take note of this possibility.
Instead of paying for (impactful) people to be frozen (a recurring cryonics proposal in EA), the real value of cryonics as a cause may be to work on its expansion and ‘normalisation’. This will be a challenge, and the weirdness associated with cryonics may negatively impact public perception of EA, especially since the most effective intervention may involve attempting to change this public perception. This is definitely something to keep in mind. Moreover, the project may not be worth its resources if anti-aging technologies reach the longevity escape point before cryonics becomes sufficiently ‘normal’, although it would still have uses in saving people from then-incurable diseases.
Estimation is rather complicated; the difficulties are reminiscent of XRisk calculations, although cryonics lacks the feature of the destruction of ‘a hypothetical overwhelming number of future sentients’; instead it covers only a moderate number of sentients, scaling inversely with advances in medical technology. Despite this, the effectiveness of ‘cryonics normalisation’ (for lack of a better term) as a cause may still be worth discussing, simply because of its tremendous potential.
EDIT: A relevant blog post by Robin Hanson discussing both scale and other charitable aspects of cryonics
Important link that hasn’t been mentioned yet: http://www.overcomingbias.com/2010/07/cryonics-as-charity.html
The argument is that social reasons are a contributor to people not signing up for or being interested in brain preservation/cryonics, and that doing so yourself helps decrease that.
Epistemic status: low confidence on both parts of this comment.
On life extension research:
See here and here, and be sure to read Owen’s comments after clicking on the latter link. It’s especially hard to do proper cost-effectiveness estimates on SENS, though, because Aubrey de Grey seems quite overconfident (credence-wise) most of the time. SENS is still the best organization I know of that works on anti-aging.
On cryonics:
I suspect that most of the expected value from cryonics comes from the outcomes in which cryonics becomes widely enough available that cryonics organizations are able to lower costs (especially storage costs) substantially. Popularity would also help on the legal side of things: being able to start cooling and perfusion just before legal death could be a huge boon, and earlier cooling is probably the easiest thing that could be done to increase the probability of successful cryonics outcomes in general.
A consideration for why cryonics should count as personal spending and not altruistic spending:
Suppose you have a gym membership and work out regularly. This will help you stay healthy for longer so you can have more productive years with which to help the world. So in a sense, the gym membership is good for the world. But you would not consider it as part of your charity budget.
You can extend this even further. Suppose you buy a nice suit which helps you look better in job interviews and you earn a higher-paying job, and then donate the extra money. This is good for the world too, but a new suit certainly doesn’t count as charitable spending.
Similarly, even if signing up for cryonics helps you improve the world, it’s not the same as donating to charity.
Some people do allow personal investments to count toward charitable obligations. For example, I think the GWWC Further Pledge allows exceptions for education and professional expenses.
Seems to me that the underlying reasons, beyond the externalities Buck mentioned, are if your personal ethics have some combination of the following four qualities:
1) Terminally valuing people not dying, independent of opportunity costs
2) Valuing a lifetime of 2n years more than 2 lifetimes of n years
3) Valuing oneself more than others at some multiplier (assuming reflective stability)
4) Deep, persistent, reflectively stable fear of death
I am like this. I’m pretty sure oge is like this. It seems like there may be some philosophical barriers here. I definitely think that hedonistic utilitarianism violates 1 and 2 outright, and strongly discourages 3 and 4. As a preference utilitarian, the strength of 1 and 4 outweighs the costs to me.
Maybe it would be a worthwhile decision in the long run to sponsor cryonics for a few high profile, high impact EA figures? The monetary cost would be small, but the long run preservation of intellectual capital and leadership could be extremely beneficial.
This could be bad for PR, by making us look like a creepy cult. But I don’t know if that dominates considerations.
One argument for this position came up while discussing possible permanent societal improvements:
Utilitarianism is pretty simple in theory, so there should be only three relevant questions here:
How much value does your labour contribute per year?
How much extra labour would you expect to do if you subscribe for cryonics, in net present value?
How much does it cost, in net present value?
If you take a broader rational approach, then the fact that it lets you live longer is always handy.
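The three questions above can be combined into a crude break-even sketch. Every figure below is a placeholder assumption chosen purely for illustration; the salary, productivity gain, and cost are not claims about real cryonics pricing or anyone’s actual output:

```python
# Crude break-even test for the three questions above.
# All inputs are placeholder assumptions for illustration only.

annual_value = 50_000       # hypothetical value of one's labour per year ($)
productivity_gain = 0.01    # hypothetical fractional boost from peace of mind
years_remaining = 30        # hypothetical remaining working years
discount_rate = 0.03        # annual discount rate used for net present value
cryonics_npv_cost = 80_000  # hypothetical lifetime membership cost, in NPV ($)

# Net present value of the extra labour the subscription is assumed to unlock
extra_npv = sum(
    annual_value * productivity_gain / (1 + discount_rate) ** t
    for t in range(1, years_remaining + 1)
)

print(f"NPV of extra output: ${extra_npv:,.0f}")
print("Passes break-even" if extra_npv > cryonics_npv_cost else "Fails break-even")
```

On these particular placeholder numbers the subscription fails the break-even test; a much higher annual value or productivity gain would be needed for it to pass, which fits the “very high producers” exception discussed earlier in the thread.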
I think there could be arguments that sponsoring cryonics research could be useful beyond its implications for you personally. For instance, if continuity of life is super important, it could be worth paying for lots of other people to be signed up, or its promotion might encourage people to think longer term.
I’m not arguing from implications to you personally, I’m arguing that you should keep yourself alive for reasons other than that.
Buying cryonics for others is pretty left-field but maybe it could be justified.
If we bought cryonics for others then we should start with the most high-profile, high-impact people who we can expect to accomplish the most over their lifetimes. In those cases it might make a lot of sense.
Cryonics is as useful for the world as letting any other refrigerator run without a useful function.
That is, it is waste. Still better than some other ways to spend money, but worse than spending it on entertainment. Because at least that’s entertaining.
Let’s maintain some standards of discourse where you have to give reasoning if you want to disagree! http://www.paulgraham.com/disagree.html http://lesswrong.com/lw/85h/better_disagreement/
That’s a complicated way of saying “I don’t think it works” 0_o