I think that given the possibility of brain emulation, the division between AIs and humans you are drawing here may not be so clear in the longer term. Does that play into your model at all, or do you expect that even human emulations with various cognitive upgrades will be totally unable to compete with pure AIs?
I asked GPT-3 your question 10 times. Answers:
- Hitler 7
- Judas Iscariot 1
- Napoleon Bonaparte 1
- Genghis Khan 1
I then tried to exclude Hitler by saying “Aside from Adolf Hitler” and asked this 10 times as well (some answers gave multiple people). Answers:
- Stalin 5
- Mao Zedong 3
- Pol Pot 2
- Christopher Columbus 1
- Bashar al-Assad 1
The answer to the bonus questions is basically always of the form: “The obvious counterfactual to this harm is that Stalin never came to power, or that he was removed from power before he could do any damage. The ideal counterfactual is that Stalin never existed. As for what an ambitious, altruistic, and talented person at the time could have done to mitigate this harm, it is difficult to say. More hypothetically, an EA-like community could have worked to remove Stalin from power, or to prevent him from ever coming to power in the first place.”
Not sure how helpful this is, but perhaps it is interesting to get a sense of what the “typical” answer might be.
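In case anyone wants to reproduce this, below is a minimal sketch of the sampling setup, written against the legacy (pre-1.0) openai Python package; the prompt wording, engine name, and sampling parameters are illustrative placeholders rather than the exact ones I used.

```python
# Minimal sketch: sample GPT-3 repeatedly on the same prompt and
# tally the answers. Uses the legacy (pre-1.0) "openai" package.
import collections

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# Hypothetical prompt wording, not necessarily the exact one used.
PROMPT = "Which single person caused the most harm in history?"

answers = collections.Counter()
for _ in range(10):
    response = openai.Completion.create(
        engine="davinci",  # a GPT-3 engine available at the time
        prompt=PROMPT,
        max_tokens=32,
        temperature=0.7,  # nonzero so repeated samples can differ
    )
    answers[response.choices[0].text.strip()] += 1

# Print each distinct answer with its frequency across the 10 samples.
for answer, count in answers.most_common():
    print(f"{count}x  {answer}")
```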
I think I basically agree that if someone can identify a way to reduce extinction risk by 0.01% for $100M-1B, then that would be a better use of marginal funds than the direct effects of brain preservation.
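To make that comparison concrete with assumed numbers: counting only the roughly 8 billion people alive today, a 0.01% absolute reduction in extinction risk corresponds to about 800,000 expected lives saved, so $100M-1B works out to roughly $125-1,250 per expected life, before even counting future generations.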
Great post. I fully agree that this seems to be a worthwhile area of funding. Although it was written too soon to be included in the Open Phil prize, I wrote a post on a similar topic here: https://forum.effectivealtruism.org/posts/sRXQbZpCLDnBLXHAH/brain-preservation-to-prevent-involuntary-death-a-possible
I wonder if the EA community feels it has already spent too many “weirdness points” on other areas (mainly AGI x-risk alignment research) and doesn’t want to distribute them elsewhere. Evidence for this would be that other new cause areas criticized as “sci-fi”, or discounted via the absurdity heuristic, are also selected against; evidence against it would be the opposite.
It’s also possible that the EA community doesn’t think it’s a very good idea for technical reasons, although in that case you would at least expect to see arguments against it, or funded research into whether it could work.
Hi Jeremy, as far as I can tell, nearly all of the QALYs depend on the idea that it’s better to extend someone’s life than to replace them with a new person, because by the time revival is possible, we will likely be able to create new people at will. (This assumes that society does not decide to stop creating new people before reaching Malthusian limits.)
Basically, we get rapidly into population ethics if you want to debate whether lives are fungible. As Ariel points out elsewhere in the comments—I was not aware of this connection, but it seems fruitful—“Deciding whether lives are fungible is a key part of the debate between ‘person-affecting’ and ‘total’ utilitarians, and as of-yet unsettled as I see it in the EA community.”
To me, the idea that humans are fungible, and that it doesn’t matter if someone dies because we can just create a new person, goes so strongly against my altruistic intuitions that the whole notion is difficult to think about. There is a reason that similar reasoning leads to the repugnant conclusion.
This is part of why I said “I think the field may be among the most cost-effective ways to convert money into long-term QALYs, given certain beliefs and values”; the idea that humans are not fungible is one of those values. I’m not sure how to calculate the QALYs without assuming that value. I don’t think it’s possible to quantify the “sadness”. Do you have any ideas?
Hi Peter, I agree with you that right now there are not any obvious high-value ways to donate money to this area, although as I just wrote in a comment elsewhere in this thread, I am hoping to do more research on this question in the future, and hopefully others can contribute to that effort as well.
I also agree with you that the history of cryonics suggests it’s hard to get people to sign up. But I do think that the cost of signing up is an obvious target for intervention. My understanding is that the general public’s price sensitivity has not been tested very thoroughly.
Thanks for your interest in this topic!
I agree with you that it is hard as an outsider to tell what the current scope of the situation is regarding the need for more funding. This post was more of a high-level overview of the problem to see whether people agreed with me that this was a reasonable cause area for effective altruism.
Since it seems that a good number of people do agree (please tell me if you don’t!), I am hoping to work on the practical side more in the future. For now, I don’t think I know enough to say publicly with any confidence whether any particular organization could benefit from more EA-level funding. If pressed, my guess is that the most important thing would be to get more researchers and people in general interested in the field.
I also agree with you about the chicken-and-egg problem of lack of interest and lack of quality of the service. One approach is to start locally, rather than trying to achieve high-quality preservation all over the world. This makes things much cheaper. An obvious problem with the local approach is that any local area may not have enough people interested to get the level of practical feedback needed, although this also can be addressed.
Thanks for the kind feedback!
The main counter-argument to the idea that there is limited space is that in the future, if humanity ever progresses to the point that revival is possible, then we will almost certainly not have the same space constraints we do now. For example, this may be because of whole brain emulation and/or because we have become a multi-planetary species. Many people, myself included, think that there is a high likelihood this will happen in the next century or sooner: https://www.cold-takes.com/most-important-century/
There is also an argument that we actually do not have limited space or resources on the planet now. For example, this was explained by Julian Simon: https://en.wikipedia.org/wiki/The_Ultimate_Resource. But that is a little bit more controversial and not necessary to posit for the sake of counter-argument, in my opinion.
A related question is: what is the point of (a) extending an existing person’s life when you could just (b) create a new person instead? I think (a) is much better than (b), because of what I described as “the psychological and relational harms caused by involuntary death” in the post. But others might disagree; it depends on whether they think that humans are replaceable or not.
There is also a discussion about this on r/slatestarcodex that you might be interested in: https://www.reddit.com/r/slatestarcodex/comments/tk2krv/brain_preservation_to_prevent_involuntary_death_a/i1o2s1d/
As @Ariel_ZJ wrote, it is already possible for brain activity to fully cease and then restart, and people don’t typically think that they were “destroyed” and “recreated” after that.
With some revival strategies, such as whole brain emulation, some people are concerned about a “copy problem”, because it would not be the same atoms/molecules instantiated, just the same patterns. Personally, I don’t think that the copy problem is an actual concern, for reasons explained here: https://www.brainpreservation.org/content-2/killed-bad-philosophy/
My expectation is that in the future, with anti-aging technology or whole brain emulation, aging will not significantly add to the marginal cost of providing another year of life.
Does this address your hesitation? I’m not sure if you’re referring to something else.
Thanks for your kind comments! Much appreciated.
I agree that brain preservation could potentially be cost-saving for healthcare systems if combined with medical aid in dying and people were interested in this rather than pursuing painful care that is likely futile. However, my guess is that healthcare systems in general are not very cost-efficient from an effective altruism perspective, so it’s hard to see how this would affect overall QALYs.
Can you please explain what you mean by “anti-aging fetish”?
I think the cost of brain preservation procedures, and their financial accessibility, are extremely important. As I mention in the post, some of the options that are already available today are relatively cheap, costing a few thousand dollars. This is cheaper than the average funeral in the United States. With more research, the procedures could potentially become cheaper still. In my view, brain preservation would ideally be free to the individual and paid for by philanthropy or health insurance, so that there are no financial accessibility problems.
Of course, whether any of these procedures will actually work is an open question.
I’m not sure if the cost of the procedure is what you are concerned about, though. Is there some other reason that you think that brain preservation would only be for the benefit of extremely rich people?
I’m surprised that you find that persuasive.
It suggests that humans are fungible: if some people die, it doesn’t matter, because more can simply be created. This goes strongly against my intuition.
I also think that human fungibility is flawed from a hedonistic quality-of-life perspective. Much, perhaps most, of human angst is due to involuntary death. There has been a lot of philosophical work on this; one famous book is Ernest Becker’s The Denial of Death: https://en.wikipedia.org/wiki/The_Denial_of_Death/.
Involuntary death is one of the great harms of life. Decreasing the probability of involuntary death, and its perceived inevitability, seems to have the potential to dramatically improve the quality of human lives.
It is also not clear that future civilizations will want to create as many people as they can. It is quite plausible that future civilizations will be reluctant to do this. For one, those people have not consented to being born, and the quality of their lives may still be unpredictable, whereas people who have opted for cryonics/biostasis are consenting to live longer lives.
That’s actually not what I am saying; rather, I am questioning your claim that human life is exchangeable. Because most people intensely dislike the prospect of death and it makes life a lot more difficult in many ways, it seems much better (to me) to have one person live for 100 years than to have two people live for 50, given the same healthspan.
Separately, I expect that any increased expected chance of an increased lifespan, including cryonics, would increase the average person’s propensity to make long-term investments in themselves and in their communities.
Important link that hasn’t been mentioned yet: http://www.overcomingbias.com/2010/07/cryonics-as-charity.html
The argument is that social reasons contribute to people not signing up for or being interested in brain preservation/cryonics, and that signing up yourself helps decrease that barrier.
Really agree with this style of reasoning.
It’s worth pointing out that your case is weakened by the cases of Kim Suozzi and Aaron Drake, both of whom had their suspensions paid for by the community within the last few years.
It’s also worth pointing out that there has been at least one attempt to give away an Alcor membership to a random person (chosen by lottery). The person who won it ended up not going through with the sign-up process. This was discussed on Mike Darwin’s blog (I can’t easily find the link right now, but lmk if you’re curious).
Also, some in the cryonics/brain preservation community have donated to research and logistical investments that would certainly not benefit only themselves.
ETA: Another point here is that because of the tricky informed-consent issues and possible negative outcomes following brain preservation, it’s much more difficult to choose preservation for other people than to choose it for oneself.
It seems to me (as a utilitarian) that allowing people to effectively die and then be brought back to life later is approximately morally equivalent to allowing people to die and then creating entirely new people later.
Could you please flesh out your reasoning for this a little bit more?
It seems to me that there is a large difference between your two scenarios, with much larger utility from extending existing people’s lives than from creating new ones.
This is because an extremely large cause of disutility for current people is the fact that they will inevitably die. This prevents them from making long-term investments in their own happiness, their local communities, and the world at large. Evidence for this abounds, and includes strong rationalizing behavior towards death. Atul Gawande also discusses it in his book Being Mortal.
I agree with you that pure software AGI is very likely to happen sooner than brain emulation.
I’m wondering about your scenario for the farther future, near the point when humans start to retire from all jobs. I think that at this point, many humans would be understandably afraid of the idea that AIs could take over. People are not stupid, and many are obsessed with security. At this point, brain emulation would be possible. It therefore seems to me that there would be large efforts to make those emulations competitive with pure software AI in important ways (not all ways, of course, but some important ones, involving things like judgment), possibly aided by regulation. Of course this is just a guess, but it seems likely to me that it would work to some extent. However, this may stretch the definition of what we currently consider a human in some ways.