Great post. I fully agree that this seems to be a worthwhile area for funding. Although it was written too soon to be included in the Open Phil prize, I wrote a post on a similar topic here: https://forum.effectivealtruism.org/posts/sRXQbZpCLDnBLXHAH/brain-preservation-to-prevent-involuntary-death-a-possible
I wonder if the EA community feels it has already spent too many “weirdness points” on other areas—mainly AGI x-risk alignment research—and doesn’t want to distribute them elsewhere. Evidence for this would be if other new cause areas that get criticized as “sci-fi,” or that people discount via the absurdity heuristic, were also selected against; evidence against it would be the opposite.
It’s also possible that the EA community doesn’t think it’s a very good idea for technical reasons, although in that case, you would at least expect to see arguments against it, or funding for research into whether it could work.