Under a total utilitarian view, it is probably second or third after existential risk mitigation.
[...]
I can count at least three occasions on which non-profits operating under the principles of Effective Altruism have acknowledged SENS and then dismissed it without good reason.
(Comment cross-posted from LessWrong)

I once read a comment on the effective altruism subreddit that tried to explain why aging didn’t get much attention in EA despite being so important, and I thought it was quite enlightening. Supporting anti-aging research requires being weird along some axes, but not others: you have to oppose something that most people consider normal, natural, and inevitable, while at the same time being short-termist and human-focused.
People who are weird along all axes will generally support existential risk mitigation or moral circle expansion, depending on their ethical perspective. If you’re short-termist but weird in other respects, you will generally work on helping factory-farmed animals or wild animals. And if you’re not weird along any axis, you will support global health interventions.
I want to note that I support anti-aging research, but I tend to take a different perspective than most EAs do. On a gut level, if something is going to kill me, my family, my friends, everyone I know, and everyone on Earth who doesn’t get killed by something else first, and will probably do so relatively soon and in a quite terrible way, then I think it’s worth investing in a way to defeat it. This gut-level reaction comes before any calm deliberation, but it still seems compelling to me.
My ethical perspective is not perfectly aligned with a long-termist utilitarian one, and as a moral anti-realist I think it’s OK to sometimes support moral causes that don’t necessarily have a long-term impact. By similar reasoning, I conclude that we should be kind to others and help our friends and those around us when possible, even when doing so is not as valuable from a long-termist perspective.
For background, here’s the comment:
Longevity research occupies an unstable position in the space of possible EA cause areas: it is very “hardcore” and “weird” on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the “common-sense” views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the “obvious corollary that curing aging is our number one priority”. As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.