Yes, if such a far-fetched situation ever occurred (a productive EA suddenly confronted with the need for expensive medical treatment, but for some reason unable to raise enough cash from their personal friends, family, etc.), it could make sense for an EA organization to give them a grant covering some of the cost of the treatment, just as people sometimes receive grants to boost their productivity in other ways: hiring a research assistant, financing a move so they can work in-person rather than remotely, buying an external monitor to pair with their laptop, and so on.
But I don’t think this would realistically extend to ever-crazier levels, like spending millions of dollars on lifesaving medical treatment. First of all, there are few real-life medical procedures that cost so much, and fewer still that are actually effective at prolonging healthy life by decades. (A disproportionately large share of US healthcare spending is incurred in the final months of patients’ lives; people are understandably very motivated to spend a lot of money for even a small increase in their own lifespan while fighting diseases like cancer, but it wouldn’t make sense altruistically to spend exorbitant sums on this kind of late-stage care for random EAs.)
Furthermore, even if you believe that longtermist work is super-duper-effective, such that each year of a longtermist’s research/effort saves thousands or millions of lives in expectation... (Personally, I am a pretty committed longtermist, but I nevertheless doubt that longtermism is bajillions of times more effective than everything else; its big advantages on paper are eroded somewhat by issues like moral cluelessness, the lack of good feedback loops, the fact that I am not a total hedonic utilitarian who linearly values the creation of ever more future lives, etc.)
...even if you believe that longtermism is super-duper-effective, it still wouldn’t be worth paying exorbitant sums to fund the medical care of individual researchers, because it would often be cheaper to simply pay the salaries of newly hired replacement researchers of equivalent quality! So, in practice, in a maximally dramatic scenario where a two-million-dollar medical treatment could restore a dying EA to another 30 years of perfect health and productivity, a grantmaking organization could weigh it against the alternative of spending roughly the same money on funding 30 years of research by other people at $70,000/year.
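To make that trade-off explicit, here is a minimal back-of-envelope sketch; the $2M treatment cost, $70,000 salary, and 30-year horizon are just the illustrative figures from above, not real grant data:

```python
# Back-of-envelope comparison, using the illustrative figures from the text.
treatment_cost = 2_000_000   # one-time cost of the lifesaving treatment, USD
salary_per_year = 70_000     # annual cost of funding one replacement researcher, USD
years = 30                   # healthy, productive years either restored or funded

replacement_cost = salary_per_year * years  # cost of 30 years of substitute research

print(f"Treatment:   ${treatment_cost:,}")
print(f"Replacement: ${replacement_cost:,}")
# Treatment:   $2,000,000
# Replacement: $2,100,000
# -> Roughly a wash, *if* the replacement researcher is of equivalent quality;
#    the harder the person is to replace, the more the treatment is worth.
```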
Of course, if someone were especially effective and hard to replace (perhaps we elect an EA-sympathetic politician to the US Senate, and they are poised to introduce a bunch of awesome legislation on AI safety, pandemic preparedness, farmed animal welfare, systemic reform of various institutions, etc., but then they fall ill in dramatic fashion), the numbers would go up. But they wouldn’t go up infinitely; there will always be the question of opportunity cost and what other interventions could have been funded. (E.g., I doubt there’s any single human on Earth whom we could identify as having such a positive impact on the world that it would be worth diverting 100% of OpenPhil’s longtermist grantmaking for an entire year to save them from an untimely death.)