If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant
Confidence: Unlikely
Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. The reasoning goes: if civilization has astronomically large potential, then apparently small actions could have compounding flow-through effects, ultimately affecting massive numbers of people in the long-run future. And the best action might do far more expected good than the second-best.
I’m not convinced that causes differ astronomically in cost-effectiveness. But if they do, what does that imply about how altruists should choose their careers?
Suppose I believe cause A is the best, and it’s astronomically better than any other cause. But I have some special skills that make me extremely well-suited to work on cause B. If I work directly on cause B, I can do as much good as a $100 million per year donation to the cause. Or instead, maybe I could get a minimum-wage job and donate $100 per year to cause A. If A is more than a million times better than B, then I should take the minimum-wage job, because the $100 I donate will do more good.
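To make the arithmetic explicit, here is a minimal sketch using the made-up numbers from the example above (the $100 million, the $100, and the million-fold ratio are all hypothetical):

```python
# Illustrative comparison of the two options above. "Effectiveness" is good done per
# dollar-equivalent, normalized so that cause B = 1. All figures are hypothetical.

def impact(dollar_equivalent_contribution, relative_effectiveness):
    """Expected good done = size of contribution x cost-effectiveness of the cause."""
    return dollar_equivalent_contribution * relative_effectiveness

ratio_a_over_b = 10 ** 6                            # assume cause A is a million times more cost-effective
direct_work_on_b = impact(100_000_000, 1)           # my labor is worth $100M/year to cause B
small_donation_to_a = impact(100, ratio_a_over_b)   # a $100/year donation to cause A

print(direct_work_on_b)     # 100000000
print(small_donation_to_a)  # 100000000 -- equal at exactly a 10^6 ratio;
                            # any larger ratio favors the minimum-wage job plus donation
```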
This is an extreme example. Realistically, there are probably many career paths that can help the top cause. I expect I can find a job supporting cause A that fits my skill set. It might not be the best job, but it’s probably not astronomically worse, either. If so, I can do much more good by working that job than by donating $100 per year.
But I might not be able to find an appropriate job in the top cause area. As a concrete example, suppose AI safety matters astronomically more than global priorities research. If I’m a top-tier moral philosopher, I could probably make a lot of progress on prioritization research. But I could have a bigger impact by earning to give and donating to AI safety. Even if the stereotypes are true and my philosophy degree can’t get me a well-paying job, I can still do more good by making a meager donation to AI alignment research than by working directly on the cause where my skills are most relevant. Perhaps I can find a job supporting AI safety where I can use my expertise, but perhaps not.
(This is just an example. I don’t think global priorities research is astronomically worse than AI safety.)
This argument requires that causes differ astronomically in relative cost-effectiveness. If cause A is astronomically better than cause B in absolute terms, but cause B is still 50% as cost-effective in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive there.
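A quick sketch of that break-even point, again with made-up numbers (the 50% figure and the 2x productivity multiplier are just the illustrative values from the paragraph above):

```python
# Hypothetical threshold calculation: when does personal fit outweigh cause choice?
# Cost-effectiveness is per unit of my productive output, normalized so cause A = 1.

cause_a_effectiveness = 1.0
cause_b_effectiveness = 0.5        # cause B is 50% as cost-effective in relative terms

my_output_in_a = 1.0               # productive output per year in the cause A job
my_output_in_b = 2.0               # output per year in cause B, where my skills fit better

impact_in_a = my_output_in_a * cause_a_effectiveness   # 1.0
impact_in_b = my_output_in_b * cause_b_effectiveness   # 1.0 -- break-even at exactly 2x;
                                                        # anything above 2x favors the cause B job
print(impact_in_a, impact_in_b)    # 1.0 1.0
```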
I suspect that causes don’t differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.