Additional reason that applies to me and probably other EA engineers: Earning to Give lets your impact be more liquid and therefore better directed.
E2G lets you donate money to whichever organization in whichever cause area you think is best. Signing on to work at CEA means you think (impact at CEA) + (donating 5-15k to Best Charity) is better than (donating 30-60k to Best Charity).
If you think CEA (or New Incentives, or Wave or whatever) is The Most Optimal Charity, easy decision. But it’s not clear why the math would work out if you think X-risk, animal charities, or basic science is the right cause area… or even if you’re into global health/poverty but think GiveWell is better at charity evaluation than you are.
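To make the comparison above concrete, here is a minimal break-even sketch. All numbers are hypothetical placeholders picked from the ranges mentioned, not claims about any real salary or charity:

```python
# Toy break-even arithmetic for the direct-work vs. earning-to-give choice.
# Numbers are illustrative only.

def breakeven_direct_impact(e2g_donation: float, direct_work_donation: float) -> float:
    """Donation-equivalent impact your direct work must produce (as valued
    by your own cause prioritization) for the job to beat earning to give."""
    return e2g_donation - direct_work_donation

# E2G path: donate 45k to Best Charity.
# Direct-work path: donate 10k plus your labor at the org.
gap = breakeven_direct_impact(45_000, 10_000)
print(gap)  # 35000: your work at the org must be worth at least this, by your values
```

The point the math makes: if the org's mission isn't your top cause, your direct work there has to clear that gap *as measured by your own values*, which is a high bar.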
This by the way is what certificates of impact are for, although it’s not a practical suggestion right now because it’s only been implemented at the toy level.
The idea is to create a system where your comparative advantage, in terms of knowledge and skills, is decoupled from your value system. Two people can each work for whichever org most needs their skills, even when a different org better matches their values, and agree to swap impact with each other. (Plus the much more complex versions of that setup that would occur in real life.)
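The two-person swap can be sketched as a toy comparative-advantage model. The names, orgs, and impact units below are invented for illustration and don't describe any real certificate-of-impact mechanism:

```python
# Toy model of an impact-certificate swap (all values invented).
# Alice values org Y but is more productive at org X; Bob is the mirror image.
produce = {
    ("alice", "X"): 10, ("alice", "Y"): 4,
    ("bob",   "X"): 4,  ("bob",   "Y"): 10,
}

# No trade: each works at the org matching their own values.
no_trade = {"alice": produce[("alice", "Y")], "bob": produce[("bob", "X")]}

# Trade: each works where they are most productive, then swaps certificates,
# so each person's impact is credited toward the org matching their values.
trade = {"alice": produce[("bob", "Y")], "bob": produce[("alice", "X")]}

print(no_trade)  # {'alice': 4, 'bob': 4}
print(trade)     # {'alice': 10, 'bob': 10}
```

Both parties end up with more values-aligned impact than if each had worked directly at the org they care about, which is the whole appeal of the scheme.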
Of course, see here: https://80000hours.org/career-guide/high-impact-jobs/
But then also see here: https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/