My concern is that financial security might become a real bottleneck to doing altruistic work. Even though the EA community is said to be more talent-constrained than funding-constrained, in practice it seems quite difficult to obtain EA-aligned jobs or research grants (e.g., from Open Philanthropy or related organizations). Many people may therefore need to work at non-EA companies for long periods.
However, I’m unsure how realistic it is to do impactful work in such settings. I’d like to work in the AI s-risks field in the future, but non-EA companies are profit-oriented, and although some AI companies have AI alignment-related positions, there may be very few jobs related to AI s-risks research (such as preventing AI conflict or digital suffering). My impression is that these s-risks topics are rarely commercially valuable, so opportunities might be very limited. Opportunities to work on wild animal suffering at non-EA companies also seem quite limited.
If that’s true, then perhaps a practical approach would be “earning to give for myself” — working in a high-earning but stable job (like medicine), saving a large portion of my income (e.g., $150,000–$200,000 per year), and later using that financial independence to self-fund altruistic research or projects during periods when external funding or EA jobs are unavailable. However, that would mean a long period of doing work unrelated to EA at all; it would still be much better if I could find EA career opportunities and contribute altruistically within non-EA organizations.
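To make the “earning to give for myself” idea concrete, here is a rough back-of-the-envelope sketch of the runway it might buy. The earning-phase length, investment return, and research-phase living costs below are all hypothetical placeholders I chose for illustration, not figures from anyone in this thread:

```python
# Rough sketch: how many years of self-funded research runway might
# N years of high savings buy? All inputs are hypothetical assumptions.

annual_savings = 175_000   # midpoint of the $150k-$200k range mentioned above
years_saving = 10          # hypothetical length of the earning phase
real_return = 0.03         # assumed real (inflation-adjusted) annual return
research_burn = 60_000     # hypothetical living costs per year of research

# Future value of an annuity: savings invested at the end of each year.
nest_egg = annual_savings * (((1 + real_return) ** years_saving - 1) / real_return)

# Runway, ignoring investment returns during the research phase (conservative).
runway_years = nest_egg / research_burn

print(f"Nest egg after {years_saving} years: ${nest_egg:,.0f}")
print(f"Self-funded research runway: ~{runway_years:.0f} years at ${research_burn:,}/yr")
```

Under these (made-up) assumptions, roughly a decade of high savings could fund several decades of modest-cost independent research, so the strategy is at least numerically plausible even if the career cost of the detour is real.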
So my main question is: How easy or difficult is it, in your experience, to find or create altruistic work within non-EA organizations?
Thank you very much for your time and patience in reading this long question. Your insight would be very valuable to me.
(Background on my reasons for asking this question, and why it matters to me, is in the comments section.)
It is possible to work at non-EA orgs while still contributing meaningfully to AI risk, but the impact usually comes from what you do outside your main job: independent research, collaborations, and earning to give to support aligned projects. Direct s-risk or alignment roles at profit-driven companies are rare, so many people build financial stability first and then self-fund research or transition later. It’s not ideal, but it’s a realistic path that still lets you stay engaged, keep learning, and contribute where opportunities exist.
Hi! My superficial understanding is that grantmakers in s-risks have a certain bar for what they’re open to funding, and that they generally have the capacity to fund a marginal independent researcher if their work is sufficiently promising. If, in the future, you seem like an individual with a track record that is good enough in funders’ views (which might come through doing independent research, applying to fellowships, doing non-s-risk research at AI labs, etc.), then receiving funding will be possible, as money does not seem to be the primary constraint (at least, that’s not what grantmakers in the field seem to think). But that is a high bar to pass.
If you actually manage to save $150,000 per year, Macroscopic can advise you on donations to reduce s-risks, which would be a considerable contribution to a cause you seem to care about a lot. (I have no ties to Macroscopic; the information is publicly available on their website.)
Thanks a lot for your answer. I know that most EA organizations and grantmakers say talent is the primary constraint. However, in practice it seems very difficult to get a job at an EA organization, and it also seems difficult (perhaps less than a 50% success rate) to get independent research funding from grantmakers. Of course, if you have great research talent it will be much easier to get funding, but I’ll probably just become a mediocre researcher, so I probably can’t rely on EA grantmakers to support me.
What do you think about my main question: Is it difficult to find or create altruistic work within non-EA organizations (especially in reducing AI s-risks)?