there are important downsides to the “cause-first” approach, such as a possible lock-in of main causes
I think this is a legitimate concern, and I’m glad you point to it. An alternative framing is lock-out of potentially very impactful causes. Dynamics of lock-out, as I see it, include:
EA selecting for people interested in the already-established causes.
Social status gradients within EA pushing people toward the highest-regarded causes, like AI safety.[1]
EAs working in an already-established cause having personal and career-related incentives to ensure that EA keeps their cause as a top priority.
A recent shortform by Caleb Parikh, discussing the specific case of digital sentience work, feels related. In Caleb’s words:
I think one of the aspects of EA that makes me more sad is that there seem to be a few extremely important issues on an impartial welfarist view that don’t seem to get much attention at all, despite having been identified at some point by some EAs.
Personal anecdote: Part of the reason, if I’m to be honest with myself, for my move from nuclear weapons risk research to AI strategy/governance is that it became increasingly difficult, socially, to be an EA working on nuclear risk. (In my sphere, at least.) Many of my conversations with other EAs, even in non-work situations and even with me trying to avoid this conversation area, turned into me having to defend my not focusing on AI risk, on pain of being seen as “not getting it”.
I think this is relatively underdiscussed and important. I previously wrote about the availability bias in EA job hunting, and I have anecdotally seen many examples of this, both in terms of social pressures and norms and in the sheer difficulty of forging your own path versus sticking to the “defaults”. It’s simply easier to go for EA opportunities where you have existing networks, and there are additionally several monetary, status, & social rewards for pursuing these careers.
I think it’s sometimes hard for people to decouple these factors when making career decisions (e.g. did you take the job because it’s your best option, or because it’s a stable job that people think is high status?).
Caveats before I begin:
I think it’s really good for people who need to (e.g. those from low-SES backgrounds) to take financial security into consideration when making important career decisions. But I think this community also has a lot of privileged people who could afford to be a little more risk-taking.
I don’t think it’s bad that these programs and resources exist; I’m excited that they exist. But we need to acknowledge how they affect the EA ecosystem. I expect the top pushback will be the standard one: if you have very short timelines, other considerations simply don’t matter once you do an E(V) calculation (a toy sketch of this argument follows the caveats below).
I think that people should take more ownership of exploring other paths and trying difficult things than they currently do, but I also think it’s important to consider the ecosystem impacts and how they can create lock-in effects on certain causes.
These projects exist for a reason: the longtermist space is less funding-constrained than the non-longtermist one, it’s a newer field, and so many of the opportunities available are field-building ones.
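As a concrete illustration of the E(V) pushback mentioned in the caveats, here is a toy sketch in Python. Every number in it is a hypothetical placeholder of my own, not anyone’s actual estimate; the point is only that a large enough stakes term mechanically swamps every other consideration.

```python
# Toy illustration of the short-timelines E(V) pushback.
# All numbers are hypothetical placeholders, not real estimates.

p_catastrophe_soon = 0.3        # assumed probability of an AI catastrophe on short timelines
value_at_stake = 1e15           # assumed value of the long-term future (arbitrary units)
marginal_risk_reduction = 1e-9  # assumed effect of one extra career on that probability

# Expected value of an AI safety career under these assumptions:
ev_ai_safety = p_catastrophe_soon * value_at_stake * marginal_risk_reduction  # 300,000

# Expected value of a career in some other cause (again, a made-up number):
ev_other_cause = 1e4  # 10,000

print(f"AI safety:   {ev_ai_safety:,.0f}")
print(f"Other cause: {ev_other_cause:,.0f}")
# With these inputs the AI safety career wins by 30x; push the stakes or the
# probability higher and every other consideration vanishes, which is the
# shape of the standard short-timelines argument.
```

The sketch only shows the argument’s structure, of course; whether the inputs are anywhere near right is the whole debate.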
Here are some concrete examples of the upskilling opportunities & incentives that have appeared in the x-risk and AIS space specifically over the last 12-24 months, with comparisons of some other options and how they stack up:
(Written quickly off the top of my head; I expect some specific examples may be wrong in their details or exact scope. If you can think of counter-examples, please let me know!)
Career advising resources:
80K has been the key career resource for over 10 years, and it has primarily invested resources in expanding its LT career profiles, resources & advice (without a robust alternative existing for several years).
80K made a call for others to take up the various aspects of career advising it is not covering, and posted about it in 2020, 2021, and 2022, but (as far as I can tell) with limited traction.
There are some other career advising options (Animal Advocacy Careers and Probably Good), but they are at early stages and still ramping up (even in 2023).
Career funding / upskilling opportunities:
There are the Century Fellowship & early-career funding for AI / bio, & Horizon for longtermist policy careers (there is nothing similar for any other cause AFAIK). These are 1-2 year, open-ended funding opportunities. (There is the Charity Entrepreneurship incubator, which mostly funds neartermist and meta orgs and accepts about 20 applicants per round; historically one round per year, with 2 rounds per year from 2023.)
When Future Fund was running (and LTFF has also done this), there were several opportunities for people interested in AI safety (possibly other LT causes too, though my guess is the bulk was AIS) to visit the Bay for the summer, do career transition grants, and so on (there was no equivalent for other causes).
Since 2021, multiple programs to skill up in AI and other X-risks have appeared (AGISF & the biosecurity program from BlueDot, SERI MATS, various other ERIx summer internships). (Somewhat similar programs with fewer resources are the alt proteins fellowship from BlueDot, a China-based & South East Asia-based farmed animal fellowship in 2022, and AAC’s programming.)
There are paid general LT intro programs like the Global Challenges Project retreats, the Atlas Fellowship, and Non-trivial. (Compare the Intro to VP program, community retreats organized by some local groups, & LEAF, which have less funding / monetary compensation.)
There are now several dedicated AIS centers at various universities (SERI @ Stanford, HAIST @ Harvard, CBAI @ Harvard / MIT) and a few X-risk-focused ones (ERA @ Cambridge (?), CHERI in Switzerland). As far as I know, there are no such centers for other causes (or even for non-AI x-risk causes). These centers are new, but they can provide better-quality advice, resources, and guidance for pursuing these career paths over others.
Networking: This seems roughly equal.
The SERI conference has run since 2021 (there is EA Global, and several EAGx’s per year, but no dedicated conferences for other causes).
Funding for new community projects:
The bulk (90%) of EA movement-building funding from OP comes from the longtermist team, and most university EA groups’ funding is from the longtermist team. I’d love to know more about how those groups and projects are evaluated and how much funding ends up going to more principles-first community building, as opposed to cause-specific work.
Most of OP’s neartermist granting has gone towards effective giving (because it has the highest ROI).
There are even incentives for infrastructure providers (e.g. Good Impressions, cFactual, EV, Rethink) to primarily support the longtermist ecosystem, as that’s where the funding is. (There are a few meta orgs supporting the animal space, such as AAC, Good Growth, and two CE-incubated orgs: Animal Ask & Mission Motor.)
Career exploration grants:
At various points when Future Fund was running, there were lots of small grants for folks to spend time in the Bay (link), do career exploration, etc. The LTFF has also given x-risk grants that are somewhat similar (as far as I know, the EAIF and others have not given more generic career exploration grants, or grants for other causes).