I have a complicated reaction.
1. I think @NickLaing is right to point out that there’s a missing mood here and to express disappointment that it isn’t being sufficiently acknowledged.
2. My assumption is that the direction change is motivated by factors like:
A view that AI is a particularly time-sensitive area right now, whereas areas like GHD often have a slower path to marginal impact (in part because the existing funding-constrained work there is already excellent and strong).
An assumption that there are, and will be, many more net positions to fill in AI safety over the next few years, especially to the extent one thinks funding will continue to shift in this direction. (Relatedly, one might think there will be relatively few positions to fill in certain other cause areas.)
I would suggest that these kinds of views and assumptions don’t imply that people who are already invested in other cause areas should shift focus. People who are already on a solid path to impact are not, as I understand it, 80K’s primary target audience.
3. I’m generally OK with 80K going in this direction if that is what its staff, leadership, and donors want. I’ve taken a harder-line stance on this sort of thing where I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups) -- in which case I think there’s an enhanced obligation to share the commons. There’s nothing inherently near-monopolistic about career advising (e.g., Probably Good and Animal Advocacy Careers already operate in analogous spaces). So, to the extent that there are advisors interested in giving advice in other cause areas, advisees interested in receiving it, and funders interested in supporting it, there’s no clear reason why alternative advisors couldn’t fill the gap 80K leaves here. I would still expect the new 80K to make at least passing reference to other EA career advice services for those who decide they want to work in another cause area. I’d also like to have seen more lead time, but I get that the situation in AI is rapidly evolving and that this is a reaction to external developments.
4. I think part of the solution is to stop thinking of 80K as (quoting Nick’s comment) “one of the top 3 or so EA orgs” in the sense one might have before this shift. It’s still an EA org in the same sense that (e.g.) Animal Advocacy Careers is an EA org, but after today’s announcement it shouldn’t be seen as a broad-tent EA org in the same vein as (e.g.) GWWC. We should therefore be careful not to read a shift in the broader community’s cause prioritization into 80K’s statements or direction. This may change how we interact with 80K and how much we defer to it in the future. For example, if someone wants to point a person toward broad-based career advice, Probably Good is probably the most appropriate choice.
5. I too am concerned about the EA funnel / onramp / tone-setting issues that others have written about, but don’t have much to add on those.
I love point 3: “to the extent that I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups) [...] I think there’s an enhanced obligation to share the commons”—that’s a good articulation of something I feel about Forum stewardship.