Here is a simple argument that this strategic shift is a bad one:
(1) There should be (at least) one EA org that gives career advice across cause areas.
(2) If there should be such an org, it should be (at least also) 80k.
(3) Thus, 80k should be an org that gives career advice across cause areas.
(Put differently, my reasoning is something like this: Should there be an org like the one 80k has been so far? Yes, definitely! But which one should it be? How about 80k!?)
I’m wondering which premise 80k disagrees with (and what you think about them!). They indicate in this post that they think it would be valuable to have orgs that cover other individual cause areas, such as biorisk. But I think there is a strong case for having an org that is not restricted to specific cause areas. After all, we don’t want to do the most good in cause area X but the most good, period.
At the same time, 80k seems like a great candidate for such a cause-neutral org. They have done great work so far (as far as I can tell), and they have built up valuable resources (experience, reputation, outputs, …) through this work that would help them do even better in the future.
But one could also reason:
(1) There should be (at least) one EA org focused on AI risk career advice; it is important that this org operate at a high level at the present time.
(2) If there should be such an org, it should be (or maybe can only be) 80K; it is more capable of meeting criterion (1) quickly than any other org that could try. It already has staff with significant experience in the area and the organizational competence to deliver career advising services with moderately high throughput.
(3) Thus, 80K should focus on AI risk career advice.
If one generally accepts both your original three points and these three, I think one is left with a tradeoff to make, focusing on questions like:
If both versions of statement (1) cannot be fulfilled in the next 1-3 years (i.e., until another org can sufficiently plug whichever hole 80K didn’t fill), which version is more important to fulfill during that time frame?
Given the capabilities and limitations of other orgs (both existing and potential future ones), would it be easier for another org to plug the AI-focused hole or the general hole?
Good reply! I thought of something similar as a possible objection to my premise (2) that 80k should fill the role of the cause-neutral org. Basically, there are opportunity costs to 80k filling this role because it could also fill the role of (e.g.) an AI-focused org. The question is how high these opportunity costs are, and you point out two important factors. What I take to be important, and plausibly decisive, is that 80k is especially well suited to fill the role of the cause-neutral org (more so than the role of the AI-focused org) due to its history and the brand it has built. Combined with a ‘global’ perspective on EA according to which there should be one such org, it seems plausible to me that it should be 80k.
After all, we don’t want to do the most good in cause area X but the most good, period.
Yes, and 80k think that AI safety is the cause area that leads to the most good. 80k never covered all cause areas: they didn’t cover the opera or beach cleanup or college scholarships or 99% of all possible cause areas. They have always focused on what they thought were the most important cause areas, and they continue to do so. Cause neutrality doesn’t mean ‘supporting all possible causes’ (which would be absurd); it means ‘being willing to support any cause area, if the evidence suggests it is the best’.
Yeah, framed like this, I like their decision better. In the important sense, you could say, they are still cause-neutral. It’s just that their cause-neutral evaluation has now come to a very specific result: all the most cost-effective career choices are in (or related to) AI. If this is indeed the whole motivation for 80k’s strategic shift, I would have liked this post to use this framing more directly: “we have updated our beliefs on the most impactful careers” rather than “we have made strategic shifts” as the headline. On my first reading, it wasn’t clear to me whether the latter is a consequence of only the former.