Hey Joel, I’m wondering if you have recommendations on (1) or on the transparency/clarity element of (2)?
(Context being that I think 80k do a good job on these things, and I expect I’m doing a less good job on the equivalents in my own talent search org. Having a sense of what an ‘even better’ version might look like could help shift my internal/personal Overton window of possibilities.)
Hi Jamie,
For (1), I agree with 80k’s approach in theory; it’s just that cost-effectiveness is likely heavily driven by the cause-level impact adjustment, so you’ll want to model that in a lot of detail.
For (2), I think just declaring up front what you think the most impactful cause(s) are and what you’re focusing on is pretty valuable? And I suppose when people do apply/email, it’s worth making that sort of caveat as well. For our own GHD grantmaking, we do try to declare on our front page that our current focus is NCD policy, and if someone approaches us about grants, we make clear what our current grant cycle is focused on.
Hope my two cents is somewhat useful!
Makes sense on (1). I agree that this kind of methodology is not very externally legible and depends heavily on cause prioritisation, sub-cause prioritisation, your view on the most impactful interventions, etc. I think it’s worth tracking for internal decision-making even if external stakeholders might not agree with all the ratings and decisions. (The system I came up with for Animal Advocacy Careers’ impact evaluation suffered similar issues within animal advocacy.)
For (2), I’m not sure why you don’t think 80k do this. E.g. the page on “What are the most pressing world problems?” has the following opening paragraph:
Then the actual ranking is very clear: AI 1, pandemics 2, nuclear war 3, etc.
And the advising page says quite prominently “We’re most helpful for people who… Are interested in the problems we think are most pressing, which you can read about in our problem profiles.” The FAQ on “What are you looking for in the application?” mentions that one criterion is “Are interested in working on our pressing problems”.
Of course it would be possible to make it more prominent, but it seems like they’ve put these things pretty clearly up front.
It seems pretty reasonable to me that 80k would want to talk to people who seem promising but don’t share all the same cause prio views as them; supporting people to think through cause prio seems like a big way they can add value. So I wouldn’t expect them to try to actively deter people who sign up and seem worth advising but, despite the clear labelling on the advising page, don’t already share the same cause prio rankings as 80k. You also suggest “when people do apply/email, it’s worth making that sort of caveat as well”, and that seems in the active deterrence ballpark to me; to the effect of ‘hey are you sure you want this call?’
On (2). If you go to 80k’s front page (https://80000hours.org/), there is no mention that the organization’s focus is AGI or that they believe it to be the most important cause. For the other high-level pages accessible from the navigation bar, things are similarly not obvious. For example, in “Start Here”, you have to read 22 paragraphs down to understand 80k’s explicit prioritization of x-risk over other causes. In the “Career Guide”, it’s about halfway down the page. On the 1-1 advising tab, you have to go down to the FAQs at the bottom of the page, and even then it only refers to “pressing problems” and links back to the research page. And the research page itself doesn’t give a sense that the organization strongly recommends AI over the rest, or that x-risk gets the lion’s share of organizational resources.
I’m not trying to be nitpicky, but trying to convey that a lot of less engaged EAs (or people who are just considering impactful careers) are coming in, reading the website, and maybe browsing the job board or thinking of applying for advising, without realizing just how convinced on AGI 80k is (and correspondingly, not realizing how strongly they will be sold on AGI in advisory calls). And this may not just be less engaged EAs, depending on how you define engaged: I was reading Singer two decades ago; have been a GWWC pledger since 2014; and whenever giving to GiveWell, have actually taken the time to examine their CEAs and research reports. And yet until I actually moved into direct EA work via the CE incubation program, I didn’t realize how AGI-focused 80k was.
People would never get the same mistaken impression when looking at Non-Linear or Lightcone or BERI or SFF. I think part of the problem is (a) putting up a lot of causes on the problems page, which gives the reader the impression of a big tent/broad focus, and (b) having normie aesthetics (compare: longtermist websites). While I do think it’s correct and valuable to do both, the downside is that without more explicit clarification (e.g. what Non-Linear does, just bluntly saying on the front page in 40-point font: “We incubate AI x-risk nonprofits by connecting founders with ideas, funding, and mentorship”), the casual reader of the website doesn’t understand that 80k basically works on AGI.
Yeah many of those things seem right to me.
I suspect the crux might be that I don’t necessarily think it’s a bad thing if “the casual reader of the website doesn’t understand that 80k basically works on AGI”. E.g. if 80k adds value to someone as they go through the career guide, even if they don’t realise that “the organization strongly recommends AI over the rest, or that x-risk gets the lion’s share of organizational resources”, is there a problem?
I would be concerned if 80k was not adding value. E.g. I can imagine more salesy tactics that look like making a big song and dance about how much the reader needs their advice, without providing any actual guidance until they deliver the final pitch, where the reader is basically given the choice of signing up for 80k’s view/service, or looking for some alternative provider/resource that can help them. But I don’t think that that’s happening here.
I can also imagine being concerned if the service was not transparent until you were actually on the call, and then you received some sort of unsolicited cause prioritisation pitch. But again, I don’t think that’s what’s happening; as discussed, it’s pretty transparent on the advising page and cause prio page what they’re doing.