Hi Jamie,
For (1), I agree with 80k's approach in theory; it's just that cost-effectiveness is likely heavily driven by the cause-level impact adjustment, so you'll want to model that in a lot of detail.
For (2), I think just declaring up front which cause(s) you think are most impactful and what you're focusing on is pretty valuable? And I suppose when people do apply/email, it's worth making that sort of caveat as well. For our own GHD grantmaking, we do try to declare on our front page that our current focus is NCD policy, and if someone approaches us raising the issue of grants, we make clear what our current grant cycle is focused on.
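To make that point concrete, here's a minimal sketch (with made-up numbers and a hypothetical `cost_effectiveness` function, not anyone's actual model) of why a multiplicative cause-level adjustment tends to dominate the bottom line:

```python
# Illustrative sketch only: hypothetical numbers, not 80k's (or anyone's) actual model.
# The point: when the cause-level impact adjustment spans orders of magnitude,
# it dominates the bottom-line estimate, so it deserves the most careful modelling.

def cost_effectiveness(within_cause_value: float, cause_level_adjustment: float) -> float:
    """Impact per dollar = value of the work within its cause, scaled by how
    much the cause itself matters relative to some benchmark cause."""
    return within_cause_value * cause_level_adjustment

# Two candidate programmes with identical within-cause quality...
within_cause_value = 1.0

# ...but very different (hypothetical) cause-level adjustments.
adjustments = {"cause A": 0.3, "cause B": 30.0}

for cause, adj in adjustments.items():
    print(cause, cost_effectiveness(within_cause_value, adj))
# cause A 0.3
# cause B 30.0
# A 100x spread in the adjustment produces a 100x spread in the result,
# regardless of how carefully the within-cause term was estimated.
```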
Hope my two cents is somewhat useful!
Makes sense on (1). I agree that this kind of methodology is not very externally legible and depends heavily on cause prioritisation, sub-cause prioritisation, your view on the most impactful interventions, etc. I think it's worth tracking for internal decision-making even if external stakeholders might not agree with all the ratings and decisions. (The system I came up with for Animal Advocacy Careers' impact evaluation suffered similar issues within animal advocacy.)
For (2), I'm not sure why you don't think 80k does this. E.g. the page on "What are the most pressing world problems?" has the following opening paragraph:
Then the actual ranking is very clear: AI 1, pandemics 2, nuclear war 3, etc.
And the advising page says quite prominently "We're most helpful for people who… Are interested in the problems we think are most pressing, which you can read about in our problem profiles." The FAQ on "What are you looking for in the application?" mentions that one criterion is "Are interested in working on our pressing problems".
Of course it would be possible to make it more prominent, but it seems like they've put these things pretty clearly up front.
It seems pretty reasonable to me that 80k would want to talk to people who seem promising but don't share all the same cause prio views as them; supporting people to think through cause prio seems like a big way they can add value. So I wouldn't expect them to try to actively deter people who sign up and seem worth advising but, despite the clear labelling on the advising page, don't already share the same cause prio rankings as 80k. You also suggest "when people do apply/email, it's worth making that sort of caveat as well", and that seems in the active deterrence ballpark to me; to the effect of "hey, are you sure you want this call?"
On (2). If you go to 80k's front page (https://80000hours.org/), there is no mention that the organization's focus is AGI or that they believe it to be the most important cause. For the other high-level pages accessible from the navigation bar, things are similarly not obvious. For example, in "Start Here", you have to read 22 paragraphs down to understand 80k's explicit prioritization of x-risk over other causes. In the "Career Guide", it's about halfway down the page. On the 1-1 advising tab, you have to go down to the FAQs at the bottom of the page, and even then it only refers to "pressing problems" and links back to the research page. And on the research page itself, the issue is that it doesn't give a sense that the organization strongly recommends AI over the rest, or that x-risk gets the lion's share of organizational resources.
I'm not trying to be nitpicky, but trying to convey that a lot of less engaged EAs (or people who are just considering impactful careers) are coming in, reading the website, and maybe browsing the job board or thinking of applying for advising, without realizing just how convinced on AGI 80k is (and correspondingly, not realizing how strongly they will be sold on AGI in advisory calls). This may not just be less engaged EAs either, depending on how you define engaged: I started reading Singer two decades ago, have been a GWWC pledger since 2014, and whenever giving to GiveWell have actually taken the time to examine their CEAs and research reports. And yet until I actually moved into direct EA work via the CE incubation program, I didn't realize how AGI-focused 80k was.
People would never get the same mistaken impression when looking at Non-Linear or Lightcone or BERI or SFF. I think part of the problem is (a) putting up a lot of causes on the problems page, which gives the reader the impression of a big tent/broad focus, and (b) having normie aesthetics (compare: longtermist websites). While I do think it's correct and valuable to do both, the downside is that without more explicit clarification (e.g. what Non-Linear does, just bluntly saying on the front page in size-40 font: "We incubate AI x-risk nonprofits by connecting founders with ideas, funding, and mentorship"), the casual reader of the website doesn't understand that 80k basically works on AGI.
Yeah many of those things seem right to me.
I suspect the crux might be that I don't necessarily think it's a bad thing if "the casual reader of the website doesn't understand that 80k basically works on AGI". E.g. if 80k adds value to someone as they go through the career guide, even if they don't realise that "the organization strongly recommends AI over the rest, or that x-risk gets the lion's share of organizational resources", is there a problem?
I would be concerned if 80k was not adding value. E.g. I can imagine more salesy tactics that look like making a big song and dance about how much the reader needs their advice, without providing any actual guidance until they deliver the final pitch, where the reader is basically given the choice of signing up for 80k's view/service, or looking for some alternative provider/resource that can help them. But I don't think that's happening here.
I can also imagine being concerned if the service was not transparent until you were actually on the call, and then you received some sort of unsolicited cause prioritisation pitch. But again, I don't think that's what's happening; as discussed, it's pretty transparent on the advising page and cause prio page what they're doing.