Hi Arden,
Thanks for engaging.
(1) Impact measures: I'm very appreciative of the amount of thought that went into developing the DIPY measure. The main concern (from the outside) with respect to DIPY is that it is critically dependent on the impact-adjustment variable - it's probably the single biggest driver of uncertainty, since causes can vary by many orders of magnitude. Depending on whether you think the work is impactful (or if you're sceptical, e.g. because you're an AGI sceptic, or because you're convinced of the importance of preventing AGI risk but worried about counterproductivity from getting people into AI, etc.), the estimate will fluctuate very heavily, and could be zero or significantly negative (there is a rough illustrative sketch of this at the end of this comment). From the perspective of an external funder, it's hard to be convinced of robust cost-effectiveness (or, speaking for myself as a researcher, it's hard to validate).
(2) I think we would both agree that AGI (and to a lesser extent, GCR more broadly) is 80,000 Hours' primary focus.
I suppose the disagreement then is the extent to which neartermist work gets any focus at all. This is to some extent subjective, and also dependent on hard-to-observe decision-making and resource allocation done internally. With (a) the team not currently planning to focus on neartermist content for the website (the most visible thing), (b) the career advisory/1-1 work being very AGI-focused too (to my understanding), and (c) fundamentally, OP being 80,000 Hours' main funder, and all of OP's 80k grants being from the GCR capacity-building team over the past 2-3 years - I think from an outside perspective, a reasonable assumption is that AGI/GCR is >=75% of marginal resources committed. I exclude the job board from analysis here because I understand it absorbs comparatively little internal FTE right now.
The other issue we seem to disagree on is whether 80k has made its prioritization sufficiently obvious. I appreciate that this is somewhat subjective, but it might be worth erring on the side of being too obvious here. I think the relevant metric would be "Does an average EA who looks at the job board or signs up for career consulting understand that 80,000 Hours would prefer they prioritize AGI?", and I'm not sure that's the case right now.
(3) Bad career jobs: this was a concern aired, but we didn't have too much time to investigate it, so we just flag it as a potential risk for people to consider.
(4) Similarly, we deprioritized the issue of whether getting people into AI companies worsens AI risk. We leave it to potential donors as something they may have to weigh - considering the pros and cons (e.g. per Ben's article) and making their decisions accordingly.
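To make the sensitivity point in (1) concrete, here is a minimal, purely illustrative sketch. The function and all of its inputs (plan_changes, dipy_per_change, impact_adjustment, cost_usd) are made-up placeholders rather than 80k's actual model or our report's figures; the only point is that, with everything else held fixed, the cause-level impact adjustment alone can move the bottom line across orders of magnitude, to zero, or below zero.

```python
# Purely illustrative sketch - every number below is a made-up placeholder,
# not 80k's actual figures or the structure of their internal model.

def cost_effectiveness(plan_changes, dipy_per_change, impact_adjustment, cost_usd):
    """A DIPY-style value per dollar: counterfactual plan changes, each credited
    with some DIPY, scaled by a cause-level impact adjustment."""
    return plan_changes * dipy_per_change * impact_adjustment / cost_usd

# Hold everything fixed except the cause-level impact adjustment.
scenarios = {
    "AGI sceptic (no value in the placements)": 0.0,
    "worried placements are counterproductive": -1.0,
    "moderate credence in AGI risk work": 1.0,
    "strongly convinced of AGI risk work": 100.0,
}

for label, adjustment in scenarios.items():
    ce = cost_effectiveness(plan_changes=500, dipy_per_change=2.0,
                            impact_adjustment=adjustment, cost_usd=5_000_000)
    print(f"{label:45s} -> {ce:+.5f} impact-adjusted DIPY per $")
```

Since that one parameter is also the most subjective one, the headline cost-effectiveness figure inherits its uncertainty almost entirely.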
Hey Joel, I'm wondering if you have recommendations on (1) or on the transparency/clarity element of (2)?
(Context being that I think 80k do a good job on these things, and I expect I'm doing a less good job on the equivalents in my own talent search org. Having a sense of what an "even better" version might look like could help shift my sort of internal/personal Overton window of possibilities.)
Hi Jamie,
For (1), I agree with 80k's approach in theory - it's just that cost-effectiveness is likely heavily driven by the cause-level impact adjustment, so you'll want to model that in a lot of detail.
For (2), I think just declaring up front what you think the most impactful cause(s) are and what you're focusing on is pretty valuable? And I suppose when people do apply/email, it's worth making that sort of caveat as well. For our own GHD grantmaking, we do try to declare on our front page that our current focus is NCD policy, and if someone approaches us about grants, we make clear what our current grant cycle is focused on.
Hope my two cents is somewhat useful!
Makes sense on (1). I agree that this kind of methodology is not very externally legible and depends heavily on cause prioritisation, sub-cause prioritisation, your view on the most impactful interventions, etc. I think it's worth tracking for internal decision-making even if external stakeholders might not agree with all the ratings and decisions. (The system I came up with for Animal Advocacy Careers' impact evaluation suffered similar issues within animal advocacy.)
For (2), I'm not sure why you don't think 80k do this. E.g. the page on "What are the most pressing world problems?" has the following opening paragraph:
Then the actual ranking is very clear: AI 1, pandemics 2, nuclear war 3, etc.
And the advising page says quite prominently "We're most helpful for people who… Are interested in the problems we think are most pressing, which you can read about in our problem profiles." The FAQ on "What are you looking for in the application?" mentions that one criterion is "Are interested in working on our pressing problems".
Of course it would be possible to make it more prominent, but it seems like they've put these things pretty clearly up front.
It seems pretty reasonable to me that 80k would want to talk to people who seem promising but don't share all the same cause prio views as them; supporting people to think through cause prio seems like a big way they can add value. So I wouldn't expect them to try to actively deter people who sign up and seem worth advising but, despite the clear labelling on the advising page, don't already share the same cause prio rankings as 80k. You also suggest "when people do apply/email, it's worth making that sort of caveat as well", and that seems in the active deterrence ballpark to me; to the effect of "hey, are you sure you want this call?"
On (2). If you go to 80k's front page (https://80000hours.org/), there is no mention that the organization's focus is AGI or that they believe it to be the most important cause. For the other high-level pages accessible from the navigation bar, things are similarly not obvious. For example, in "Start Here", you have to read 22 paragraphs down before you reach 80k's explicit prioritization of x-risk over other causes. In the "Career Guide", it's about halfway down the page. On the 1-1 advising tab, you have to go down to the FAQs at the bottom of the page, and even then it only refers to "pressing problems" and links back to the research page. And on the research page itself, the issue is that it doesn't give a sense that the organization strongly recommends AI over the rest, or that x-risk gets the lion's share of organizational resources.
I'm not trying to be nitpicky, but trying to convey that a lot of less engaged EAs (or people who are just considering impactful careers) are coming in, reading the website, and maybe browsing the job board or thinking of applying for advising - without realizing just how convinced about AGI 80k is (and correspondingly, not realizing how strongly they will be sold on AGI in advisory calls). And this may not just be less engaged EAs, depending on how you define engaged: I've been reading Singer for two decades; I've been a GWWC pledger since 2014; and whenever giving to GiveWell I've actually taken the time to examine their CEAs and research reports. And yet until I actually moved into direct EA work via the CE incubation program, I didn't realize how AGI-focused 80k was.
People will never get the same mistaken impression when looking at Non-Linear or Lightcone or BERI or SFF. I think part of the problem is (a) putting up a lot of causes on the problems page, which gives the reader the impression of a big tent/broad focus, and (b) having normie aesthetics (compare: longtermist websites). While I do think it's correct and valuable to do both, the downside is that without more explicit clarification (e.g. what Non-Linear does: just bluntly saying on the front page, in size-40 font, "We incubate AI x-risk nonprofits by connecting founders with ideas, funding, and mentorship"), the casual reader of the website doesn't understand that 80k basically works on AGI.
Yeah many of those things seem right to me.
I suspect the crux might be that I don't necessarily think it's a bad thing if "the casual reader of the website doesn't understand that 80k basically works on AGI". E.g. if 80k adds value to someone as they go through the career guide, even if they don't realise that "the organization strongly recommends AI over the rest, or that x-risk gets the lion's share of organizational resources", is there a problem?
I would be concerned if 80k was not adding value. E.g. I can imagine more salesy tactics that look like making a big song and dance about how much the reader needs their advice, without providing any actual guidance until they deliver the final pitch, where the reader is basically given the choice of signing up for 80k's view/service, or looking for some alternative provider/resource that can help them. But I don't think that's happening here.
I can also imagine being concerned if the service was not transparent until you were actually on the call, and then you received some sort of unsolicited cause prioritisation pitch. But again, I don't think that's what's happening; as discussed, it's pretty transparent on the advising page and cause prio page what they're doing.