On (2). If you go to 80k’s front page (https://80000hours.org/), there is no mention that the organization’s focus is AGI or that they believe it to be the most important cause. The other high-level pages accessible from the navigation bar are similarly non-obvious. For example, in “Start Here”, you have to read 22 paragraphs down to understand 80k’s explicit prioritization of x-risk over other causes. In the “Career Guide”, it’s about halfway down the page. On the 1-1 advising tab, you have to go down to the FAQs at the bottom of the page, and even then it only refers to “pressing problems” and links back to the research page. And the research page itself doesn’t give a sense that the organization strongly recommends AI over the rest, or that x-risk gets the lion’s share of organizational resources.
I’m not trying to be nitpicky; I’m trying to convey that a lot of less engaged EAs (or people who are just considering impactful careers) are coming in, reading the website, and maybe browsing the job board or thinking of applying for advising, without realizing just how convinced 80k is on AGI (and correspondingly, how strongly they will be sold on AGI in advising calls). And this may not just be less engaged EAs, depending on how you define engagement: I’ve been reading Singer for two decades; I’ve been a GWWC pledger since 2014; and whenever I’ve given to GiveWell, I’ve actually taken the time to examine their CEAs and research reports. Yet until I moved into direct EA work via the CE incubation program, I didn’t realize how AGI-focused 80k was.
People would never get the same mistaken impression when looking at Non-Linear or Lightcone or BERI or SFF. I think part of the problem is (a) listing a lot of causes on the problems page, which gives the reader the impression of a big tent/broad focus, and (b) having normie aesthetics (compare: longtermist websites). While I do think it’s correct and valuable to do both, the downside is that, without more explicit clarification (e.g. Non-Linear just bluntly says on its front page, in 40-point font: “We incubate AI x-risk nonprofits by connecting founders with ideas, funding, and mentorship”), the casual reader of the website doesn’t understand that 80k basically works on AGI.
Yeah, many of those things seem right to me.
I suspect the crux might be that I don’t necessarily think it’s a bad thing if “the casual reader of the website doesn’t understand that 80k basically works on AGI”. E.g. if 80k adds value to someone as they go through the career guide, even if they don’t realise that “the organization strongly recommends AI over the rest, or that x-risk gets the lion’s share of organizational resources”, is there a problem?
I would be concerned if 80k were not adding value. E.g. I can imagine more salesy tactics: making a big song and dance about how much the reader needs their advice, without providing any actual guidance until the final pitch, where the reader is basically given the choice of signing up for 80k’s view/service or looking for some alternative provider/resource that can help them. But I don’t think that’s happening here.
I can also imagine being concerned if the service were not transparent until you were actually on the call, and you then received some sort of unsolicited cause prioritisation pitch. But again, I don’t think that’s what’s happening; as discussed, the advising page and cause prio page are pretty transparent about what they’re doing.