Do you have approximate statistics on the percentage distribution of paths you most commonly recommend during your 1-1 calls? In particular AI Safety related vs anything else, and in AI Safety working at top labs vs policy vs theoretical research. For example: “we recommend 1% of people in our calls to consider work in something climate-related, 50% consider work in AI Safety at OpenAI/other top labs, 50% to consider work in AI-policy, 20% to consider work in biosecurity, 30% in EA meta, 5% in earning to give, …”
I ask because I heard the meme that “80,000hours calls are not worth the time, they just tell everyone to go into AI safety”. I think it’s not true, but I would like to have some data to refute it.
This is pretty hard to answer because we often talk through multiple cause areas with advisees. We aren’t trying to tell people exactly what to do; we try to talk through ideas with people so they have more clarity on what they want to do. Most people simply haven’t asked themselves, “How do I define positive impact, and how can I have that kind of impact?” We try to help people think through this question based on their personal moral intuitions. Our general approach is to discuss our top cause areas and/or cause areas where we think advisees could have some comparative advantage, but to ultimately defer to the advisee on their preferences; we’re big believers in people doing what they’re actually motivated to do. We don’t think it’s sustainable in the long term to work on something you’re not that interested in.
I also don’t think we track what % of people *we* think should go into AI safety. We don’t think everybody should be working on our top problems (again see “do you think everyone should work on your top list of world problems” https://80000hours.org/problem-profiles/#problems-faq). But AI risk is the world problem we rank as most pressing, and we’re very excited about helping people work productively in this area. If somebody isn’t excited by it or doesn’t seem like a good fit, we will discuss what they’re interested in instead. Some members of our team are people who considered AI safety as a career path but realised it’s not for them — so we’re very sympathetic to this! For example, I applied for a job at an AI Safety lab and was rejected.
Re: calls not being worth people’s time, on a 7-point scale (1 = “useless”, 4 = “somewhat useful”, 7 = “really useful”) most of my advisees consider their calls useful; 97% said their call was at least somewhat useful (i.e. at least a 4/7), and 75% rated it a 6/7 or 7/7. So it seems like a reasonable way to spend a couple of hours (between prep, call, and reflection) of your life ;)