Thanks for writing this up! Just a few rough thoughts:
Regarding the absorbency of AI Safety Researcher: I have heard people in the movement suggest that devoting 1/6th of the AI landscape (funding, people) to safety would be worth aspiring to. That would be a lot of roles to fill (most of which, to be fair, don’t exist yet), though I haven’t crunched the numbers. The main difference from working in policy would be that the required profile/background is much narrower. On the other hand, many of those roles may not fit what you mean by “researcher”, and realistically won’t be filled by EAs.
I’m also wondering whether you’re arguing against promoting the “hits-based approach” to careers to a general audience; I find that hard to disentangle here. There’s probably high absorbency for policy careers, but only a few of the people who succeed on that path will have extraordinarily high impact. I’m trying to point at some sort of 2x2 matrix of absorbency and risk aversion: we might eventually fall short of people willing to take risks on low-absorbency career paths, because we need many people to try and fail in order to get the impact we’d like.