I’m interested in figuring out: what skill profiles are most leveraged for altruistic work after we get the first competent AI agents?
I think this might be one of the most important questions for field-building orgs to work on (in any cause area but particularly AI safety). I think 80k and other AIS-motivated careers groups should try to think hard about this question and what it means for their strategy going forward.
I’m optimistic that work on this question would be productive because:
- few people have actually tried to think about what managing an AI workforce would look like
- the question doesn’t feel conceptually abstract (e.g. some people have already adopted LMs in their day-to-day workflows)
- if timelines are short, something like drop-in replacements for human remote workers is more likely than AIs that are less analogous to human workers (and if timelines are short, this work is more urgent than if they are long)
One important consideration is whether human managers will be good proxies for AI-agent managers. It’s plausible to me that the majority of object-level AI safety work that will ever be done will be done by AI agents. It could be that the current status quo persists, where people’s leverage routes through things like leadership and management skills; or that ~everyone becomes more leveraged in proportion to their access to AI models (roughly, their budget), because AI agents have different affordances than human workers; or something else.
For example, suppose that, for short tasks, AI agents are much more tolerant and hardworking than human workers, and better at understanding user intent. I’d expect having good strategic takes to become much more leveraged, and having good internal stakeholder management skills much less leveraged, relative to today. If 80k (for example) thought that was likely, then maybe they should double down on finding people with great strategy takes and care less about typical management skills.
Maybe “junior” roles will be automated before “senior” roles almost uniformly across the economy. In that case, it’s even more valuable than it is right now to focus on getting “senior” people.
Or maybe coding will be automated before everything else, while (for whatever reason) people’s relative leverage within companies stays pretty fixed, or it’s too hard to say which kinds of jobs will be most leveraged at software companies. Then “coding” still becomes much more leveraged per dollar than it is today, and it’s probably useful to find more value-aligned coders.
It would be surprising to me if the price of certain kinds of intellectual labour decreased by 100x, and this had little impact on people’s relative leverage.
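To make that concrete, here’s a minimal sketch of the kind of back-of-the-envelope model I have in mind (all numbers and the Cobb-Douglas functional form are assumptions for illustration, not claims about the real economy): if output depends on both “strategy” hours and “coding” hours, a 100x drop in the price of coding raises output per dollar by about 10x, and leaves the input that didn’t get cheaper as the binding constraint.

```python
# Toy model (all numbers made up): how a 100x price drop in one kind of
# intellectual labour shifts the leverage of a fixed budget.

def output(budget, price_strategy, price_coding, alpha=0.5):
    """Cobb-Douglas toy: output from buying strategy-hours and coding-hours.

    Spending the fraction `alpha` of the budget on strategy is the
    output-maximising split for these exponents.
    """
    strategy_hours = alpha * budget / price_strategy
    coding_hours = (1 - alpha) * budget / price_coding
    return strategy_hours ** alpha * coding_hours ** (1 - alpha)

before = output(budget=1_000_000, price_strategy=200, price_coding=100)
after = output(budget=1_000_000, price_strategy=200, price_coding=1)  # coding 100x cheaper

print(f"output before: {before:,.0f}")  # ~3,536
print(f"output after:  {after:,.0f}")   # ~35,355, i.e. 10x (sqrt of 100)
```

The exact numbers are meaningless; the point is just that a large price drop in one input mechanically changes where a fixed budget’s leverage comes from, so it would be odd if relative leverage stayed put.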
(disclaimer: I’m not sure whether the above stories actually go through; they’re just meant to illustrate the kind of thinking that seems undersupplied)