As a movement, we do a terrible job of communicating just how hard it can be to get a job in AI Safety. We also lead people to anchor on and rely too heavily on EA resources, which sets unrealistic expectations and isn’t fair to them.
Any suggestions?
So, some high-level suggestions based on my interactions with other people:
Be more explicit about this in 80,000 Hours calls, and talk about the funding bar (potentially with grantmakers, or by introducing people to successful candidates who do independent work). Organisations could also state this explicitly in their fellowship/intern/job applications, e.g. “Only 10 out of 300 applicants were selected last year”, so that people don’t over-rely on a handful of applications.
There is a very obvious point that community builders can only do so much, because their job is largely to point out resources and set initial things rolling. As a community builder, being vocal about this from an early point is important. This could look like, “Hey, I only know as much as you do now that you have read AGI SF and Superintelligence.” Community builders could also try connecting with slightly more senior people and doing intros on a selective basis (e.g., I know a few good community builders who go out of their way at an EAGx to score conversations with such people).
I think metrics for 80K and community builders need to be weighted more heavily (if they aren’t already) towards “X went on to do an internship and publish a paper” and away from “this person read Superintelligence and did a fun hackathon”. The latter also creates weird sub-incentives for community members to score brownie points with CBs and generate a lot of movement with little impactful progress.
Creating your own opportunities seems to go largely undiscussed in EA circles: there is a lot of talk about finding opportunities, and newcomers get overwhelmed with EA org webpages, which, coupled with the neglectedness framing, causes them to overestimate how many opportunities exist. Maybe there could be a guide for this, or some sort of group/support system?
For early-career folks, maybe there could be some sort of peer buddy system where people a little further down the road get matched to collaborate and talk. A lot of these conversations require safe spaces, built trust, and discussion of really sensitive issues (like finances, runway planning and critical feedback on applications). I have been lucky to build such a circle within EA, but I recognize that’s only because of certain opportunities I got early on, along with being comfortable reaching out to people, which not everyone is.
We need to identify more proactive people who already have a track record of social impact or of being driven by certain kinds of research, rather than just high-potential people; these are probably the only people who will actually convert to returns for the movement (very crudely speaking). This is even more true in non-EA hubs, where good connections aren’t just one local meetup away as they are in NYC or Oxford. I think there is a higher attrition rate of high-potential people in LMICs, at least partly due to this.
I agree, and I think part of the problem is giving a false impression of what kind of training or experience is most useful. You mention this with the over-reliance on EA resources, which I think is a major problem. It is especially an issue when someone applies for AI Safety-related jobs outside the EA sphere, because they are competing with people with a much wider range of experience.
I’ve always felt there should be a ‘useful non-EA courses/resources’ guide.
I think I’m in some ways confused about this. It’s true that the hiring situation is hard, but my priors say that this is likely to change fast[1] and that the downside risk for many people is probably low: especially on the technical side, time spent upskilling for AI Safety is probably not time completely wasted for the industry at large[2].
Are there any particular things you think we could do better? One could be simply being less quick to suggest AIS as a career path to people who might be at risk of financial hardship as a result. Career guides in general do seem very oriented towards people who can afford the risk of spending months unemployed while upskilling or job hunting.[3]
Both as a result of higher funding and of people funding a lot of orgs whenever there is both excess talent and a funding overhang.
Especially ML-wise. But this is probably less true (if at all) for people upskilling in AI Safety policy and governance, strategy and fieldbuilding, etc.
Something something rich western countries
I predict that scaling up organizations will be slow and painful rather than easy and fast. Organizations will have a hard time scaling fast while staying productive, though they may manage one or the other.
This seems mostly independent of funding.
(I don’t particularly disagree with the rest of the comment.)
I think this would be more the result of new orgs rather than bigger orgs? I would argue that we currently have nowhere near the optimal number of orgs dedicated to training programs, and as funding increases, we will probably get a lot more of them.
I’d also predict something similar about founding well-functioning orgs.
Doing things is hard.
At this point, we need an 80K page on “What to do after leaving OpenAI”.
Don’t start another AI safety lab
Did something happen?
Appreciate that, I woke up this morning after finally quitting (the new “ChatGPT 5 is your boss” pilot was too much!), about to register “AI4Gud.com” and get me in the race, but have reconsidered based on this excellent advice.
A lot of policy research seems to be written with an agenda in mind, to shape the narrative. This defeats the point of policy research, which is supposed to inform stakeholders, not actively convince or nudge them.
This might cause polarization on some topics and probably, in itself, drains legitimacy from the space.
I have seen concerning parallels in the non-profit space, where some third-sector actors endorse or do things they see as good but which destroy trust in the whole space.
This gives me scary unilateralist’s curse vibes.
I find the Biden chip export controls a step in the right direction, and they also made me update my world model towards compute governance being an impactful lever. However, I am concerned that our goals aren’t aligned with theirs; US policymakers’ incentive right now is to curb China’s tech growth (and to fight a trade war), not to pause AI.
This optimization for different incentives is probably going to create a split between US policymakers and AI safety folks as time goes on.
It also makes China more likely to treat this as a tech race, which sets up interesting competitive race dynamics between the US and China that I don’t see talked about enough.
After talking and working for some time with non-EA organisations in the AI policy space, I believe we need to give more credence to the here-and-now of AI safety policy as well, both to get the attention of policymakers and to get our foot in the door. That also gives us space to collaborate with think tanks and organisations outside the x-risk space that are proactive and committed to AI policy. Right now, a lot of those people see x-risks as fringe and radical (and these are people who are supposed to be on our side).
Governments tend to move slowly, with due process, and in small increments (think, “We are going to first maybe do some risk monitoring, and only then auditing”). Policymakers are visionaries only with horizons until the end of their terms (hmm, no surprise). Usually, broad strokes in policy require precedents of a similar size to be feasible within a policymaker’s agenda and the Overton window.
Every group that comes to a policy meeting thinks its agenda item is the most pressing, because, by definition, getting a meeting with policymakers usually means you are proactive and have done your homework.
I want to see more EAs respond to Public Voice Opportunities, for instance, something I rarely hear about on the EA Forum or via EA channels/material.
I agree. I suspect that responses to calls for evidence over the years played a big role in introducing and normalising x-risk research ideas in the UK context, before the big moves we’ve seen in the last year.
e.g. a few representative examples:
(2016) https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/robotics-and-artificial-intelligence/written/32690.pdf
(2017) https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69587.html
(2022) https://www.longtermresilience.org/post/future-of-compute-review-submission-of-evidence
And many more.
This is a really good point, and perhaps the number one mistake I see in this area. People also forget that policy changes have colossal impacts on very complex human systems: the bigger the change, the bigger the impact. A small step is a lot easier for end users to buy into than a large one.
I often advise people to think about the difference in cost and effort between “I have to re-wallpaper one wall” and “I need to tear my house down to the foundations and rebuild it”.
That said, I think a lot of it is because actual policy is super hard to break into and gain experience in. There needs to be more training etc. available to people, particularly early-career researchers.
It’s interesting how we also have scope neglect of key historical events:
Death Toll: Siege of Athens in 86 BC ~ People killed in Hiroshima <<< Battle of Stalingrad
I feel like being exposed early on to longer-form GovAI-type reports has made me set the bar high for writing my thoughts out in short form, which really sucks from an output standpoint.
I see way too many people confusing movement with progress in the policy space.
A lot of drafts can become bills while still leaving significant room for regulatory capture in the specifics, which get decided later on. Take risk levels, for instance: they are subjective, which leaves lots of legal leeway for companies to exploit.
People in EA end up optimizing for EA credentials so they can virtue-signal to grantmakers, but grantmakers would probably prefer people to scope out non-EA opportunities, because that lets us introduce people outside the community to the concerns we have.
For policy recommendations, put forth things that actually build on or move the status quo.
For example, recommending a “National Youth Council” without a mandate can be a uniquely bad idea: instead of ignoring your (usually inactive) youth org, policymakers will now ignore the Council of (usually inactive) youth orgs, all while you (the actually proactive person) walk away with the false notion of a job well done.
The problem with AI safety policy is that if we don’t specify and attempt to answer the technical concerns, someone else will, and they will safety-wash the concerns away.
CSOs need to understand what they themselves mean when they say “explainable” and “algorithmic transparency.”
It’s important to think about the policy space at a meta level: what incentives and factors might get in the way of having an impact, such as making AI safer?
One I heard today was that policy people thrive in moments of regulatory uncertainty, while that same uncertainty is bad for companies.
Communicating by keeping human rights at the centre of the AI policy discussion is extremely underappreciated.
For example, in 2021 the UN Human Rights chief called for a moratorium on the sale and use of artificial intelligence (AI) systems until adequate safeguards are put in place.
Respect for human rights is a well-established central norm; leverage it
The real danger isn’t just from AI getting better; it’s from AI getting good enough that humans start over-relying on it and offloading tasks to it.
Remember that Petrov had automatic detection systems, too; he just independently concluded the alert was a false alarm rather than passing it up the chain.
Everyone writing policy papers or doing technical work seems to be keeping generative AI at the back of their mind when framing their work or impact.
This narrow focus on gen AI could well be net-negative for us: we unknowingly or unintentionally ignore ripple effects of the gen AI boom in other fields (like robotics companies getting more funding, leading to more capabilities, which leads to new types of risks).
And guess who benefits if we do end up getting good evals/standards in place for gen AI? It seems to me companies and investors are the clear winners, because we have to go back to the drawing board and advocate for the same kind of standards for robotics or a different kind of AI use case, all while the development and capability cycles keep maturing.
We seem to be in whack-a-mole territory now because of the Overton window shifting for investors.
I don’t think we have a good answer to what happens after we audit an AI model and find something wrong.
Given that our current understanding of AI’s internal workings is at least a generation behind capabilities, it’s not as if we can isolate which mechanism is causing certain behaviours. (I would really appreciate any input here: I see little to no discussion of this in governance papers; it’s almost as if policy folks are oblivious to the technical hurdles that await working groups.)
At some point, one has to ask, “Am I in the cause area because I am an EA, or am I in EA because I am in the cause area?”
With open-source models being released and the on-ramps to downstream innovation lowering, the safety challenge may not be a single threshold but rather an ongoing, iterative cat-and-mouse game.
This just underscores the importance of people in the policy/safety field thinking far ahead.
It’s frankly quite concerning that technical specifications are usually only worked on by working groups after policymakers have set high-level qualitative goals; this seems to open a can of worms of differing interpretations and safety-washing.
I’ve updated away from this generally; there is a balance.
A good example of why I updated away is at 28:27 in the video at: