I have this impression of OpenPhil as being the Harvard of EA orgs – that is, it's the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire.
When should someone who cares a lot about GCRs decide not to work at OP?
I agree that there are several advantages of working at Open Phil, but I also think there are some good answers to "why wouldn't someone want to work at OP?"
Culture, worldview, and relationship with labs
Many people have an (IMO fairly accurate) impression that OpenPhil is conservative, biased toward inaction, generally prefers maintaining the status quo, and favors keeping positive relationships with labs.
As I've gotten more involved in AI policy, I've updated more strongly toward this position. While simple statements always involve a bit of gloss/imprecision, I think characterizations like "OpenPhil has taken a bet on the scaling labs", "OpenPhil is concerned about disrupting relationships with labs", and even "OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo" are fairly accurate.
The most extreme version of this critique is that perhaps OpenPhil has been net negative through its explicit funding for labs and implicit contributions to a culture that funnels money and talent toward labs and other organizations that entrench a lab-friendly status quo.
This might change as OpenPhil hires new people and plans to spend more money, but by default, I expect OpenPhil to continue playing the "be nice with labs / don't disrupt the status quo" role in the space (in contrast to organizations like MIRI, Conjecture, FLI, the Center for AI Policy, and perhaps CAIS).
Lots of people want to work there; replaceability
Given OP's high status, lots of folks want to work there. Some people think the difference between the "best applicant" and the "2nd best applicant" is often pretty large, but this certainly doesn't seem true in all cases.
I think if someone had, e.g., an opportunity to work at OP vs. start their own organization or do something else that requires more agency/entrepreneurship, there might be a strong case for them to do the latter, since it's much less likely to happen by default.
What does the world need?
I think this is somewhat related to the first point, but I'll flesh it out in a different way.
Some people think that we need more "rowing" – that is, OP's impact is clearly good, and if we just add more capacity to the grantmakers and make more grants that look pretty similar to previous grants, we're pushing the world in a considerably better direction.
Some people think that the default trajectory is not going well, and that this is (partially or largely) caused or maintained by the OP ecosystem. Under this worldview, one might think that adding additional capacity to OP is not actually all that helpful in expectation.
Instead, people with this worldview believe that projects that aim to (for example) advocate for strong regulations, engage with the media, make the public more aware of AI risk, and do other forms of direct work focused on folks outside the core EA community might be more impactful.
Of course, part of this depends on how open OP will be to people "steering" from within. My expectation is that it would be pretty hard to steer OP from within: lots of smart people have tried, folks like Ajeya and Luke have clearly been thinking about things for a long time, the culture has already been shaped by many core EAs, and there's a lot of inertia. So a random new junior person is pretty unlikely to substantially shift the org's worldview (though I could, of course, be wrong).
(I began working for OP on the AI governance team in June. I'm commenting in a personal capacity based on my own observations; other team members may disagree with me.)
"OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo"
FWIW, I really don't think OP is in the business of preserving the status quo. People who work on AI at OP have a range of opinions on just about every issue, but I don't think any of us feel good about the status quo! People (including non-grantees) often ask us for our thoughts about a proposed action, and we'll share if we think some action might be counterproductive, but many things we'd consider "productive" look very different from "preserving the status quo." For example, I would consider the CAIS statement to be pretty disruptive to the status quo and productive, and people at Open Phil were excited about it and spent a bunch of time finding additional people to sign it before it was published.
"Lots of people want to work there; replaceability"
I agree that OP has an easier time recruiting than many other orgs, though perhaps a harder time than frontier labs. But at the risk of self-flattery, I think the people we've hired would generally be hard to replace – these roles require a fairly rare combination of traits, and people who have them can be huge value-adds relative to the counterfactual!
"pretty hard to steer OP from within"
I basically disagree with this. There are areas where senior staff have strong takes, but they'll definitely engage with the views of junior staff, and they sometimes change their minds. Also, the AI world is changing fast, and as a result our strategy has been changing fast, and there are areas full of new terrain where a new hire could really shape our strategy. (This is one way in which grantmaker capacity is a serious bottleneck.)
Wow, lots of disagreement here – I'm curious what the disagreement is about, if anyone wants to explain?

I'm not officially part of the AMA, but I'm one of the disagreevotes, so I'll chime in.
As someone who's only recently started, the vibe this post gives – that it's hard for me to disagree with established wisdom and/or push the org to do things differently, meaning my only role is to "just push out more money along the OP party line" – is just miles away from what I've experienced.
If anything, I think how much ownership I've needed to take for the projects I'm working on has been the biggest challenge of starting the role. It's one that (I hope) I'm rising to, but it's hard!
In terms of how open OP is to steering from within, it seems worth distinguishing "how likely is a random junior person to substantially shift the worldview of the org?" from "what would the experience of that person be like if they tried?" Luke has, since before I had an offer, repeatedly demonstrated that he wants and values my disagreement in how he reacts to it and acts on it, and it's something I really appreciate about his management.
Ah, sorry you got that impression from my question! I mostly meant Harvard in terms of "desirability among applicants" rather than "established bureaucracy". My outside impression is that a lot of people I greatly respect (like you!) decided to work at OP instead of taking one of their many other options, and I've heard informal complaints from leaders of other EA orgs along the lines of "it's hard to find and keep good people, because our best candidates keep joining OP instead". So I was curious to learn more about OP's internal thinking about this effect.
This is a hard question to answer, because there are so many different jobs someone could take in the GCR space that might be really impactful. And while we have a good sense of what someone can achieve by working at OP, we can't easily compare that to all the other options someone might have. A comparison like "OP vs. grad school" or "OP vs. pursuing a government career" comes with dozens of different considerations that would play out differently for any specific person.
Ultimately, we hope people will consider the jobs we've posted (if they seem like a good fit), and also consider anything else that looks promising to them.