When should someone who cares a lot about GCRs decide not to work at OP?
I agree that there are several advantages of working at Open Phil, but I also think there are some good answers to “why wouldn’t someone want to work at OP?”
Culture, worldview, and relationship with labs
Many people have an (IMO fairly accurate) impression that OpenPhil is conservative, biased toward inaction, prefers maintaining the status quo, and generally favors keeping positive relationships with labs.
As I’ve gotten more involved in AI policy, I’ve updated more strongly toward this position. While simple statements always involve a bit of gloss/imprecision, I think characterizations like “OpenPhil has taken a bet on the scaling labs”, “OpenPhil is concerned about disrupting relationships with labs”, and even “OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo” are fairly accurate.
The most extreme version of this critique is that perhaps OpenPhil has been net negative through its explicit funding for labs and implicit contributions to a culture that funnels money and talent toward labs and other organizations that entrench a lab-friendly status quo.
This might change as OpenPhil hires new people and plans to spend more money, but by default, I expect that OpenPhil will continue to play the “be nice to labs/don’t disrupt the status quo” role in the space (in contrast to organizations like MIRI, Conjecture, FLI, the Center for AI Policy, and perhaps CAIS).
Lots of people want to work there; replaceability
Given OP’s high status, lots of folks want to work there. Some people think the difference between the “best applicant” and the “2nd best applicant” is often pretty large, but this certainly doesn’t seem true in all cases.
I think if someone had, for example, an opportunity to work at OP vs. start their own organization or do something else that requires more agency/entrepreneurship, there might be a strong case for them to do the latter, since it’s much less likely to happen by default.
What does the world need?
I think this is somewhat related to the first point, but I’ll flesh it out in a different way.
Some people think that we need more “rowing”: OP’s impact is clearly good, and if we just add some more capacity to the grantmakers and make more grants that look pretty similar to previous grants, we’re pushing the world in a considerably better direction.
Some people think that the default trajectory is not going so well, and that this is (partially or largely) caused or maintained by the OP ecosystem. Under this worldview, one might think that adding some additional capacity to OP is not actually all that helpful in expectation.
Instead, people with this worldview believe that projects that aim to (for example) advocate for strong regulations, engage with the media, make the public more aware of AI risk, and do other forms of direct work more focused on folks outside of the core EA community might be more impactful.
Of course, part of this depends on how open OP will be to people “steering” from within. My expectation is that it would be pretty hard to steer OP from within: my impression is that lots of smart people have tried; folks like Ajeya and Luke have clearly been thinking about these things for a long time; the culture has already been shaped by many core EAs; and there’s a lot of inertia. So a random new junior person is pretty unlikely to substantially shift the organization’s worldview, though I could of course be wrong.
(I began working for OP on the AI governance team in June. I’m commenting in a personal capacity based on my own observations; other team members may disagree with me.)
OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo
FWIW I really don’t think OP is in the business of preserving the status quo. People who work on AI at OP have a range of opinions on just about every issue, but I don’t think any of us feel good about the status quo! People (including non-grantees) often ask us for our thoughts about a proposed action, and we’ll share if we think some action might be counterproductive, but many things we’d consider “productive” look very different from “preserving the status quo.” For example, I would consider the CAIS statement to be pretty disruptive to the status quo and productive, and people at Open Phil were excited about it and spent a bunch of time finding additional people to sign it before it was published.
Lots of people want to work there; replaceability
I agree that OP has an easier time recruiting than many other orgs, though perhaps a harder time than frontier labs. But at risk of self-flattery, I think the people we’ve hired would generally be hard to replace — these roles require a fairly rare combination of traits. People who have them can be huge value-adds relative to the counterfactual!
pretty hard to steer OP from within
I basically disagree with this. There are areas where senior staff have strong takes, but they’ll definitely engage with the views of junior staff, and they sometimes change their minds. Also, the AI world is changing fast, and as a result our strategy has been changing fast, and there are areas full of new terrain where a new hire could really shape our strategy. (This is one way in which grantmaker capacity is a serious bottleneck.)
Wow, lots of disagreement here. I’m curious what the disagreement is about, if anyone wants to explain?
I’m not officially part of the AMA, but I’m one of the disagreevotes, so I’ll chime in.
As someone who has only recently started, the impression this post gives (that it would be hard for me to disagree with established wisdom or to push the org to do things differently, and that my only role is to ‘just push out more money along the OP party line’) is just miles away from what I’ve experienced.
If anything, I think how much ownership I’ve needed to take for the projects I’m working on has been the biggest challenge of starting the role. It’s one that (I hope) I’m rising to, but it’s hard!
In terms of how open OP is to steering from within, it seems worth distinguishing ‘how likely is a random junior person to substantially shift the worldview of the org’ from ‘what would the experience of that person be like if they tried to’. Since before I even had an offer, Luke has repeatedly demonstrated, in how he reacts to my disagreement and acts on it, that he wants and values it, and it’s something I really appreciate about his management.
Ah, sorry you got that impression from my question! I mostly meant Harvard in terms of “desirability among applicants” as opposed to “established bureaucracy”. My outside impression is that a lot of people I really respect (like you!) made the decision to go work at OP instead of one of their many other options. And I’ve heard informal complaints from leaders of other EA orgs, roughly “it’s hard to find and keep good people, because our best candidates keep joining OP instead”. So I was curious to learn more about OP’s internal thinking about this effect.