I’m curious about the process that led to your salary ranges, for all the teams broadly, but especially for the technical AI safety roles, where the case for having very expensive counterfactuals (in money and otherwise) is cleanest.
When I was trying to ballpark a salary range for a position that is in some ways comparable to a grantmaking position at Open Phil, most reference jobs I considered had an upper range that’s higher than OP’s, especially in the Bay Area[1]:
GiveWell Senior Researcher (~$209k)
Program managers for technical subfields at the Bill and Melinda Gates Foundation (~$100k to ~$300k)
Technical AI Safety researchers at various nonprofit orgs (~$100k to ~$400k)
Bay Area Rapid Transit Police ($123k to $203k)
Of course it makes sense that the upper end of for-profit pay is higher, for various practical and optical reasons. But I was a bit surprised by how high the pay ranges at some other (nonprofit) reference institutions were. I distinctly recall numbers for e.g. GiveWell being much lower in the recent past (even after adjusting for inflation). And in particular, current salary ranges at reference institutions were broadly higher than OP’s, despite Open Phil’s work being more neglected and of similar or higher importance.
So what process did you use to come up with your salary ranges? In particular, did the algorithm take into account reference ranges in 2023, or was it (perhaps accidentally) anchored on earlier numbers from years past?
COI disclaimer: I did apply to OP, so I guess there’s a small COI in the very conjunctive and unlikely world where this comment might affect my future salary.
[1] TBC, I also see lower bounds that are more similar to OP’s, or in some cases much lower. But it intuitively makes sense to me that OP’s hiring bar aims to be higher than that for e.g. junior roles at most other EA orgs, or at non-EA foundations with a much higher headcount and thus greater ability to onboard junior people.
Generally, we try to compensate people in such a way that compensation is neither the main reason to be at Open Phil nor the main reason to consider leaving. We rely on market data to set compensation for each role, aiming to compete with a candidate’s “reasonable alternatives” (e.g., other foundations, universities, or high-end nonprofits; not roles like finance or tech where compensation is the main driving factor in recruiting). Specifically, we default to using a salary survey of other large foundations (Croner) and currently target the 75th percentile, with modest upward adjustments on top of the base numbers for staff in SF and DC (where we think there are positive externalities for the org from staff being able to cowork in person, but higher cost of living).
I can’t speak to what they’re currently doing, but historically GiveWell has used the same salary survey; I’d guess that the Senior Researcher role is benchmarked to Program Officer, which is a more senior role than we’re currently posting for in this GCR round, which explains the higher compensation. I don’t know which BMGF benchmarks you’re looking at, but I’d guess you’re looking at more senior positions at the higher end, which typically require more experience and control larger budgets.
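For concreteness, here is a minimal sketch of how a benchmarking rule like the one described above could be computed. The survey figures and the hub-city adjustment rate are hypothetical placeholders for illustration, not Croner data or Open Phil’s actual numbers.

```python
import numpy as np

# Hypothetical survey data: base salaries (USD) for one benchmarked role
# across peer foundations. These numbers are illustrative, not Croner data.
survey_salaries = [135_000, 142_000, 150_000, 158_000, 165_000, 171_000, 180_000]

# Target the 75th percentile of the survey distribution.
base = np.percentile(survey_salaries, 75)

# Assumed "modest upward adjustment" for high-cost hub offices (SF/DC);
# the actual rate isn't stated in the comment above.
HUB_ADJUSTMENT = 0.05

def target_comp(in_hub_city: bool) -> float:
    """Benchmarked comp: 75th-percentile survey base, plus a hub adjustment."""
    return base * (1 + HUB_ADJUSTMENT) if in_hub_city else base

print(f"Remote/other: ${target_comp(False):,.0f}")
print(f"SF or DC:     ${target_comp(True):,.0f}")
```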
That said, your point about technical AI Safety researchers at various nonprofit orgs making more than our benchmarks is something we’ve been reflecting on internally, and we think those roles do represent a relevant “reasonable alternative” for the kinds of folks we’re aiming to hire. So we’re planning to create a new comp ladder for technical AI Safety roles, and in the meantime we’ve moderately increased the posted comp for the open TAIS associate and senior associate roles.