Just a quick note in favor of putting more specific information about compensation ranges in recruitment posts. Pay is by necessity an important factor for many people, and it feels like a matter of respect for applicants that they not spend time on the application process without having that information. I suspect having publicly available data points on compensation also helps ensure pay equity and levels some of the inherent knowledge imbalance between employers and job-seekers, reducing variance in the job search process. This all feels particularly true for EA, which is too young to have standardized roles and compensation across a lot of organizations.
I’m not sure if you are giving us accolades for putting this information in the job ads, or if you missed that specific salary information is already in the job ads. But we definitely believe in salary transparency for all the reasons you mentioned, and if there’s anything we can do to be more transparent, please let us know!
I just totally missed that the info was in the job ads—so thank you very much for providing that information, it’s really great to see. Sorry for missing it the first time around!
No problem—sorry that wasn’t clear! Feel free to apply if the salary range and other relevant job details make sense for your personal and professional priorities!
For people wondering who haven’t clicked through to the job ads on the website, below are the compensation ranges for the Researcher roles:
We do not require candidates to have formal academic credentials to be successful. We are hiring for three levels of experience:
Associate Researcher: ~1 year of research experience. The salary for this level is $65,000/yr to $70,000/yr, prorated for part-time work (see the proration sketch after this list).
Researcher: a relevant Master’s degree and/or ~2 years of research experience. Experience working on one of our priority topics in an industry setting would count. The salary for this level is between $70,000/yr and $77,000/yr, depending on years of experience and the nature of your qualifications, prorated for part-time work.
Senior Researcher: a relevant PhD and/or 5+ years of research experience. The salary for this level is between $77,000/yr and $85,000/yr.
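Purely to illustrate the proration arithmetic, here is a minimal sketch. The salary bands come from the list above, but the function and the 0.6 FTE example are hypothetical illustrations, not anything stated in the job ads.

```python
# Minimal sketch of salary proration. The bands are from the job ads above;
# the function name and the 0.6 FTE example are hypothetical illustrations.
SALARY_BANDS = {
    "Associate Researcher": (65_000, 70_000),
    "Researcher": (70_000, 77_000),
    "Senior Researcher": (77_000, 85_000),
}

def prorated_band(level: str, fte_fraction: float) -> tuple[int, int]:
    """Scale a level's full-time salary band by a part-time FTE fraction."""
    low, high = SALARY_BANDS[level]
    return round(low * fte_fraction), round(high * fte_fraction)

# A hypothetical 0.6 FTE Associate Researcher: $39,000 to $42,000 per year.
print(prorated_band("Associate Researcher", 0.6))
```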
I suspect having publicly available data points on compensation also helps ensure pay equity and levels some of the inherent knowledge imbalance between employers and job-seekers, reducing variance in the job search process. This all feels particularly true for EA, which is too young to have standardized roles and compensation across a lot of organizations.
Eh....
If I were writing a similar comment, instead of “reducing variance” I would consider writing something like “improving efficiency and transparency, so organizations and candidates can maximize impact”.
Maybe instead of “standardized roles and compensation across a lot of organizations” I would say something like “mature market arising from impactful organizations so that candidates have a useful expectation of wage” (e.g. the sense that a seasoned software developer knows what she could get paid in the Bay Area, and it’s not just some uniform prior between $50k and $10M).
The main perspective on why this is relevant is shown by this comment chain, where Gregory Lewis has the final comment, and his comment seems correct.
Uh.
The rest of this comment is low effort and a ramble, and isn’t something anyone is obliged to know, but I’ll keep writing because it’s just good to know about, or something. Here’s why I think someone would care about this:
Depending on the cruxes (whether you accept the relevant worldview/cause area/models of talent), I think the impact and salaries being talked about here, driven by tails (e.g. “$400k to $4M”), would make it unworkable to have “standardized” salaries or to “ensure pay equity” in the sense most people would mean. Like, salary caps wouldn’t work out; people would just create new entities or something, and it would just add a whole layer of chicanery.
Credibility of the EA movement seems important, so it’s good to be aware of things like “antitrust”, “fiduciary duty”, and, as Gregory Lewis puts it, “colourably illegal”. Knowing what these concepts involve would be useful if you are trying to build institutions, and speak to institutions, to edit AI policy and literally stop WW3.
But wait, there’s more!
While the above is probably true, here are some facts that make it even more awkward:
The count of distinct funders for AI and longtermist EA initiatives is approximately one and a half. So creating a bunch of entities (consultancies, think tanks) that have a pretty “normalized” price of labor is probably justified in some theoretical, practical and virtuous sense.
The salary issue is not academic. Right now, an EA org seems to be flat-out willing to pay 70% of US tech sector wages, lifted by the historic FAANG event (e.g. “we would be open to paying $315k for someone who would make $450k in industry”; indeed, $315k/$450k = 70%). This seems right and virtuous in this worldview. At the same time, there is a segment of EA with a strong frugal aesthetic that is also virtuous and, importantly, older with many staff. So, despite dismissing both central planning and “equity” above, a laissez-faire sort of approach is going to be unwieldy. What will happen is that the comp gradient will create structural issues.
Well, anyway, this is all a thing for these CEOs, executive directors, and grantmakers to work out.
It’s why they pay them the big bucks... except for the founders of Rethink Priorities and their officers, with mean salaries of about $33K according to their 2020 Form 990.
I think the takeaway is that there is a problem here that can be resolved completely by at least tripling the current salaries of RP officers and founders.
It’s worth noting that we have tripled pay since our 2020 Form 990 (covering 2019). CEO pay is currently $103,959/yr.
For onlookers, I will again note that our current pay range is $65k-85k for researchers, which is >2x $33k, though not quite 3x.
There are various reasons why our rates of pay have historically been (and currently are, and probably will remain) lower than Lightcone’s, including but not limited to: a) being fully remote instead of based in one of the most expensive cities on the planet; b) most of RP’s historical focus being on animal welfare, where there is significantly less of a funding overhang than in x-risk circles; and c) most of our employees (not myself) having counterfactuals in academia or other nonprofits rather than the tech or finance sector.
That said, I (personally) directionally agree with you that more pay for some of the earlier people is probably a good idea. At the risk of sounding obsequious, I do think there’s a strong case that Peter and Marcus and some of the other early employees ought to a) get some risk compensation for developing RP into the institution that it is today, or b) have higher salaries to help them make better time-value tradeoffs, or c) both.