Potential employees have a unique lever to influence the behaviors of AI labs

(Cross-posted from my personal blog)

People who have received and are considering an offer from an AI lab are in a uniquely good spot to influence the actions of that lab.

People who care about AI safety and alignment often have things they wish labs would do. These could be requests about prioritizing alignment and safety (e.g. having a sufficiently staffed alignment team, or a public and credible safety and alignment plan), good governance (e.g. having a mission, board structure, and entity structure that allow safety and alignment to be prioritized), information security, or similar. This post by Holden goes through some lab asks, but take it as illustrative, not exhaustive!

So you probably have, or could generate, some practices or structures you wish labs would adopt in the realm of safety and alignment. Once you have received an offer to work for a lab, that lab suddenly cares far more about what you think than it does when you are just someone writing forum posts or tweeting at them.

This post will go through some ways to potentially influence the lab in a positive direction after you have received your offer.

Does this work? This is anecdata, but I have seen offer holders win concessions, and I have heard recruiters talk about how these sorts of behaviors influence the lab’s strategy.

We also have reason to expect this works: hiring good ML and AI researchers is competitive, and businesses have changed aspects of themselves in the past partly to help with recruitment. Some efforts toward gender or ethnic diversity or environmental sustainability are undertaken so that hiring from groups who care about these things doesn’t become too difficult. One example: Google changed its sexual harassment rules and did not renew its contract with the Pentagon in response to mass employee pushback. Of course, some of this they may have intrinsically cared about, or done to appease customers or the public at large, but employees seem to have a more direct lever and have successfully used it.

The Strategy

There are steps you can take at different stages of your hiring process. The best time to do this is when you have received an offer, because then you know they are interested in you and so will care about your opinion.

Follow-up call(s) or email just after receiving an offer

In the follow-up call after your offer, you can express any concerns before you join. This is a good time to make requests. I recommend being polite, grateful for the offer, and framing these as: “Well, look, I’m excited about the role, but I just have some uncertainties, or aspects that, if they were addressed, would make this a really easy decision for me.”

Some example asks:

  • I want the safety/​alignment team to be larger

  • I want to see more public comms about alignment strategy

  • I would like to see coordination with other labs on safety standards and slower scaling, as well as independent auditing of safety and security efforts

  • I want an empowered, independent board

Theory of change:

They might actually grant requests! I have seen this happen. If they don’t, they will still hear that information, and if enough people say it, they may grant it in the future. This also sets you up for the next alternative, which is…

When you turn down an offer

If you end up turning down the offer, whether to work at another AI lab or some other entity, you should tell them why. If you turned them down partly because of concerns about their strategy, or because they didn’t fulfill one of your asks, tell them!

The most direct way to do this is to email your recruiter, e.g. with something like:

“Thanks for this offer. I decided to turn it down because I felt that [insert thing: the alignment team did not receive adequate resources / there wasn’t a whistleblower protection policy / I didn’t see thoughtful public commitments around safety, etc.]. Best of luck with your search! If these things change, I would consider applying in the future.”

Keep it polite; they will receive the feedback better that way. If you took a job at a more safety-conscious org, tell them that.

Theory of change:

They wanted you! They failed to get you! They will have to explain to their manager why they failed to get you. If this happens enough times, concerns like yours will be noted as a reason that recruiting is going less well than it could, and that info will get passed up the chain. From speaking to recruiters, I can tell you they think about these things and prioritize them in recruiting strategies.

When you accept an offer

If you are joining an org that has some safety or alignment practice or institution you appreciate, signal-boost it! This could look like a social media post (tweet, LinkedIn, lol) where you say:

“I am joining X! I am proud to work at a place with such a large and empowered alignment team. It shows X takes AI safety seriously.” or “… I am proud to work at a place with sound governance structures like [a windfall clause / public benefit structure, etc.].”

Theory of change:

This shows other hiring managers why you chose that role, and shows what recruits are prioritizing. It signal-boosts good traits these labs could have.

What if I want to work at a lab that actually sucks across these dimensions BECAUSE I believe it is neglected and I’ll have more impact there?

That desire isn’t necessarily in tension with this plan! You should still do the whole “make requests” thing after you get your offer, even if they are less likely to grant them. And when you join, if the lab has any good aspects, you should still call those out. It’s like if you are generally kind of messy, but your girlfriend notices you did the dishes one day and says, “Wow, thanks so much! The kitchen looks so clean.” You might go, “Damn, that felt nice, maybe I’ll do the dishes more often,” in contrast to how you would have felt if she’d pointed out your general messiness or called you a slob. This isn’t guaranteed to make a huge difference, but it can push things on the margin, which is, you know, what we’re all about.

Some caveats:

  • Obviously the more they want to hire you, the better this lever is.

  • This strategy works best when lots of recruits practice it with similar asks, because that signal-boosts those asks. Labs can only focus on so many things, so it helps when the asks are concentrated on a few issues they could actually address given their resources and constraints.

Risks:

  • You might be worried that doing this will hurt your relationship with the lab. Once you receive an offer, they are extraordinarily unlikely to rescind it because you made some safety or alignment requests. I have never heard of anything remotely like this, but obviously I cannot guarantee it is impossible. They may not listen or respond to the requests, but you don’t really need to worry about losing the offer. (NOTE: this does NOT apply before you have the offer! I am NOT recommending this strategy during the application or interview process; it comes after.)

  • You might worry that being vaguely demanding will make the labs less likely to hire people like you, e.g. people who care about safety and alignment. This would be bad! This strategy largely relies on the fact that people who “get AGI” tend to also be more cognizant of alignment and safety. The labs might care far more about hiring researchers and engineers who “get AGI” than about the safety part, but currently the two tend to come packaged together. We probably can’t always count on this. Two things can help here: safety and alignment field-building among talented ML people really matters; and if you are already into safety and alignment, you should, to the extent possible, get as good as you can as an ML researcher or engineer. Try to be world class: that’s when these labs will care most about your values.

This strategy works! It can be scary, but people negotiate salary all the time. This is approximately as awkward as that, but it serves a much larger purpose.

Why is this post anonymous? I work at one of the labs and didn’t want to weird out my employer. I also don’t want you to think this is just a dig at other labs—it’s not! This advice works for any lab, including my own.

Credit: Shoutout to Rohin Shah, and members of a workshop at the Summit on Existential Security for ideas that influenced this piece.