While not directly related, this post gives an opportunity to ask questions that are interesting and might benefit others:
How would an aligned EA, working on something approximately EA, recruit and judge other EA SWEs for a software role?
Would you use Leetcode-style coding puzzles, pair programming, or some kind of assignment-based project?
What if I wanted to recruit someone better or more senior than me (by a moderate margin)? How would I go about this?
What have you found surprising about recruiting or building a team for a small project?
(I think this is hard, or at least a big topic. For example, SWE is largely about architecture/vision/design decisions that often take on a social/cultural aspect.)
One reason I’m asking is that I’m assuming the answers might be different than normal, because there might be differences/advantages in having an EA recruit an EA.
(On the other hand, there are probably reasons and examples that are evidence against this.)
As someone who has recently been through the AI Safety org interview circuit: about 50% of my interviews were traditional Leetcode-style algorithmic/coding puzzles and 50% were more practical. This seems pretty typical compared to industry.
The EA orgs I interviewed with were very candid about their approach, and I was much less surprised by the style of interview I got than I was surprised when interviewing in industry. Anthropic, Ought, and CEA all very explicitly lay out what their interviews look like publicly. My experience was that the interviews matched the public description very well.
My priors:
1. It’s mostly the same as normal org hiring, aside from things about value alignment (which are a whole other story).
2. If I had to hire someone more senior than me, I’d ask someone more senior than me to help interview. EA has some pretty senior people. (I just referred one of them to this comment; maybe he’ll reply.)
1. It depends a lot on what I’m hiring for and what the candidate’s background is. If it’s an ML job, I ask ML programming and design questions, but if I’m hiring someone to do networking, I’ll ask a question about distributed algorithms or something. This is in contrast to how Google hires: they’re hiring generic SWEs, so they don’t care much about the particulars.
2. If I had to hire someone more senior, I would reach out to other, more senior people whom I trust and ask them for help.
3. For a small team, generalism can be more important than it seems.
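To make the role-specific point concrete, here is a hedged sketch of the kind of small networking-flavored exercise I have in mind (the token-bucket rate limiter is my own illustrative choice, not a question anyone in this thread actually uses):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.clock = clock          # injectable clock makes the class testable
        self.last = clock()

    def allow(self, cost=1.0):
        """Consume `cost` tokens if available; return whether the request passes."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Part of what I’d probe in discussion is the injectable clock: it turns a time-dependent component into something deterministic to test, which tells you more about real-world engineering judgment than a puzzle does.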
Thanks for the answers, this makes a lot of sense.
Can you be specific about #1? For example, what format of programming tests would you prefer to give to a generalist engineer?
By the way, do you mean something special or “hands-on” for the ML programming or design questions?
For ML programming, it seems bad to rely on ML or design questions in the sense of verbal question-and-answer. Actually designing or choosing among ML approaches (the scientific-knowledge part) is a tiny part of the job, so many ML knowledge questions would be unnatural: they reward memorization of standard ML textbooks, select for “enthusiasts” who read up on recent libraries, and blow out strong talent who have solved a lot of hard real-world problems.
Yeah, I personally find it very hard to do ML interviews for that reason. So far I’m doing a mix of theory/conceptual questions and practical ML coding questions. It helps if the conceptual questions include some unusual setups, or ask about unusual tweaks.
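As an illustration of what a “practical ML coding question with an unusual tweak” could look like (a hypothetical toy of my own, not an actual question from any org): implement the gradient of a logistic loss where each example carries its own weight, so candidates can’t just recite the memorized textbook formula.

```python
import numpy as np

def weighted_logreg_grad(w, X, y, sample_weight):
    """Gradient of per-example-weighted logistic loss w.r.t. weights `w`.

    The tweak vs. the textbook version: each example has its own weight,
    and the result is normalized by the total weight.
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))                 # predicted probabilities
    # Weighted average of the per-example gradients (p - y) * x.
    return X.T @ (sample_weight * (p - y)) / np.sum(sample_weight)

# One gradient-descent step on a tiny dataset.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])
w = np.zeros(2)
w -= 0.5 * weighted_logreg_grad(w, X, y, np.ones(3))
```

With uniform weights this reduces to the standard gradient, which gives an easy correctness check; the interesting interview follow-up is asking what non-uniform weights do to the optimum.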