It sounds like you’re a fairly senior software engineer, so my first thought is to look at engineering roles at AI safety orgs. There are a bunch of them! You’ve probably already seen this post, but just in case: AI Safety Needs Great Engineers.
It sounds to me like you're concerned about a gap between the type of engineering work you're good at and the type of engineering work that AI safety orgs need. This is something I've also been thinking about a lot recently. I'm a full-stack developer for a consumer product, which means I spend a lot of time discussing plans with product managers, writing React code, and sometimes working on backend APIs. Meanwhile, it seems like AI safety orgs mostly need great backend engineers who are very comfortable setting up infrastructure and working with distributed systems, and/or machine learning engineers.
This suggests two options to me, if you want to stay focused on software engineering rather than research or something else:
Find a way to help using your existing skills. This sounds like your option A above, but to me option A reads like you want to work independently as a contractor or something? I'm not sure; it sounds like you're not too sure what it would look like in practice either. But there are AI safety orgs with job postings for full-stack or frontend/UX engineers. If this lines up with your skillset and personal fit, it could be a really good option. One example is Ought. They're unusual in the AI safety space in that they're building a user-facing product, so all of the frontend skills that apply at any other startup would apply there. I know other AI safety orgs have frontend roles too, but I think those are more focused on building internal tooling.
Build up your backend/infrastructure/ML skills enough that you could fill one of the more common AI safety engineering roles, like this one. I don't know how easy it is for a great frontend engineer to become a great backend/infra engineer. I expect it's MUCH faster to make that leap than it is for a complete novice to become a great backend engineer. But how quickly you can do it depends on a lot of things, like your existing experience and the quality of the learning environment you're able to put yourself in.
I’m personally trying to decide between these options right now. The first thing to check is whether you feel excited at all about option 2. If ramping up in those new areas sounds super unpleasant, then I think you can rule that option out right away. But if you feel excited about both options and think you could be successful at either (which is the situation I’m in), then it’s a tougher question. I’m planning to talk to a bunch of AI safety folks at EAG in a few weeks to help figure out how to maximize my impact, and I hope to have more clarity on the matter then. I’ll update this comment afterwards if I have anything new to add.
What I'm good at:
I think my experience is probably sufficient to apply to Anthropic or Redwood or any other place that doesn't need an ML background, including my background in backend/infra. I've held many "tech lead" roles where I was basically in charge of everything, so I'm up for that.
What I enjoy:
The thing I imagine I'd be missing is the social interaction, or something like that.
I don't think I'd enjoy sitting alone on a hard problem for weeks or months; I imagine I'd be sad.
Location:
I don't want to relocate (at least not full-time), so Anthropic is off the table.
Why do you think that Anthropic or Redwood etc. would be missing social interaction? I wouldn't have assumed that… in the Anthropic post I linked, they mention that they love pair programming.
Anthropic and Redwood will hire you with zero ML experience, so please don't spend time learning ML before applying.
[I think this deserves its own comment]
Yes, good point. I shouldn't have included ML in the list of things to learn in option 2.