[Question] Concerns about AI safety career change

Summary:

  • I’m a software engineer interested in working on AI safety, but confused about the field’s career prospects. I’ve outlined my main concerns below.

  • In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/​companies working on AI safety are very unclear from the outside.

  • It’s also not clear whether frontend skills are seen as useful, or whether applicants need to be based in the US.

Full text:

I’m an experienced full-stack software engineer and software/​strategy consultant based in Japan. I’ve been loosely following EA since 2010, and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically.

However, having spent ~15 hours reading about the current state of the field, its organizations, and the role of engineers, I find myself with more questions than I started with. In the hope of finding more clarity, and to help share what other engineers considering this career shift might be wondering, I’ve outlined my main points of concern below:

  1. The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours[1][2][3]. Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing?

    1. Many job descriptions listed very similar skills for engineers, even when the orgs seemed to take very different approaches to tackling AI safety problems. Is the set of required software skills really that uniform across organizations?

    2. Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects?

    3. I’m also curious whether projects use a diverse set of technologies. Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I’m excited to learn about ML, but got very mixed signals about the expectations.)

  2. Some orgs describe their agendas and goals. In many cases, these seemed very similar to me: all of them are pragmatic, and many even have shared or adjacent areas of research. Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique?

    1. As an example, MIRI states that they want to “ensure that the creation of smarter-than-human machine intelligence has a positive impact”[4], Anthropic states they have “long-term goals of steerable, trustworthy AI”[5], Redwood Research states they want to “align […] future systems with human interests”[6], and the Center for AI Safety states they want to “reduce catastrophic and existential risks from AI”[7]. What makes these different from each other? They all sound like they’d lead to similar conclusions about what to work on.

    2. I was surprised to find that some orgs don’t really describe their work or what differentiates them. How are they supposed to find the best engineers if interested candidates can’t tell what areas they’re working on? I also found that it’s sometimes very difficult to evaluate whether an org is active and/​or trustworthy.

      1. Related to this, I was baffled to find that MIRI hasn’t updated their agenda since 2015[8], and their latest publication is dated 2016[4]. However, their blog seems to have ~quarterly updates? Are they still relevant?

    3. Despite finding many orgs by reading articles and publications, I couldn’t find a good overall list of organizations that specifically work on AI safety. Such a list might be valuable for people coming into the field, especially if it had brief overviews of what makes each org stand out. It could also be relevant for donors and community builders, as well as people looking for a particular niche.

    4. It’s a bit unclear how the funding for AI safety is organized. Some groups get grants from CEA and longtermist funds, some are sponsored by universities, but many also seem to be private companies? How does that work? (My impression is that AI safety is still very difficult to monetize.)

  3. Frontend skills are sometimes listed in AI safety orgs’ job descriptions, but rarely mentioned in problem profiles or overviews of the engineering work. Are people looking for frontend skills or not?

    1. As someone whose core experience is in developing business-critical web apps, I’m particularly curious whether web/​mobile apps are needed to complement other tools, and whether UI/​UX design gets any consideration in AI safety work.

    2. I’d argue that frontend and design skills can be relevant, in particular for meta tools like collaboration platforms, or for making results more visual and interactive (like OpenAI often does). Long-term research projects may also benefit from custom UIs for system deployment, management, and usage. I wonder what fraction of AI safety researchers would agree.

    3. My own skills are pretty evenly distributed between frontend and backend, and I’m left wondering whether AI safety orgs would rather need someone with deeper specialization, or someone bringing skills they currently lack in-house.

  4. It seems the vast majority of AI safety work is done in the US. However, US working hours can be tricky from Asia due to little timezone overlap. How much of a problem is this seen as? Are there any AI safety groups based in Asia, Africa, or the EU that have a good track record?

    1. What would even be a reasonable heuristic for assessing a “good track record” in this case? For research orgs one can look at recent publications, but not every org does research. The best heuristic I have right now is whether the org in question has been mentioned in at least two introductory posts across 80,000 Hours, the EA Forum, and the AI Alignment Forum. This could be another benefit of a curated list like the one mentioned above.

My counterfactual to AI safety work would be becoming financially independent in ~3-5 years, after which I’d likely do independent work/​research around AI policy and meta-EA matters anyway. I’m thinking that transitioning into AI safety now could be better, as the problems have become more practical and more urgent, and working on them would let me gain relevant skills/​results sooner.

I decided to post this on the EA forum in order to get a broader view of opinions, including from people not currently engaged with the field. Any advice or insights would be much appreciated!

If you happen to be looking for someone with full-stack skills and are ok with flexible hours/​location, feel free to drop me a private message as well!

  1. ^
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^
  8. ^