It’s tough to turn down an opportunity for career growth, but I would consider what kind of growth you’d get here. Building organizational tech isn’t directly related to research on AI safety, so it’s not a quick path to working on AI x-risk. I’m not sure that the more x-risk focused AI organizations are hiring for organizational tech, though perhaps you could get hired for a more general software position. A better opportunity for career growth might come from applying to LTFF or FTX regrantors to fund a Master’s in ML or independent reskilling.
That said, if you’re not certain about wanting to work on AI safety, there are plenty of organizations in global poverty, alternative protein, public policy advocacy, and more that need organizational tech. While DeepMind does care about safety, I think their contribution to hastening the onset of AGI is ultimately very dangerous, and I would caution against supporting them in a general organizational capacity.