I recently spoke to an applied research engineer at DeepMind who I could put you in touch with. My understanding is that, unless you are directly involved with the AI safety team, you could probably contribute more to minimising AI x-risk elsewhere. This is highly dependent on the details of your other potential avenues for contribution and on the exact role. For example, if you end up working very closely with the AI safety team, that would be a more valuable role than working elsewhere in DeepMind.
Feel free to message me and I’ll connect you.