As an AI Safety person, I tend to believe the community should move further toward existential risk (though I'm not claiming AI Safety maximalism). On the other hand, even for someone whose top priority is AI safety, your diversification strategy may be optimal if the cause feels too abstract to fully engage their motivation.
In fact, I considered doing some unrelated, non-EA volunteering to have some more concrete impact as well, but decided I didn't actually have the time. I may still do so at some point, but for now I'm all-in on AI Safety.