I describe it as a calling. It's not so much that I feel a strong emotion as that it seems like the most natural thing in the world to want to help people, and to do so in the most effective way possible. Since I focus specifically on x-risk from AI, I experience this as a calling to work on AI safety: it feels like an obvious problem in desperate need of a solution.
For me it's very similar to the kind of "calling" people talk about in religious contexts. Now that I'm Buddhist, I conceptualize what happened when I was 18, when I began to care about and pursue AI safety, as the awakening of bodhicitta. Although I already wanted to become enlightened at that time (even if I didn't really appreciate what that meant), it wasn't until I cared about saving humanity from AI that I developed the compassion and desire that drove me to bodhicitta. With time that calling has broadened, though I still mainly focus on AI safety.