I agree that what matters is how much a person working in an area can reduce the chance of a catastrophic event, rather than the baseline chance of catastrophe itself. And I agree that being excited about an issue or topic will make you more effective.
But working on a topic you are less excited about can still have a better overall impact. I think that being excited about a topic is unlikely to make you more than ten times as effective as an average researcher.
Your example numbers, where you have a 0.001% impact on the thing you are excited about and a 0.000001% impact on another thing, seem unrealistic if these differences are supposed to capture the additional impact of being excited about a thing (as spelled out below). I don't think you meant to pick realistic numbers here; these were presumably just examples to illustrate your point. One might also argue that impacting AI alignment is much harder than impacting climate change, but there I would disagree, based on arguments around neglectedness.
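To spell out the arithmetic (a rough sketch, taking your numbers as purely illustrative and my 10x figure as an upper bound on the excitement effect): those impacts differ by a factor of

$$\frac{0.001\%}{0.000001\%} = 1000 \gg 10,$$

so excitement alone can account for at most one of those three orders of magnitude; the rest would have to come from differences between the fields themselves.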
However, for OP it might also be a case of the common-among-EAs question of how much you are willing to sacrifice. We generally accept it when people limit the sacrifices they are willing to make for altruistic causes, and switching from a field you are excited about to a different, more impactful field can be a big sacrifice.