Strongly agreed, and I think it’s one of the most important baseline arguments against AI risk. See Linch’s motivated reasoning critique of effective altruism:
https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism
I agree that theorizing is more fun than agonizing (for EA types), but I feel like the counterfactual should be theorizing vs theorizing, or agonizing vs agonizing.
Theorizing: Speaking for myself, I bounced off of both AI safety and animal welfare research, but I didn’t find animal welfare research less intellectually engaging, nor less motivating, than AI safety research. If anything, the tractability and the sense of novel territory make it more motivating. Though maybe I’d find AI safety research more fun if I were better at math. (I’m doing my current research on longtermist megaprojects partially because I do think it’s much more impactful than what I can do in the animal welfare space, but also partially because I find it more motivating and engaging, so take that however you will.)
Agonizing: Descriptively, I don’t think the archetypal x-risk-focused researcher is less neurotic, or less prone to mental health issues, than the archetypal EAA. I think it’s more likely that their agonizing differs in kind rather than in degree. To the extent that there is a difference favoring x-risk-focused researchers, I would guess it’s mostly due to other causes, e.g. a) demographic differences in which groups different cause areas draw from, b) the (recent) influx of relative wealth/financial stability for x-risk researchers, or c) potentially cause-area- and institution-specific cultural factors.