I’m sorry to hear about this, Nathan. As I say in the post, I do think that the question of how to do gut-stuff right from a practical perspective is distinct from the epistemic angle that the post focuses on, and I think it’s important to attend to both.
I agree that ideally one would do gut stuff right both practically and epistemically. In my case, trading away productivity and some general reasoning ability in exchange for modest epistemic gains wasn’t worth it.
I think it’s plausible that people in a similar situation to me, that is, people who are good at making decisions based on analytic reasoning alone and who have reason to think they might be vulnerable if they tried to believe things on a gut level as well as an analytic one, should consider not engaging with certain EA topics on a gut level. (I don’t restrict this to AI safety: I know people who’ve had similar reactions thinking about nuclear risk, and I’ve personally decided not to think about s-risk or animal welfare on a gut level either.)
I do want to emphasise that there was a tradeoff here: I think I have somewhat better AI safety takes as a result of thinking about AI safety on a gut level. The benefit, though, was reasonably small and not worth the other costs from an impartial welfarist perspective.