Thanks for clarifying! I agree that if someone just tells me (say) what they think the probability of AI causing an existential catastrophe is without telling me why, I shouldn’t update my beliefs much, and I should ask for their reasons. Ideally, they’d have compelling reasons for their beliefs.
That said, I think I might be slightly more optimistic than you about the usefulness of forecasting. I think that my own credence in (say) AI existential risk should be an input into how I make decisions, but that I should be pretty careful about where that credence has come from.
I think we’ve arrived at a nice place then! Thank you so much for reading!