I agree we should be skeptical! (Although I am open to believing such events are possible if there seem to be good reasons to think so.)
But while the intractability stuff is kind of interesting, I don’t think it actually says much about how skeptical we should be of different claims in practice.
I think if someone tells you that a potentially catastrophic event has positive probability, then the general intractability of probabilistic inference is a good reason to demand a demonstrably tractable model of the system that generates the event before incurring massive costs. Otherwise, this person is just saying: look, I’ve got some beliefs about the world, and I’m able to infer from those beliefs that this event that’s never happened before has positive probability. My response is that this just isn’t the sort of thing we can do in the general case; we can only do it for specific classes of models. Hence my recommendation for more science and less forecasting in EA.
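To make the “specific classes of models” point concrete, here’s a toy sketch (my own illustrative example, with made-up numbers, not anything anyone has actually proposed): in a chain-structured model, the marginal probability of the final state can be computed exactly in time linear in the chain length, even though exact inference is intractable for general models.

```python
# A toy sketch (illustrative only; the model and numbers are made up): exact
# inference is intractable in general, but tractable for restricted model
# classes. In a chain-structured (Markov) model, the marginal of the final
# state is computed exactly in time linear in the chain length, whereas
# naive enumeration over all 2**n state sequences would be exponential.

P_INIT = {0: 0.9, 1: 0.1}                  # P(X_1): hypothetical prior
P_TRANS = {0: {0: 0.95, 1: 0.05},          # P(X_{t+1} | X_t): hypothetical
           1: {0: 0.10, 1: 0.90}}          # transition probabilities

def marginal_last_state(n_steps: int) -> dict:
    """Exact marginal P(X_n) via the forward recursion: O(n) for binary states."""
    belief = dict(P_INIT)
    for _ in range(n_steps - 1):
        belief = {
            x_next: sum(belief[x] * P_TRANS[x][x_next] for x in belief)
            for x_next in (0, 1)
        }
    return belief

print(marginal_last_state(50))  # tractable even though 2**50 state sequences exist
```

The point isn’t this particular model, of course; it’s just that for some model classes the computation behind a probability claim can actually be exhibited and checked, which is exactly what I’d want before incurring massive costs.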
Thanks for clarifying! I agree that if someone just tells me (say) what they think the probability of AI causing an existential catastrophe is without telling me why, I shouldn’t update my beliefs much, and I should ask for their reasons. Ideally, they’d have compelling reasons for their beliefs.
That said, I think I might be slightly more optimistic about the usefulness of forecasting than you are. I think that my own credence in (say) AI existential risk should be an input into how I make decisions, but that I should be pretty careful about where that credence has come from.
I think we’ve arrived at a nice place then! Thank you so much for reading!