I guess the worry then is that you’re drawn into fanaticism: in principle, any positive-probability event, however small that probability is, can be bad enough to justify taking extremely costly measures now to ameliorate it.
I’d also say that assigning every event positive probability can’t be a part of Bayesianism in general if we want to allow for a continuum of possible events (e.g., as many possible events as there are real numbers): a probability measure can assign positive probability to at most countably many distinct outcomes.
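For reference, the standard argument behind this claim (a sketch, using nothing beyond the axioms of probability):

```latex
Let $(\Omega, \mathcal{F}, P)$ be a probability space with $\Omega$ uncountable,
and suppose every singleton has positive probability: $P(\{\omega\}) > 0$ for all
$\omega \in \Omega$. Define
\[
  A_n = \{\omega \in \Omega : P(\{\omega\}) > 1/n\}, \qquad n = 1, 2, \dots
\]
Each $A_n$ contains at most $n$ points, since otherwise the total probability of
finitely many disjoint singletons in $A_n$ would exceed $1$. Hence
$\Omega = \bigcup_{n=1}^{\infty} A_n$ is a countable union of finite sets, and so
countable --- contradicting the assumption that $\Omega$ is uncountable. So at
most countably many outcomes can receive positive probability.
```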
I do think the best way out for the position I’m arguing against is something like this: assume all events have positive probability, set an upper bound on the badness of events and on the costliness of ameliorating them (to avoid fanaticism), and then hope you can run simulations that give you a tight margin of error with low failure probability.
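To make the “simulations with a tight margin of error” hope concrete, here’s a minimal Monte Carlo sketch with a Hoeffding-style confidence bound. Everything here is illustrative: `simulate_once` and the toy 0.001 catastrophe probability are hypothetical stand-ins, not anyone’s actual model. Note how, for rare events, the additive margin can exceed the estimate itself unless the number of trials is enormous; that is part of why tight error bars are hard to come by here.

```python
import math
import random

def estimate_failure_probability(simulate_once, n_trials, delta=0.05):
    """Monte Carlo estimate of P(failure), plus a two-sided Hoeffding
    margin that holds with probability >= 1 - delta.

    simulate_once: callable returning True on a simulated "catastrophe"
                   (hypothetical stand-in for a real world-model).
    """
    failures = sum(simulate_once() for _ in range(n_trials))
    p_hat = failures / n_trials
    # Hoeffding bound: |p_hat - p| <= sqrt(ln(2/delta) / (2 * n_trials))
    # with probability at least 1 - delta.
    margin = math.sqrt(math.log(2 / delta) / (2 * n_trials))
    return p_hat, margin

# Toy model: a "catastrophe" occurs with true probability 0.001.
random.seed(0)
p_hat, margin = estimate_failure_probability(
    lambda: random.random() < 0.001, n_trials=200_000)
print(f"estimate = {p_hat:.4f}, margin = +/- {margin:.4f}")
```

Even with 200,000 trials, the 95% margin here is about 0.003, three times the true probability being estimated; driving the margin well below a rare event’s probability requires the number of trials to scale roughly with the inverse square of that probability.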
So I think we should be skeptical of any claim that some event has positive probability when the event has never happened before and we lack a well-worked-out model of the process that would produce it. And it strikes me that these features are more typical of longer-term predictions.