I would even take this further and argue that your chain of reasoning could be applied to most causes (perhaps even all?), which seems valid.
Would you disagree with this?
I mean, I didn’t actually give any argument for why I don’t believe AI safety is good overall (assuming pure longtermism, i.e., taking into account everything from now until the end of time). I just said that I would believe it if there were evidence robust to unknown unknowns. (I haven’t argued that such evidence doesn’t already exist, although the burden of proof is very much on the opposite claim, to be fair.) But yes, I think this criterion applies to all causes where unknown unknowns are substantial, and I believe that is all of them as long as we’re evaluating them from a pure longtermist perspective. And whether any cause meets this criterion depends on one’s values, I think. From a classical utilitarian perspective (assuming the trade-offs between suffering and pleasure that most longtermists endorse), for example, I think it’s very plausible that none does.