I agree with your reasoning, and the way you’ve articulated it is very compelling to me! It seems that the bar this evidence would need to clear is, quite literally, impossible to reach.
I would even take this further and argue that your chain of reasoning could be applied to most causes (perhaps even all?), and that doing so seems valid.
Would you disagree with this?
Your reply also raises a broader question for me: What criteria must an intervention meet for our credence that its expected value is positive to exceed 50%, thereby justifying work on it?
I mean, I didn’t actually give any argument for why I don’t believe AI safety is good overall (assuming pure longtermism, i.e., taking into account everything from now until the end of time). I just said that I would believe it if there were evidence robust to unknown unknowns. (I haven’t argued that no such evidence already exists, although the burden of proof is very much on the opposite claim, to be fair.) But yes, I think this criterion applies to all causes where unknown unknowns are substantial, and I believe that is all of them as long as we’re evaluating them from a pure longtermist perspective. And whether any cause meets this criterion depends on one’s values, I think. From a classical utilitarian perspective (and assuming the trade-offs between suffering and pleasure that most longtermists endorse), for example, I think it’s very plausible that none does.
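To make the unknown-unknowns point concrete, here is a toy Monte Carlo sketch (my own illustration, not anything argued above; the magnitudes and the sign-symmetric prior on the long-term effect are pure assumptions). The idea: if the all-time value of an intervention is a modest, well-evidenced near-term benefit plus a much larger long-term term whose sign we have no robust evidence about, then our credence that the total is positive barely exceeds 50%.

```python
import random

# Toy model: all-time value = known near-term benefit + long-term
# term dominated by unknown unknowns. All numbers are assumptions
# chosen only to illustrate the structure of the argument.

random.seed(0)

NEAR_TERM_BENEFIT = 1.0   # well-evidenced, small, positive
LONG_TERM_SCALE = 100.0   # long-run stakes dwarf the near term

def sampled_total_value() -> float:
    # Sign-symmetric long-term effect: no evidence robust to
    # unknown unknowns tells us which way it points.
    long_term = random.gauss(0.0, LONG_TERM_SCALE)
    return NEAR_TERM_BENEFIT + long_term

samples = [sampled_total_value() for _ in range(100_000)]
credence_positive = sum(v > 0 for v in samples) / len(samples)
print(f"P(total value > 0) ~= {credence_positive:.3f}")
# Prints roughly 0.504: the near-term evidence moves the credence
# only marginally above 50%.
```

Under these (assumed) conditions, evidence would have to constrain the sign of the dominant long-term term itself, not just the near-term effects, to push the credence meaningfully past 50%, which is exactly the bar being described as effectively unreachable.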