I don’t think you can know whether they are actually thinking of the unconditional probabilities, or whether they just have opinions and instincts about the whole domain so different from yours that very different, genuinely conditional probabilities seem reasonable to them.
It looks a lot like motivated reasoning to me, as if they started with the conclusion and worked backward. Those examples are pretty unreasonable as conditional probabilities. Do they explain why “algorithms for transformative AGI” would be very unlikely to meaningfully speed up software and hardware R&D?
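To make the distinction concrete (a minimal sketch; the event names are illustrative, not taken from their paper): the chain rule requires each factor in a conjunctive estimate to be conditioned on everything before it,

$$P(A_1 \cap A_2 \cap \cdots \cap A_n) = P(A_1)\,\prod_{i=2}^{n} P(A_i \mid A_1, \dots, A_{i-1}).$$

If one instead multiplies roughly unconditional estimates $P(A_i)$ when the events are strongly positively correlated (conditioning on “we have algorithms for transformative AGI” should sharply raise the probability of fast software and hardware progress), the product can understate the joint probability by a large factor, which is the worry being raised here.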