I don’t find accusations of fallacy helpful here. The authors say explicitly in the abstract that they estimated the probability of each step conditional on the previous ones. So they are not making a simple, formal error like multiplying a bunch of unconditional probabilities whilst forgetting that this only works if the events are independent. Rather, you and Richard Ngo think that their estimates for the explicitly conditional probabilities are too low, and you are speculating that this is because they are still really thinking of the unconditional probabilities. But I don’t think “you are committing a fallacy” is a very good or fair way to describe “I disagree with your probabilities and I have some unevidenced speculation about why you are giving probabilities that are wrong”.
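For reference, the decomposition the abstract describes is just the chain rule (a minimal sketch of the generic identity, not the paper’s own notation):

$$
P(A_1 \cap A_2 \cap \cdots \cap A_n) = P(A_1)\, P(A_2 \mid A_1) \cdots P(A_n \mid A_1, \ldots, A_{n-1})
$$

This only collapses to the product of the marginals $P(A_1) P(A_2) \cdots P(A_n)$ in the special case where the steps are independent, so the real question is whether the conditional estimates themselves are sensible, not whether the multiplication is valid.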
Saying they are conditional does not mean they are. For example, why is P(We invent a way for AGIs to learn faster than humans|We invent algorithms for transformative AGI) only 40%? Or P(AGI inference costs drop below $25/hr (per human equivalent)[1]|We invent algorithms for transformative AGI) only 16%!? These would be much more reasonable as unconditional probabilities. At the very least, “algorithms for transformative AGI” would be used to massively increase software and hardware R&D, even if expensive at first, such that inference costs would quickly drop.
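To make the size of the disagreement concrete, here is a rough illustration with mostly made-up numbers (only the 40% and 16% above are from the paper; the 60%, 90%, and 80% are purely hypothetical). With a hypothetical 60% for the first step and the quoted conditionals for the next two,

$$
0.6 \times 0.4 \times 0.16 \approx 0.04,
$$

whereas with conditionals that reflect how much “algorithms for transformative AGI” would plausibly accelerate the later steps, say 90% and 80%,

$$
0.6 \times 0.9 \times 0.8 \approx 0.43,
$$

a bottom line more than ten times higher from just these three steps. Near-marginal numbers plugged into a long conditional chain compound into a very small final product.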
As an aside, surely the $25/hr inference-cost milestone has basically already been reached? At least for the 90th percentile human on most intellectual tasks.
I don’t think you can possibly know whether they really are thinking of the unconditional probabilities, or whether they just have very different opinions and instincts from you about the whole domain, which make very different, genuinely conditional probabilities seem reasonable to them.
It just looks a lot like motivated reasoning to me, as if they started with the conclusion and worked backward. Those examples are pretty unreasonable as conditional probabilities. Do they explain why “algorithms for transformative AGI” would be so unlikely to meaningfully speed up software and hardware R&D?