You don’t need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and not to want extreme levels of change. I think reading too much into differing morals is the wrong lens here.
The most obvious explanation for the difference between Altman and people more concerned about AI safety (not specifically EAs) is their differing estimates of how likely AI risk is relative to other risks.
That said, the point that it’s disingenuous to ascribe cognitive bias to Altman for simply holding his opinion is a fair one, and general discourse norms counsel against taking such attributions too far. Still, given Altman’s exceptional capacity for unilateral action due to his position, some concern is reasonable.