My default hypothesis is that you're unconvinced by the arguments about AI risk in significant part because you are applying an unusually high level of epistemic rigour.
This seems plausible to me, based on:
The people I know who have thought deeply about AI risk and come away unconvinced often seem to match this pattern.
I think some of the people who care most about AI risk apply a lower level of epistemic rigour than I would; e.g. some seem to have much stronger beliefs about how the future will go than I think can reasonably be justified.