**Executive summary:** This post argues that during an intelligence explosion, humanity could face not just epistemic disruption but full epistemic collapse—a breakdown of shared frameworks for determining truth—leaving most people disempowered and unable to meaningfully participate in shaping the future.

**Key points:**
- The paper by MacAskill and Moorhouse underestimates how destabilizing superintelligence could be; disruption may escalate into outright collapse, with truth itself contested and unstable.
- AI-assisted reasoning tools (fact-checking, forecasting, augmented wisdom) rely on shared epistemic frameworks and trust, both of which may fail when basic assumptions are overturned rapidly.
- Likely additional destabilizers include digital resurrection (hyperrealistic revivals of the dead) and preference extrapolation (AIs revealing hidden drives), both of which could erode people’s sense of identity and authority.
- A “naturalist underclass” may emerge: people who refuse cognitive or technological enhancement could become epistemically obsolete, excluded from democratic participation and everyday social life.
- The author offers a fictional vignette illustrating what epistemic collapse might feel like: confusion, mistrust, alienation, and an inability to engage with enhanced peers.
- Potential mitigations include building transitional “epistemic scaffolding,” learning from historical worldview shifts, evaluating AI persuasion capabilities, mapping epistemic resilience, and exploring systems that let unenhanced humans track truth without full understanding.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.