Executive summary: An exploratory, steelmanning critique argues that contemporary longtermism risks amplifying a broader cultural drift toward safetyism and centralized control, is skewed by a streetlight effect toward extinction-risk work, and—when paired with hedonic utilitarian framings—can devalue individual human agency; the author proposes a more empowerment-focused, experimentation-friendly, pluralistic longtermism that also treats stable totalitarianism and “flourishing futures” as first-class priorities.
Key points:
Historical context & “cultural longtermism”: Longtermism is situated within a centuries-long rise in societal risk-aversion (post-WW2 liberalism, 1970s environmentalism/anti-nuclear). This tide brings real benefits but also stagnation risks that critics plausibly attribute to over-regulation and homogenizing global governance.
Reconciling perceptions of power: Even if explicit longtermist budgets are small, the indirect, often unseen costs of safetyist policy—slower medical progress, blocked nuclear power, NIMBY housing constraints, tabooed research—create “invisible graveyards,” making a de facto “strict culturally-longtermist state” more feasible than analysts assume.
Streetlight effect inside longtermism: Because extinction risks are unusually amenable to analysis and messaging, they crowd out harder-to-measure priorities—s-risks (e.g., stable totalitarianism), institutional quality, social technology, and positive-vision “flourishing futures”—potentially causing large path-dependent misallocations.
Utilitarian framings and the individual: Widespread (often implicit) reliance on total hedonic utilitarianism dissolves the moral salience of unique persons into interchangeable “qualia-moments” while elevating the survival of civilization as a whole—fueling totalitarian vibes and explaining why deaths of individuals (e.g., aging) receive less emphasis than civilization-level x-risk.
Risk of over-centralization: If longtermist x-risk agendas unintentionally bolster global regulation and control, they may increase the probability of totalitarian lock-in—the very kind of non-extinction catastrophe that longtermism underweights because it runs through messy socio-political channels.
Toward a more humanistic longtermism: Prioritize empowerment, experimentation, and credibly neutral social technologies (e.g., prediction markets, algorithmic policy rules, liability schemes); invest in governance concepts that reduce politicization, expand policy VOI via pluralism (charter-city-like diversity), and explicitly target anti-totalitarian interventions (propaganda/censorship-resistance, offense-defense mapping for control-enabling tech).
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.