Executive summary: This reflective post argues that while taking ideas seriously—aligning one’s actions with abstract reasoning—is a core and admirable principle within effective altruism, it is also dangerous if done uncritically, as evidenced by both historical moral progress and harmful fanaticism; the author explores strategies, such as moral uncertainty and avoiding speculative fanaticism, for navigating this tension responsibly.
Key points:
Taking ideas seriously can lead to both moral progress and moral catastrophe: The author reflects on how people often act contrary to their stated beliefs, but effective altruists tend to let reasoning guide their actions, which can yield both transformative change (e.g., abolition, feminism) and dangerous extremism (e.g., the Zizians case).
Most people don’t act on abstract reasoning, even if they accept it intellectually: This cognitive dissonance—believing one thing while acting against it—is widespread and, in many ways, socially stabilizing.
Effective altruism embraces the risk of letting reasoning guide action, but should do so cautiously: The community aims to improve the world by connecting beliefs and actions, but this comes with the responsibility of avoiding mistakes that arise from flawed or speculative reasoning.
Tools for responsible idea-following include side-constraints, moral uncertainty, and filters against speculative fanaticism: Refusing to cross moral lines (such as committing violence), considering multiple moral frameworks, and being wary of speculative or extreme conclusions all help mitigate the dangers.
Effective altruists often respond to these dangers by developing new ideas to take seriously: The community’s recursive nature—reflecting on the limits and implications of its own reasoning—is seen as both a feature and a risk.
The post is part of a broader sequence on EA definitions and is framed as a personal, exploratory reflection on norms and practices within the movement.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.