Executive summary: This exploratory post argues that we should avoid fanaticism in moral philosophy and decision theory not by breaking core axioms or accepting bizarre conclusions, but by “curve-fitting” — allowing our theories to include exceptions for extreme, fanatical cases when the cost of ignoring intuitions outweighs the value of simplicity.
Key points:
Fanaticism arises when low-probability but astronomically high-payoff outcomes dominate decision-making, leading to intuitively absurd or exploitative results.
Rejecting fanaticism usually requires breaking widely-accepted axioms of decision theory, but keeping it leads to intolerable conclusions — creating a methodological tension.
The author proposes treating simplicity as important but not infinitely weighted: we should be willing to sacrifice some simplicity when extreme intuitions matter more.
Their solution is to “curve-fit” theories: preserve general principles but add exceptions in fanatical edge cases, akin to threshold deontology, multi-level utilitarianism, or moral pluralism.
This approach is not mere intuitionism or arbitrary permissivism; rather, it makes explicit the trade-offs philosophers already make between simplicity and fitting moral intuitions.
Implications for effective altruism are limited: while this method tempers concerns about fanaticism in longtermism or animal welfare debates, it doesn’t suggest abandoning low-probability but reasonable bets (e.g., seatbelts).
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.