Executive summary: This exploratory post argues that many cross-cause prioritization judgments in Effective Altruism (EA) rely on philosophical arguments that are too fragile, underdeveloped, and contentious to justify high confidence, and calls for greater humility, skepticism, and diversification in how cause prioritization is approached.
Key points:
Philosophical foundations of cause prioritization are often weak and contested: High-stakes comparisons between causes like global health, animal welfare, and existential risk rely on contentious philosophical assumptions (e.g., population ethics, decision theory) where even specialists disagree and evidence is largely intuitive or inconclusive.
Aggregation methods yield dramatically different results and are themselves under-specified: Tools like Rethink Priorities’ moral parliament show that, depending on how you aggregate across moral theories, you might end up prioritizing entirely different cause areas, even with the same inputs (see the sketch after this list).
We should treat philosophical evidence with the same skepticism often applied to empirical studies: EA norms promote caution around empirical findings (e.g., requiring replication of RCTs before acting on them); similarly, philosophical conclusions—especially recent ones—should not be assumed robust just because they seem internally coherent.
Overconfidence in philosophical conclusions risks distorting decision-making: Given the fragility of many key premises, strong endorsements of specific causes or interventions often outpace the available justification, especially when they rest on specific philosophical worldviews or aggregation methods.
Calls for epistemic humility and practical diversification: Instead of treating EA as a set of settled answers, we should treat it as a method of inquiry, remain open to pluralistic approaches, and explicitly account for uncertainty in cause prioritization and funding decisions.
Relying on intuitions, “common sense,” or anti-realist views doesn’t resolve the uncertainty: These alternatives fail to escape the need for explicit reasoning and risk undermining EA’s foundational commitment to evidence and argument.
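A minimal sketch of the aggregation point above, using entirely made-up credences and choiceworthiness scores (this is an illustration, not Rethink Priorities’ actual moral parliament tool): two common aggregation rules, maximizing expected choiceworthiness versus deferring to one’s highest-credence theory, recommend different cause areas from identical inputs.

```python
# Hypothetical illustration: the same credences and theory-level scores,
# aggregated two different ways, can recommend different cause areas.

# Credence in each moral theory (assumed numbers for illustration only).
credences = {"totalism": 0.40, "person_affecting": 0.35, "virtue": 0.25}

# Each theory's choiceworthiness score for each cause (0-100, made up).
scores = {
    "totalism":         {"x_risk": 95, "animal_welfare": 60, "global_health": 50},
    "person_affecting": {"x_risk": 20, "animal_welfare": 55, "global_health": 90},
    "virtue":           {"x_risk": 30, "animal_welfare": 50, "global_health": 85},
}

causes = ["x_risk", "animal_welfare", "global_health"]

# Rule 1: maximize expected choiceworthiness (credence-weighted average).
mec = {c: sum(credences[t] * scores[t][c] for t in credences) for c in causes}

# Rule 2: "my favorite theory" -- defer entirely to the highest-credence theory.
favorite = max(credences, key=credences.get)
mft = scores[favorite]

print("Expected choiceworthiness picks:", max(mec, key=mec.get))
print("Favorite-theory rule picks:     ", max(mft, key=mft.get))
```

With these inputs the credence-weighted rule picks global health while the favorite-theory rule picks existential risk; which rule to use is precisely the under-specified choice the post highlights.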
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.