Executive summary: This exploratory post argues that when assessing morally mixed actions—like personal energy use or AI adoption—we should avoid both trivializing small harms and catastrophizing them, instead using tools like Pigouvian taxes, rough cost–benefit heuristics, and carefully framed universalizability tests to distinguish reasonable from wasteful resource use.
Key points:
Two common mistakes in thinking about collective harms are the rounding to zero fallacy (ignoring small contributions) and the total cost fallacy (treating all contributions as equally catastrophic).
The ideal solution is to internalize externalities through policies like carbon taxes, which would make tradeoffs transparent and remove the moral burden from individuals.
In the absence of such policies, individuals should estimate the expected net value of their actions, prioritizing the most cost-effective reductions (e.g., cutting gasoline use before electricity use) and remembering that donations to highly effective charities typically outweigh lifestyle sacrifices (see the sketch after these key points).
Universalizability reasoning (“what if everyone did that?”) can help but must be applied carefully: one should abstract to decision procedures, respect others’ preferences, and distinguish between subcategories of resource use to avoid absurd or overly broad conclusions.
Boycotts of technologies like AI, when motivated by indiscriminate universalization, risk suppressing good uses without affecting bad ones; a more sensible approach is to encourage and model responsible use.
Shifting social norms through moral stands is possible, but its effectiveness is empirical; activists should assess probabilities and stakes rather than acting on hope alone.
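To make the rough cost-benefit heuristic in the key points concrete, here is a minimal sketch in Python. It is not taken from the post: the social cost of carbon, the value of the forgone trip, and the charity's effectiveness are all hypothetical placeholder figures, chosen only to show how pricing the externality lets one compare a lifestyle sacrifice against a donation.

```python
# A minimal sketch (not from the post) of the rough cost-benefit heuristic:
# compare an action's value to you against the climate externality it imposes,
# priced at an assumed social cost of carbon. All numbers are hypothetical
# placeholders chosen only to illustrate the comparison.

SOCIAL_COST_PER_TONNE = 50  # assumed $/tonne CO2; not a figure from the post

def net_value(personal_value_usd: float, co2_tonnes_emitted: float) -> float:
    """Personal value of an action minus its externalized climate cost."""
    return personal_value_usd - co2_tonnes_emitted * SOCIAL_COST_PER_TONNE

# Lifestyle sacrifice: forgo a car trip worth $40 to you, avoiding 0.02 t CO2.
sacrifice = net_value(personal_value_usd=-40, co2_tonnes_emitted=-0.02)

# Donation: give $40 to a charity assumed to avert 1 t CO2 per $10 donated.
donation = net_value(personal_value_usd=-40, co2_tonnes_emitted=-4.0)

print(f"Forgoing the trip: {sacrifice:+.2f} USD net")  # -39.00
print(f"Donating instead:  {donation:+.2f} USD net")   # +160.00
```

Under these assumed figures the sacrifice comes out net negative while the donation comes out strongly positive, which is the shape of comparison the post's heuristic asks individuals to make; the actual sign of any real-world comparison depends entirely on the estimates plugged in.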
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.