I’ve seen the war/EA analogy in a few places before (Brian Tomasik on triage; Scott Alexander’s “All of these Effecting Effective Effectiveness people don’t obsess over efficiency out of bloodlessness. They obsess because the struggle is so desperate, and the resources so few”; Holly Elmore’s post cited in your essay), but I like how your essay explores the analogy more systematically than the others, and also comes to a different conclusion: it argues for common sense morality even in wartime, as a guardrail against abhorrent moral decision-making. I think it pairs well with Holden Karnofsky’s reminder that maximization is perilous:
Most EAs seem to take action by following a formula something like: “Take a job at an organization with an unusually high-impact mission, which it pursues using broadly accepted-by-society means even if its goals are unusual; donate an unusual amount to carefully chosen charities; maybe have some other relatively benign EA-connected habits like veganism or reducetarianism; otherwise, behave largely as I would if I weren’t an EA.”
I’m glad things are this way, and with things as they stand, I am happy to identify as an EA. But I don’t want to lose sight of the fact that EA likely works best with a strong dose of moderation. The core ideas on their own seem perilous, and that’s an ongoing challenge.
And I’m nervous about what I perceive as dynamics in some circles where people seem to “show off” how little moderation they accept—how self-sacrificing, “weird,” extreme, etc. they’re willing to be in the pursuit of EA goals. I think this dynamic is positive at times and fine in moderation, but I do think it risks spiraling into a problem.
One thing that was not clear to me from your essay is the ‘decision procedure’ for when to use common sense morality vs. the wartime/EA lens. I like 80K’s “character, then constraints, then consequences” recipe here: first cultivate good character, then respect constraints (e.g., the rights of others), and then do as much good as you can (although I would still be wary of obsessing too much over the last one, cf. Tyler Alterman’s reflections on balancing it with other wants and needs).
That said, what constitutes good character and which rights are worth respecting are also contextually contingent (on culture, history, etc.), sometimes in abhorrent ways; e.g., Kwame Anthony Appiah’s reminder that beating one’s wife and children was once considered a father’s duty. This raises the question of whether we can ‘future-proof’ our ethics, which forces a closer examination of what common sense morality actually prescribes (as JBentham mentioned upthread). Unfortunately, I don’t have any original insight here despite having grappled with this set of questions for a while; my revealed-preference ethical stance is probably somewhere between the median do-gooder and the median EA as Holden sketched above…
Thanks for the links to those other pieces that address similar issues. I wasn’t aware of most of them and they are suuuper interesting/relevant! Seems like I have some more reading and researching to do.
I think I agree with what you referenced from 80K. I see virtues and good character as the foundation on which you can then build in a more maximizing way. Satisfying certain personal needs and wants also fits into this foundational category. Of course, exactly how one balances these aspects varies greatly from person to person.
And these decisions are highly context-dependent, yes. What I wrote is only a very high-level frame. In practice, it is of course very important to consider which aspects of “common sense morality” we really want to follow, just as it is important to reflect on which personal needs and wants we should prioritize. This is a tricky balancing act that I am constantly trying to master. And social norms are always in flux as well.
For instance, I certainly don’t think it is okay to farm animals in ways that make them suffer unnecessarily, even though it is common practice and might be commonly seen as morally acceptable. This also means that I act in ways that deviate from the norm (i.e., plant-based consumption). But the society around me is also adapting, and some data suggests that a majority of people actually find the way we treat farmed animals abhorrent (they just don’t act on it or rationalize their behavior).
All of this to say: Yes, moral common sense is vague and constantly changing. Yes, we always need to reflect on it and not follow the majority blindly. But I think it is beneficial to find certain core values and virtues to adhere to (and those should be ones where we are confident that they are not harmful overall).