The Case for Honey
When engaged in moral philosophizing, modesty is key. There’s a scene in the movie Enter the Dragon during which Bruce Lee’s character admonishes his pupil: “Don’t think! Feel. It is like a finger pointing away to the moon. Don’t concentrate on the finger or you will miss all that heavenly glory.”
I see a striking parallel between this analogy and the relationship between moral philosophies and morality itself. The moon of “morality” as we know it, that vague amalgam of our infinitely context-dependent intuitions, is nigh-impossible to fully pin down. Our sense of morality shifts unpredictably, jerked in an (apparently) arbitrary fashion by the subtlest changes in circumstance. Indeed, we as a species have not yet found a perfect model for our moral intuitions: they remain too nebulous, mercurial, and finely tuned for us to craft a perfectly fitted model that produces no repugnant conclusions and is universally convincing.
This reality is cause for philosophical modesty. All moral philosophies take their shot at fitting a model to our stubbornly unquantifiable sense of morality, attempting to impose some degree of objectivity on any number of complex and unfamiliar situations. They have long tried to point their adherents in the best possible direction, to provide a base of common knowledge for what is right, what is wrong, and how to react given factors x, y, and z. They are all fingers, pointing at different angles toward a moon that cannot be neatly modeled and will not bend to any attempt to squarely categorize its nature.
I think that some of the more utilitarian-inclined Effective Altruists sometimes forget this: utilitarianism is not an inherently superior philosophy relative to its alternatives; it, too, is a mere pointing finger. Importantly, this is not to say that we cannot prefer the implications of some philosophies to others, or argue that certain belief systems are more conducive to human well-being than others. Nevertheless, as EAs we must admit to ourselves that dogmatically and unsympathetically broadcasting moral superiority over other belief systems is a flimsy recruitment tactic, unlikely to facilitate EA’s transition into a truly prosperous and widespread global organization. Worse than being needlessly alienating and divisive, it is also not utility-maximizing! Given that many skeptics perceive the EA philosophy as robotic and cold, the antidote to this PR problem lies in compassion, sympathy, and a willingness to engage in good-faith discussion with the more plausible moral alternatives to EA.
One of the most important rules of running for political office is that you never attack voters, even those who don’t support you. The EA movement as a whole can take a cue from this maxim. I know of no organization that rose to prominence, influence, and unmitigated success by judging, ignoring, or dismissing the very people whose support it needed to prosper in the first place. Like it or not, if EA is to accomplish its mission with maximum efficacy, it simply must win over many of the countless skeptics who remain unimpressed by what our organization stands for. The path to conversion lies in a willingness to engage with those who hold divergent but nevertheless plausible moral opinions.
The philosophical arrogance that some Effective Altruists are guilty of is deeply off-putting to people who might otherwise express interest in joining EA. You catch more flies with honey than with vinegar. People predictably react against aggressive and patronizing attempts to coax them from their firmly held beliefs: by haughtily espousing our belief that EA constitutes the best and most “moral” approach to important issues, we cheat ourselves out of countless new members who might otherwise have strengthened our organization.
Blatantly and coarsely telling someone their moral intuition is “wrong” is an impressively counterproductive strategy for a whole host of reasons. While it is fine and eminently defensible to prefer EA and its quasi-utilitarian philosophy over other philosophies, as I do, it is rather self-destructive to make the brash claim that its particular moral conclusions are superior and that, if you disagree, you just need to “shut up and multiply.” Whether the statement is true or not (and I believe it is), I can scarcely imagine a more condescending and uninviting approach to attracting new EAs.
Some EAs need to learn some modesty in their philosophizing! There are countless people out there trying desperately to do the right thing, whether by Jesus Christ, Allah, Brahma, or Peter Singer, and they certainly do not appreciate an Effective Altruist bluntly informing them that they are, in fact, being irrational, overly emotional, scope-insensitive, or otherwise misguided in their deeply held moral values. These values often constitute the very bedrock of people’s entire life orientations. Given this reality, some (read: ample) compassion and open-mindedness is required on our part, obviously exempting those belief systems that are flagrantly and irredeemably incompatible with human well-being (e.g., Islamic/Christian fundamentalism, Nazism).
Nevertheless, we stomp on whatever philosophical common ground we may share with those palatable belief systems, however fertile with potential utility, when we greet their moral attitudes with rejection, condescension, or incredulity. Philosophical arrogance isn’t a good look for a person, let alone for a budding organization that still has many, many people left to convince: there are plenty of people who would consider lending their efforts to EA, but only once we as an organization become less patronizing, more sympathetic to reasonably divergent moral frameworks, and much less sure of ourselves as philosophers.