DISCLAIMER: Although this is a criticism of the LW/EA community, I offer it in good faith. I don’t mean to “take down” the community in any way. You can read this as a hypothesis for at least one cause of what some have called EA’s emotions problem. I also offer suggestions on how to address it. Relatedly, I should clarify that the ideals I express (regarding how much one should feel vs how much one should be doing cold reasoning in certain situations) are just that: ideals. They are simplified, generalized recommendations for the average person. Case-by-case recommendations are beyond the scope of this post. (Nor am I qualified to give any!) But obviously, for example, those who are neurodivergent (e.g. have Asperger’s) shouldn’t be demeaned for not conforming to the ideals expressed here. Likewise though, it would be harmful to encourage those who are neurotypical to try to conform to an ideal better suited for someone who is neurodivergent. I do still worry we have “an emotions problem” in this community.
EDIT: Replaced the term “moral schizophrenia” with “internal moral disharmony” since the latter is more accurate and just. Thanks to AllAmericanBreakfast and Matt Goodman for highlighting this.
In case you missed it, amid the fallout from FTX’s collapse, its former CEO and major EA donor Sam Bankman-Fried (SBF) admitted that his talk of ethics was “mostly a front,” describing it as “this dumb game we woke Westerners play where we say all the right shibboleths and everyone likes us,” a game in which the winners decide what gets invested in and what doesn’t. He has since claimed that this was exaggerated venting intended for a friend, not the wider public. But still… yikes.
He also maintains that he did not know Alameda Research (the crypto hedge fund heavily tied to FTX and owned by SBF) was over-leveraged, and that he had no intention of doing anything sketchy like investing customers’ deposits. In an interview yesterday, he generally admitted to negligence but nothing more. Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that it’s saying “all the right shibboleths” that he spoke of as mere means to an end, not the doing of sketchy things.
In what follows I will suggest what might have been going on in SBF’s head, in order to make a much higher confidence comment about the LW/EA community in general. Please don’t read too much into the armchair psychological diagnosis from a complete amateur – that isn’t the point. The point, to lay my cards on the table, is this: virtue ethicists would not be surprised if EAs suffer (in varying degrees) from an internal disharmony between their reasons and their motives at higher rates than the general population. This is a form of cognitive dissonance that can manifest itself in a number of ways, including (I submit) flirts with Machiavellian attitudes towards ethics. And this is not good. To explain this, I first need to lay some groundwork about normative ethics.
Virtue Ethics vs. Deontology vs. Consequentialism
Yet Another Absurdly Brief Introduction to Normative Ethics (YAABINE)
The LW/EA forums are littered with introductions, of varying quality and detail, to the three major families of normative ethical theories. Here is one from only two weeks ago. As such, my rendition of YAABINE will be even briefer than usual, and focuses only on theories of right action. (I encourage checking out the real deal though: here are SEP’s entries on virtue ethics, deontology and consequentialism).
Virtue Ethics (VE): the rightness and wrongness of actions are judged by the character traits at the source of the action. If an action “flows” from a virtue, it is right; from a vice, wrong. The psychological setup (e.g. motivation) of the agent is thus critical for assessing right and wrong. Notably, the correct psychological setup often involves not excessively reasoning: VE is not necessarily inclined towards rationalism. (Much more on this below).
Deontological Ethics (DE): the rightness and wrongness of actions are judged by their accordance with the duties/rules/imperatives that apply to the agent. The most well known form of DE is Kantian Ethics (KE). Something I have yet to see mentioned on LW is that, for Kant, it’s not enough to act merely in accordance with moral imperatives: one’s actions must also result from sound moral reasoning about the imperatives that apply. KE, unlike VE, is very much a rationalist ethics.
Consequentialism: the rightness and wrongness of actions are judged solely by their consequences – their net effect on the amount of value in the world. What that value is, where it is, whether we’re talking about expected or actual effects, direct or indirect – these are all choice points for theorists. As very well put by Alicorn:
“Classic utilitarianism” could go by the longer, more descriptive name “actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act consequentialism”.
Finally, a quick word on theories of intrinsic value (goodness) and how they relate to theories of right action (rightness): conceptually speaking, much recombination is possible. For example, you could explain both goodness and rightness in terms of the virtues, forming a sort of Fundamentalist VE. Or you could explain goodness in terms of human flourishing (eudaimonia), which you in turn use to explain a virtue ethical theory of rightness – by arguing that human excellence (virtue) is partially constitutive of human flourishing. That would form a Eudaimonic VE (a.k.a. Neo-Aristotelian VE). Note that under this theory, a world with maximal human flourishing is judged to be maximally good, but the rightness and wrongness of our actions are not judged based on whether they maximize human flourishing!
Those are standard combinations but, prima facie, there is nothing conceptually incoherent about unorthodox recombinations like a Hedonistic VE (goodness = pleasure, and having virtues is necessary for/constitutive of pleasure), or Eudaimonic Consequentialism (goodness = eudaimonia, and rightness = actions that maximize general eudaimonia). The number of possible positions further balloons as you distinguish more phenomena and try to relate them together. There are, for example, many different readings of “good” and different categories of judgement (e.g. judging whole lives vs states of affairs at given moments in time; judging public/corporate policies vs character traits of individuals; judging any event vs specifically human actions). The normative universe is vast, and things can get complicated fast.
Here I hope to keep things contained to a discussion of right action, but just remember: this only scratches the surface!
LW and EA, echo chambers for Consequentialism
Why bother with YAABINE?
Something nags at me about previous introductions on the LW/EA forums: VE and DE are nearly always reinterpreted to fit within a consequentialist’s worldview. This is unsurprising of course: both LW and EA were founded by consequentialists and have retained their imprint. But that also means these forums are turning into something of an echo chamber on the topic (or so I fear). With this post, I explicitly intend to challenge my consequentialist readers. I’m going to try and do for VE what Alicorn does for DE: demonstrate how virtue ethicists would actually think through a case.
What does that consequentialist co-opting look like? A number have remarked that, on consequentialist grounds, it is generally right to operate as if VE were true (i.e. develop virtuous character traits) or operate as if DE (if not KE) were true (i.e. beware means-ends reasoning, respect more general rules and duties) or a mix of both. In fact the second suggestion has a long heritage: this is basically just Rule Consequentialism.
On the assumption that Consequentialism is true, I generally agree with these remarks. But let’s get something straight: you shouldn’t read these as charitable interpretations of DE and VE or something. There are very real differences and disagreements between the three major families of theory, and it’s an open question regarding who is right. FWIW currently VE has a slim plurality among philosophers, with DE as the runner up. Among ethicists (applied, normative, meta, feminist), it seems DE consistently has the plurality, with VE as runner up. Consequentialism is consistently in third place.
One way to think of the disagreement between the three families is in what facts they take to be explanatorily fundamental – the facts that form the basis for their unifying account of a whole range of normative judgments. Regarding judgments of actions, for DE the fundamental facts are about imperatives; for Consequentialism, consequences; for VE, the character of the agent. Every theory will have something to say about each of these terms – but different theories will take different terms to be fundamental. If it helps, you can roughly categorize these facts based on their location in the causal stream:
DE ultimately judges actions based on facts causally up-stream from the action (e.g. what promises were made?), along with, perhaps, some acausal facts (e.g. what imperatives are analytically possible/impossible for a mind to will coherently?);
VE ultimately judges actions based on facts immediately up-stream (e.g. what psychological facts about the agent explain how they reacted to the situation at hand?);
Consequentialism ultimately judges actions based on down-stream facts (e.g. what was the net effect on utility?).
This is an excessively simplistic and imperfect categorization, but hopefully it gets across the deeper disagreement between the families. Yes, it’s true, they tend to prescribe the same course of action in many scenarios, but they very much disagree on why and how we should pursue said course. And that matters. Such is the disagreement at the heart of this post.
The problem of thoughts too many
Bernard Williams, 20th century philosopher and long-time critic of utilitarianism, proposed the following thought experiment. Suppose you come across two people drowning. As you approach you notice: one is a stranger; the other, your spouse! You only have time to save one of them: who do you save? Repressing any gut impulse they might have, the well-trained utilitarian will at this point calculate (or recall past calculations of) the net effect on utility for each choice, based on their preferred form of utilitarianism and… they will have already failed to live up to the moment. According to Williams, someone who seeks a theoretical justification for the impulse to save the life of a loved one has had “one thought too many.” (Cf. this story about saving two children from an oncoming train: EY is very much giving a calculated justification for an impulse that only an over-thinking consequentialist would question).
Virtue ethicist Michael Stocker develops a similar story, asking us to imagine visiting a sick friend at the hospital. If our motivation for visiting our sick friend is that we think doing so will maximize the general good, (or best obeys the rules most conducive to the general good, or best respects our duties), then we are morally ugly in some way. If the roles were reversed, it would likely hurt to find out our friend came to visit us not because they care about us (because they felt a pit in their stomach when they heard we were hospitalized) but because they believe they are morally obligated (they consulted moral theory, reasoned about the facts, and determined this was so). Here, as before, there seems to be too much thinking getting in the way of (or even replacing) the correct motivation for acting as one should.
Note how anti-rationalist this is: part of the point here is that the thinking itself can be ugly. According to VE, in both these stories there should be little to no “slow thinking” going on at all – it is right for your “fast thinking,” your heuristics, to take the reins. Many virtue ethicists liken becoming virtuous to training one’s moral vision – learning to perceive an action as right, not to reason that it is right. Of course cold calculated reasoning has its place, and many situations call for it. But there are many more in which being calculating is wrong.
(If your heuristic is a consciously invoked utilitarian/deontological rule that you’ve passionately pledged yourself to, then the ugliness comes from the fact that your affect is misplaced – you care about the rule, when you should be caring about your friend. Just like cold reasoning, impassioned respect for procedure and duty can be appropriate at times; most times it amounts to rule-worship.)
Internal Moral Disharmony
In Stocker’s terms, a theory brings on “moral schizophrenia” when it produces disharmony between our reasons/justifications for acting and our motivations to act. Since this term is outdated and misleading, let’s call this malady of the spirit “internal moral disharmony.” As Stocker describes it (p454):
An extreme form of [this disharmony] is characterized, on the one hand, by being moved to do what one believes bad, harmful, ugly, abasing; on the other, by being disgusted, horrified, dismayed by what one wants to do. Perhaps such cases are rare. But a more modest [disharmony] between reason and motive is not, as can be seen in many examples of weakness of the will, indecisiveness, guilt, shame, self-deception, rationalization, and annoyance with oneself.
When our reasons (or love of rules) fully displace the right motivations to act, the disharmony is resolved but we get the aforementioned ugliness (in consequentialist terms: we do our friend/spouse harm by not actually caring about them). We become walking utility calculators (or rule-worshipers). Most of us, I would guess, are not so far gone, but instead struggle with this disharmony. It manifests itself as a sort of cognitive dissonance: we initially have the correct motivations to act, but thoughts too many get in the way, thoughts we would prefer not to have. Stocker’s claim is that Consequentialism is prone to producing this disharmony. Consequentialism has us get too accustomed to ethical analysis, to the point of it running counter to our first (and good) impulses, causing us to engage in slow thinking automatically even when we would rather not. Resolving this dissonance is difficult – like trying to stop thinking about pink elephants. The fact that we have this dissonance in our head makes us less than a paragon of virtue, but better than the walking utility calculator/rule-worshiper.
Besides being a flaw in our moral integrity, this dissonance is also harmful to ourselves. (Which seems to lead hedonistic consequentialists to conclude we should be the walking utility calculators/rule-worshipers!) Too much thinking about a choice – analyzing the options along more dimensions, weighing more considerations for and against each, increasing the number of options considered – will dampen one’s emotional attachment to the option chosen. Most of us have felt this before: too much back and forth on what to order at a restaurant leaves you less satisfied with whatever you eventually choose. Too much discussion about where to go, what to do, leaves everyone less satisfied with whatever is finally chosen. A number of publications in psychology confirm and elaborate on this readily apparent phenomenon (most famously Schwartz’s The Paradox of Choice). (Credit for this list of references goes to Eva Illouz, who finds evidence of this phenomenon in the way we choose our romantic partners today, especially men).
Regularly applying ethical analysis to every little thing (which consequentialists are prone to do!) can be especially bad and dangerous. When ethical considerations and choices start to leave you cold, you will struggle to find the motivation to do what you judge is right, making you weak-willed (a “less effective consequentialist” if you prefer). Or you might continue to go through the right motions, but it will be mechanical, without joy or passion or fulfillment. This is harm in itself, to oneself. But moreover, it leaves you vulnerable: this coldness is a short distance from the frozen wastes of cynicism and nihilism. When ethics looks like “just an optimization problem” to you, it can quickly start to look like “just a game.” Making careful analysis your first instinct means learning to repress your gut sense of what is right and wrong; once you do that, right and wrong might start to feel less important, at which point it becomes harder to hang onto the normative reality they structure. In the limit, it might lead one to completely confuse the two.
Given his penchant for consequentialist reasoning (and given that being colder is associated with being less risk-averse, making one a riskier gambler and more successful investor), it would not surprise me to learn that SBF has slipped into that coldness at times. This profile piece suggests Will MacAskill has felt its touch. J.S. Mill, notable consequentialist, definitely suffered it. There are symptoms of it all over this post from Wei Dai and the ensuing comment thread (see my pushback here). In effect, much of EY’s sequence on morality encourages one to suppress affect and become a walking utility calculator or rule-worshiper (whether he intends this or not) – exactly what leads to this coldness. In short, I fear it is widespread in this community.
EDIT: The term “widespread” is vague – I should have been clearer. I do not suspect this coldness afflicts the majority of LW/EA people. Something more in the neighborhood of 5–20%. Since it’s not easy to measure this coldness, I have given a more concrete falsifiable prediction here. None of this is to say that, on net, the LW/EA community has a negative impact on people’s moral character. On the contrary, on net, I’m sure it’s positive. But if there is a problematic trend in the community (and if it had any role to play in the attitudes of certain high profile EAs towards ethics), I would hope the community takes steps to curb that trend.
The danger of overthinking things is of course general, with those who are “brainier” being especially susceptible. Given that this is a rationalist community – a community that encourages braininess – it would be no surprise to find it here at higher rates than in the general population. However, I am surprised and disappointed that being brainy hasn’t been more visibly flagged as a risk factor! Let it be known: braininess comes with its own hazards (e.g. rationalization). This coldness is another one of them. LW should come with a warning label on it!
A problem for everybody...
If overthinking things is a very general problem, that suggests thoughts too many (roughly, “the tendency to overthink things in ethical situations”) is also general and not specific to Consequentialism. And indeed, VE can suffer it. In its simplest articulation, VE tells us to “do as the virtuous agent would do,” but telling your sick friend that you came to visit “because this is what the virtuous agent would do” is no better than the consequentialist’s response! You should visit your friend because you’re worried for them, full stop. Similarly, if someone was truly brave and truly loved their spouse, they would dive in to save them from drowning (instead of the stranger) without a second thought.
Roughly, a theory is said to be self-effacing when the justification it provides for the rightness of an action is also recognized by the theory as being the wrong motivation for taking that action. Arguably, theories can avoid causing internal disharmony at the cost of being self-effacing. When Stocker first exposed self-effacement in Consequentialism and DE, it was viewed as something of a bug. But in some sense, it might actually be a feature: if there is no situation in which your theory recommends you stop consulting theory, then there is something wrong with that theory – it is not accounting for the realities of human psychology and the wrongness of thoughts too many. It’s unsurprising that self-effacement should show up in nearly every plausible theory of normative ethics – because theory tends to involve a lot of thinking.
...but especially (casual) consequentialists.
All that said, consequentialists should be especially wary of developing thoughts too many, for a few reasons:
Culture: the culture surrounding Consequentialism is very much one that encourages adopting the mindset of a maximizer, an optimizing number cruncher, someone who applies decision theory to every aspect of one’s life. Consequentialism and rationalism share a close history after all. In all things morality related, I advise rationalists to tone down these attitudes (or at least flag them as hazardous), especially around less sophisticated, more casual audiences.
The theory’s core message: even though most consequentialist philosophers advise against using an act-consequentialist decision procedure, Act Consequentialism (“the right action = the action which results in the most good”) is still the slogan. Analyzing, calculating, optimizing and maximizing appear front and center in the theory. It seems to encourage the culture mentioned above from the outset. It’s only many observations later that sophisticated consequentialists will note that the best way for humans to actually maximize utility is to operate as if VE or DE were true (i.e. by developing character traits or respecting rules that tend to maximize utility). Fewer still notice the ugliness of thoughts too many (and rule-worship). Advocates of Consequentialism should do two things to guard their converts against, if nothing else, the cognitive dissonance of internal moral disharmony:
At a minimum, converts should be made aware of the facts about human psychology (see §2.1 above) at the heart of this dissonance: these facts should be highlighted aggressively. And early, lest the dissonance set in before the reader develops sophistication.
Assuming you embrace self-effacement, connect the dots for your readers: highlight how Consequentialism self-effaces – where and how often it recommends that one stop considering Consequentialism’s theoretical justifications for one’s actions.
Virtue ethicists, for their part, are known for forestalling self-effacement by just not giving a theory in the first place – by resisting the demand to give a unifying account of a broad range of normative judgments about actions. They tend to prefer taking things case by case, insistently pointing to specific details in specific examples and just saying that’s what was key in that situation. They prefer studying the specific actions of virtuous exemplars and vicious foils. The formulation “the right action is the one the virtuous agent would take” is always reluctantly given, as more of a sketch of a theory than something we should think too hard about. This can make them frustrating theorists, but responsible writers (protecting you from developing thoughts too many), and decent moral guides. Excellent moral guides do less lecturing on moral theory, and more leading by example. Virtue ethicists like to sit somewhere in-between: they like lecturing on examples.
Note that to proscribe consulting theory is not to prescribe pretense. Pretending to care (answering your friend “because I was worried!” when in fact your motivation was to maximize the general good) is just as ugly and will exacerbate the self-harm. That said, theories can recognize that, under certain circumstances, “fake it ’til you make it” is the best policy available to an agent. Such might be the case for someone who was not fortunate enough to have good role models in their impressionable youth, and whose friends cannot/will not help them curb a serious vice. Conscious impersonation of the virtuous in an attempt to reshape their character might sadly be this person’s best option for turning their life around. But note that, even when this is the case, success is never fully achieved if the pretense doesn’t eventually stop being pretense – if the pretender doesn’t eventually win themselves over, displacing the motivation for pretense with the right motivation to act (e.g. a direct concern for the sick friend).
Prevention and Cure
Adopting VE vs. sophisticating Consequentialism
Aware of thoughts too many, what should one do?
Well, you could embrace VE. Draw on the practical wisdom encoded in the rich vocabulary of virtue and vice, emphatically ending your reason-giving on specific details that are morally salient to the case at hand (e.g. “Because ignoring Sal would have been callous! Haven’t you seen how lonely they’ve been lately?”). Don’t fuss too much with integrating and justifying each instance of normative judgment with an over-arching system of principles: morally speaking, you’re not really on the line for it, and it’s a hazardous task. If you are really so inclined, go ahead, but be sure to hang up those theoretical justifications when you leave the philosophy room, or the conscious self-improvement room. Sure, ignoring this sort of theoretical integration might make you less morally consistent, but consistency is just one virtue: part of the lesson here is that, in practice, when humans consciously optimize very hard for moral consistency they typically end up making unacceptable trade-offs in other virtues. Rationalists seem especially prone to over-emphasizing consistency.
Alternatively, you could further sophisticate your Consequentialism. With some contortion, the lessons here can be folded in. One could read the above as just more reason, by consequentialist lights, to operate as if VE were true: adopt the virtuous agent’s decision procedure in order to avoid the harms resulting from thoughts too many and internal moral disharmony. But remark how thorough this co-opting strategy must now be: consequentialists won’t avoid those harms with mere impersonation of the virtuous agent. Adopting the virtuous agent’s decision procedure means completely winning yourself over, not just consciously reading off the script. Again, pretense that remains pretense not only fails to avoid thoughts too many but probably worsens the cognitive dissonance!
If you succeed in truly winning yourself over though, how much of your Consequentialism will remain? If you still engage in it, you will keep your consequentialist reasoning in check. Maybe you’ll reserve it for moments of self-reflection, insofar as self-reflection is still needed to regulate and maintain virtue. Or you might engage in it in the philosophy room (wary that spending too much time in there is hazardous, engendering thoughts too many). At some point though, you might find it was your Consequentialism that got co-opted by VE: if you are very successful, it will look more like your poor past self had to use consequentialist reasoning as a pretext for acquiring virtue, something you now regard as having intrinsic value, something worth pursuing for its own sake… Seems more honest to me if you give up the game now and endorse VE. Probably more effective too.
Anyway, if you go in for the co-opt, don’t forget, part of the lesson is to be mindful of the facts of human psychology. Invented virtues like the virtue of always-doing-what-the-best-consequentialist-would-do, besides being ad hoc and convenient for coddling one’s Consequentialism, are circular and completely miss the point. Trying to learn such a virtue just reduces to trying to become “the most consequential consequentialist.” But the question for consequentialists is precisely that: what character traits does the most consequential consequentialist tend to have? The minimum takeaway of this post: they don’t engage in consequentialist reasoning all the time!
Consequentialists might consider reading the traditional list of virtues as a time-tested catalog of the most valuable character traits (by consequentialist lights) that are attainable for humans. (Though see “objection h” here, for some complications on that picture).
Whether we go with VE or Consequentialism, it seems we need to tap into whatever self-discipline (and self-disciplining tools) we have and begin a virtuous cycle of good habit formation. Just remember that chanting creeds to yourself and faking it ’til you make it aren’t your only options! Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness, for providing us role models to take inspiration from, flawed characters to learn from, villains to revile. It’s critical to see what honesty, dishonesty, compassion, callousness, courage, cowardice etc. look like in detailed, complex situations. Just knowing their dictionary definitions and repeating to yourself that you will be those things won’t get you very far. To really get familiar with them, you need to encounter many examples of them, within varied normative contexts. Again, the aim is to train a sort of moral perception – the ability to recognize, in the heat of the moment, right from wrong (and to a limited extent, why it is so), and react accordingly. In that sense, VE sees developing one’s moral character as very similar (even intertwined with) developing one’s aesthetic taste. Many of the virtues are gut reactions after all – the good ones.
M. Stocker, “The Schizophrenia of Modern Ethical Theories,” Journal of Philosophy, 73(14) (1976), 453–66.
If we take Hedonistic Consequentialism (HC) literally, the morally ideal agent is one which outwardly pretends perfectly to care (when interacting with agents that care about being cared about) but inwardly always optimizes as rationally as possible to maximize hedons, either by directly trying to calculate the hedon-maximizing action sequence (assuming the agent’s compute is not very constrained) or by invoking the rules that tend to maximize hedons (assuming the compute available to the agent is highly constrained). In other words, according to HC the ideal agent seems to be a sociopathic con artist obsessed with maximizing hedons (or obsessed with obeying the rules that tend to maximize hedons). No doubt advocates of HC have something clever to say in response, but my point stands: taking HC too literally (as SBF may have?) will turn you into a hedon monster.
G. Klein, Sources of Power: How People Make Decisions (Cambridge, MA: MIT Press, 1999).
T.D. Wilson and J.W. Schooler, “Thinking Too Much: Introspection Can Reduce the Quality of Preferences and Decisions,” Journal of Personality and Social Psychology 60(2) (1991), 181–92.
C. Ofir and I. Simonson, “In Search of Negative Customer Feedback: The Effect of Expecting to Evaluate on Satisfaction Evaluations,” Journal of Marketing Research, 38(2) (2001), 170–82.
R. Dhar, “Consumer Preference for a No-Choice Option,” Journal of Consumer Research, 24(2) (1997), 215–31.
D. Kuksov and M. Villas-Boas, “When More Alternatives Lead to Less Choice,” Marketing Science, 29(3) (2010), 507–24.
H. Simon, “Bounded Rationality in Social Science: Today and Tomorrow,” Mind & Society, 1(1) (2000), 25–39.
B. Schwartz, The Paradox of Choice: Why More is Less (New York: Harper Collins, 2005).
E. Illouz, Why love hurts: A sociological explanation (2012), ch. 3.
We know from psychology that humans struggle with indecision when they lack emotions to help motivate a choice. See A. R. Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (1994).
B. Shiv, G. Loewenstein, A. Bechara, H. Damasio and A. R. Damasio, Investment Behavior and the Negative Side of Emotion. Psychological Science, 16(6) (2005), 435–439. http://www.jstor.org/stable/40064245
This is an empirical question for psychologists: in practice, does the exercise of integrating your actions and judgments into a unifying theoretical account actually correlate with being more morally consistent (e.g. in the way you treat others)? I’m not sure. Insofar as brainier people are, despite any rationalist convictions they might have, particularly prone to engage in certain forms of irrational behaviour (e.g. rationalization), I’m mildly doubtful.