Thank you for writing this. For a while, I have been thinking of writing a post with many similar themes and maybe I still will at some point. But this post fills a large hole.
As is obligatory for me, I must mention Derek Parfit, who tends to have already described well many of the ideas that resurface later.
In Reasons and Persons, Part 1 (especially Section 17), Derek Parfit argues that good utilitarians should self-efface their utilitarianism. This is because people tend to have motivated reasoning, and tend to be wrong. Under utilitarianism, it is possible to justify nearly anything, provided your epistemics are even moderately bad (your epistemics would have to be very bad to justify murder under deontological theories that prohibit murder; you would have to claim that something was not in fact murder at all). Parfit suggests adopting whatever moral system seems to be most likely to produce the highest utility when followed by that person in the long run (perhaps some theory somewhat like virtue ethics). This wasn’t an original idea; Mill said similar things.
One way to self-efface your utilitarianism would be to say “yeah, I know, it makes sense under utilitarianism for me to keep my promises” (or whatever it may be). Parfit suggests that this may not be enough, because deep down you still believe in utilitarianism; it will come creeping through (if not in you, then in some proportion of people who self-efface this way). He says that you may instead need to forget that you ever believed in utilitarianism, even if you think it’s correct. You need to believe a lie, and perhaps even convince everyone else of this lie.
He also raises an interesting caveat: what if the generally agreed-upon virtues or rules are no longer those with the highest expected utility? If nobody believed in utilitarianism, why would they ever be changed? He responds:
This suggests that the most that could be true is that C [consequentialism] is partly self-effacing. It might be better if most people caused themselves to believe some other theory, by some process of self-deception that, to succeed, must also be forgotten. But, as a precaution, a few people should continue to believe C, and should keep convincing evidence about this self-deception. These people need not live in Government House, or have any other special status. If things went well, the few would do nothing. But if the moral theory believed by most did become disastrous, the few could then produce their evidence. When most people learnt that their moral beliefs were the result of self-deception, this would undermine these beliefs, and prevent the disaster.
This wasn’t an original idea either; Parfit here is making a reference to Sidgwick’s “Government House utilitarianism,” which seemed to suggest only people in power should believe utilitarianism but not spread it. Parfit suggests in passing that the utilitarians don’t need to be the most powerful ones (and indeed Sidgwick’s assertion may have been motivated by his own high position).
Sometimes I think that this is the purpose of EA: to attempt to be the “few people” who believe consequentialism in a world where commonsense morality really does need to change because the world is changing rapidly. But we should help shift commonsense morality in a better direction, not spread utilitarianism.
Maybe utilitarianism is an info hazard not worth spreading. If something is worth spreading, I suspect it’s virtues.
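Which virtues? Some have suggestions.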
“Parfit here is making a reference to Sidgwick’s ‘Government House utilitarianism,’ which seemed to suggest only people in power should believe utilitarianism but not spread it.”
This may be clear to you, and isn’t important for the main point of your comment, but I think that ‘Government House utilitarianism’ is a term coined by Bernard Williams in order to refer to this aspect of Sidgwick’s thought while also alluding to what Williams viewed as an objectionable feature of it.
Sidgwick himself, in The Methods of Ethics, referred to the issue as esoteric morality (pp. 489–490, emphasis mine):
the Utilitarian should consider carefully the extent to which his advice or example are likely to influence persons to whom they would be dangerous: and it is evident that the result of this consideration may depend largely on the degree of publicity which he gives to either advice or example. Thus, on Utilitarian principles, it may be right to do and privately recommend, under certain circumstances, what it would not be right to advocate openly; it may be right to teach openly to one set of persons what it would be wrong to teach to others; it may be conceivably right to do, if it can be done with comparative secrecy, what it would be wrong to do in the face of the world; and even, if perfect secrecy can be reasonably expected, what it would be wrong to recommend by private advice or example. These conclusions are all of a paradoxical character: there is no doubt that the moral consciousness of a plain man broadly repudiates the general notion of an esoteric morality, differing from that popularly taught; and it would be commonly agreed that an action which would be bad if done openly is not rendered good by secrecy. We may observe, however, that there are strong utilitarian reasons for maintaining generally this latter common opinion [...]. Thus the Utilitarian conclusion, carefully stated, would seem to be this; that the opinion that secrecy may render an action right which would not otherwise be so should itself be kept comparatively secret; and similarly it seems expedient that the doctrine that esoteric morality is expedient should itself be kept esoteric. Or if this concealment be difficult to maintain, it may be desirable that Common Sense should repudiate the doctrines which it is expedient to confine to an enlightened few. And thus a Utilitarian may reasonably desire, on Utilitarian principles, that some of his conclusions should be rejected by mankind generally; or even that the vulgar should keep aloof from his system as a whole, in so far as the inevitable indefiniteness and complexity of its calculations render it likely to lead to bad results in their hands.
In his Henry Sidgwick Memorial Lecture on 18 February 1982 (or rather the version of it included in Williams’s posthumously published essay collection The Sense of the Past), after quoting roughly the above passage from Sidgwick, Williams says:
On this kind of account, Utilitarianism emerges as the morality of an élite, and the distinction between theory and practice determines a class of theorists distinct from other persons, theorists in whose hands the truth of the Utilitarian justification of non-Utilitarian dispositions will be responsibly deployed. This outlook accords well enough with the important colonial origins of Utilitarianism. This version may be called ‘Government House Utilitarianism’. It only partly deals with the problem, since it is not generally true, and it was not indeed true of Sidgwick, that Utilitarians of this type, even though they are theorists, are prepared themselves to do without the useful dispositions altogether. So they still have some problem of reconciling the two consciousnesses in their own persons—even though the vulgar are relieved of that problem, since they are not burdened with the full consciousness of the Utilitarian justification. Moreover, Government House Utilitarianism is unlikely, at least in any very overt form, to commend itself today.
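There has since been the occasional paper mentioning or commenting on the issue, including a defense of esoteric morality by Katarzyna De Lazari-Radek and Peter Singer (2010).
Thanks for the background on esoteric morality!
Yes, I perhaps should have been more clear that “Government House” was not Sidgwick’s term, but a somewhat derogatory term levied against him.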
I agree it may be difficult for a utilitarian to fully deceive themselves into giving up their utilitarianism. But here’s an option that might be more feasible: be uncertain about your utilitarianism (you probably already are, and if you aren’t, you should be), and act according to a theory that (1) utilitarianism recommends you act according to, and (2) you find independently at least somewhat plausible. This could be a traditional moral theory, or it might even be the result of the moral uncertainty calculation itself.
“Sometimes I think that this is the purpose of EA: to attempt to be the ‘few people’ who believe consequentialism in a world where commonsense morality really does need to change because the world is changing rapidly. But we should help shift commonsense morality in a better direction, not spread utilitarianism.”
Very interesting perspective and comment in general, thanks for sharing!
“utilitarians should self-efface their utilitarianism”
“Parfit suggests adopting whatever moral system seems to be most likely to produce the highest utility”
“you may instead need to forget that you ever believed in utilitarianism”
This sounds plausible: you orient yourself towards the good, backpropagate over time how things play out, and thereby learn which systems and policies are reliable and truly produce good results (in the context and world you find yourself in). This is also exactly what has played out in my own development: by orienting toward what produces good consequences and understanding how uncertain the world is (and how easily I fooled myself by saying I was doing the thing with the best consequences when I didn’t), I arrived at virtue ethics myself.
“For a while, I have been thinking of writing a post with many similar themes and maybe I still will at some point.”
I would read it with joy and endorse a full post being devoted to this topic (happy to read drafts and provide thoughts).