Please May I Have Reading Suggestions on Consistency in Ethical Frameworks
I have just started a six-month residency at Newspeak House, a political technology college in London. It's going to be a period of upskilling, networking, and research. I'd also like to find an ethical career. I intend to research/upskill as follows:
Consistency in ethical systems
I hypothesise that internal consistency and agreement with our deepest moral intuitions are the two most important features of any ethical system. I'd like to hear suggestions of any other necessary and sufficient characteristics of a good ethical system. Does anyone have suggestions for books to read or thoughts to consider? A corollary is that bad ethical systems are those which are inconsistent with themselves or with our moral intuitions. Does anyone think they have a solid counterexample?
I am looking forward to trying to understand other people’s ethical systems. Why do people make the decisions they do? What makes people change their minds? What allows people to ignore conflicting claims in their own beliefs?
A lot of rational, impactful people flow through Newspeak House so I’m also curious about their criticism of EA, since:
If we are wrong about this movement we should want to move to something better
It is good for us to understand our flaws so we can grow
It is good to learn how we can convince people to join EA, if it is the best choice for the future of consciousness
Are there ways people perceive EA which are skin-deep but nonetheless turn people off? For example, a friend said, "I don't think earning to give is good advice", even though most EAs today would agree with them.
Patterns in emerging communities of practice
This is Newspeak House's main interest: learning about growing communities and building a library of best practices. In London, many growing organisations are duplicating the same effort. By meeting the organisers and observing common patterns, Newspeak Fellows can learn these commonalities and share them, helping organisations grow faster. There is a risk that making all organisations more effective will also empower bad ones, but most people are trying to do good things, and if rationality can be strengthened alongside this, it will create a better ecosystem for growing organisations. I intend to feed back here anything I think will be useful to EA, since I hope some of what I'll be learning will be new to you all.
Machine learning
All hail our AI overlords. #Sarcasm.
Conclusion
I'm interested to hear your thoughts and suggestions. I would like to use this time well and open my future plans up to your reasoned, empathetic criticism.
Also, I'm looking for a bit of work, so if anyone needs any web dev done, please get in touch. I intend to come to some EA meetups, so see you there. Hope you are well. Thanks for reading.
This is a pretty standard view in philosophical ethics: reflective equilibrium.
For a somewhat opposed approach, you might examine moral particularism (as opposed to moral generalism), which roughly holds that we should make moral judgements about particular cases without (necessarily) applying moral principles. So while the particularist might care about coherence in some sense (when responding to moral reasons), they needn't be concerned with ensuring coherence between moral principles, or between principles and our judgements about cases. You might separately wonder how much weight should be given to our judgements about particular cases versus our judgements about general principles, on a spectrum from hyper-particularism to hyper-methodism.
In terms of other characteristics of a good ethical system, I think it’s worth considering that coherence doesn’t necessarily get you very far. It seems possible, in principle, to have coherent views which are very bad (of course, this is controversial, and may depend in part on empirical facts about human moral psychology, alongside conceptual truths about morality). One might think that one needs an appropriate correspondence between one’s (initial) moral views and the moral facts. Separately, one might think that it is more important to cultivate appropriate kinds of moral dispositions than to have coherent views.
Related to the last point, there is a long tradition, especially associated with Nietzsche, of viewing ethical theorising (and in particular attempts to reason about morality) sceptically, according to which moral reasoning is more often rationalisation of more dubious impulses. In that case, again, one might be less concerned with trying to make one's moral views coherent and more concerned with applying some other kind of procedure (e.g. a Critical or Negative one).
There is a lot of empirical moral psychology on these questions. I'm not sure specifically what you're interested in; otherwise I would be able to make more specific suggestions.
I think more applied messaging work on the reception of EA and on receptivity to different messages would also be valuable for exploring this. It would likely help reduce the risks EAs run when engaging in outreach or conducting activities that will be perceived a certain way by the world.
Could this be considered similar to the bias/variance tradeoff in machine learning?
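If the analogy holds, one way to run it: fitting our case-by-case intuitions closely plays the role of reducing bias, while demanding a simple, internally consistent theory plays the role of reducing variance. For reference, here is the standard decomposition the ML tradeoff comes from, under the usual assumptions (squared-error loss, data generated as $y = f(x) + \varepsilon$ with zero-mean noise of variance $\sigma^2$):

$$
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible noise}}
$$

On this reading, a theory tuned to match every intuition (low bias) risks being ad hoc and generalising badly to new cases (high variance), while a simple, highly consistent theory risks the opposite failure.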
David Moss mentioned a “long tradition of viewing ethical theorising (and in particular attempts to reason about morality) sceptically.” Aside from Nietzsche, another very well-known proponent of this tradition is Bernard Williams. Take a look at his page in the Stanford Encyclopedia of Philosophy, and if it looks promising check out his book Ethics and the Limits of Philosophy. You might also check out his essays “Ethical Consistency” (which I haven’t read; in his essay collection Problems of the Self) and “Conflicts of Values” (in Moral Luck). There are probably lots of other essays of his that are relevant that I just don’t know about. Another essay you might read is Steven Lukes’ “Making Sense of Moral Conflict” in his book Moral Conflict and Politics. On the question of whether there can ever be impossible moral demands (that is, situations where all of the available options are morally wrong, potentially because of conflicting moral requirements), one recent book (which I haven’t read, but sounds good) is Lisa Tessman’s Moral Failure: On the Impossible Demands of Morality (see also the SEP article here). Don Loeb has an essay called “Moral Incoherentism,” which despite its title seems to deal with something slightly different than what you’re talking about, but might still be of interest.
The piece that comes the closest to speaking directly to what you’re talking about here, that I know of, is Richard Ngo’s blog post “Arguments for Moral Indefinability”. He also has a post on “realism about rationality” which is probably also related.
On “consistency with our intuitions,” a book to check out might be Michael Huemer’s Ethical Intuitionism. And of course the SEP article on ethical intuitionism. Though of course intuitionism isn’t the only metaethical theory that takes consistency with our intuitions as a criterion; David Moss mentioned reflective equilibrium—and I definitely second his recommendation to look into this further—and Constructivism also has some of this flavor, for instance. Also check out this paper on Moorean arguments in ethics (“Moorean arguments” in reference to G.E. Moore’s famous “here is one hand” argument).
David Moss also mentioned “hyper-methodism and hyper-particularism.” Another paper that touches on that distinction, and on Moorean arguments (though not specifically in ethics) is Thomas Kelly’s “Moorean Facts and Belief Revision.”
I'm not sure that internal consistency should be the highest priority. If it is, that implies constraints on the applicability of a moral theory (i.e. some questions will be undecidable). That may be fine; just be aware of the tradeoff.
Impossibility theorems are pretty common in mathematics. Arrow's Impossibility Theorem will apply to many ethical frameworks, and where it doesn't, other impossibility theorems (Gödel, Gibbard, Holmström) are likely to apply. If nothing else, reading up on this class of theorems may be interesting (a concrete illustration is sketched below).
To me, the most relevant of these impossibility theorems is the Arrhenius paradox (relevant to population ethics). Unfortunately, I don’t know of any good public explanation of it.
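To make the Arrow-style failure concrete, here is a minimal, illustrative Python sketch (my own, not from any of the sources above) of a Condorcet cycle: three perfectly transitive individual rankings whose pairwise-majority aggregate is cyclic. Arrow's theorem generalises this kind of breakdown to any aggregation rule satisfying a few natural axioms.

```python
# Three voters with transitive preference orders over candidates A, B, C.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y, ballots):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"Majority prefers {x} to {y}: {majority_prefers(x, y, ballots)}")
# All three lines print True: the group 'prefers' A to B, B to C,
# and C to A, even though every individual ranking is transitive.
```

The relevance to ethical frameworks: each component view can be internally consistent while the rule for combining them (across people, or across conflicting principles) yields an inconsistent whole.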