Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
Richard Y Chappell 🔸
Deontologists Shouldn't Vote*
The Curse of Deontology
Limiting Reason
Moral Theories Lack Confidence
#25 - Richard Chappell: effective altruism, normativity and moral realism
I wonder whether CEA or someone could fruitfully run (and share the results of) an anonymous survey of some suitably knowledgeable and diverse group of EA insiders, regarding their confidence in various "EA adjacent" orgs?
He expresses similar views in his recent interview with Peter Singer:
RUTGER: I see myself as a pluralist. It's fine to rely on the full spectrum of human emotions and motivations. Humans are a mixed bag, right? So, we are partially motivated sometimes by things such as compassion, empathy, and altruism, which is wonderful. But we can't solely rely on that to make this world a wildly better place.
Peter, you're obviously the founder of the Effective Altruism movement, a movement that I admire. At the same time, though, I feel it's a bit limited in its reach because many of the effective altruists I've spoken to are a bit strange and weird. They're mainly motivated by this yearning to do good and help others. They are born altruists. A lot of them became vegan when they were very young. Many of them reacted instantly when they read your essay, Famine, Affluence and Morality, and I think what happened in the years around 2010 is that these people discovered one another on social media, and they realised, "Hey, I'm not alone." But they've always been quite weird, which is fine, don't get me wrong. I'm happy for them to do their work, but at the same time, I thought, perhaps there's also a place for a broader movement for more "neurotypical people" that relies on other sources of motivation.
If it was so straightforwardly irrational (dare I say it—insensible), Le Guin would presumably never have written the story in the first place!
This is bad reasoning. People vary radically in their ability to recognize irrationality (of various sorts). In the same way that we shouldn't be surprised if a popular story involves mathematical assumptions that are obviously incoherent to a mathematician, we shouldn't be surprised if a popular story involves normative assumptions that others can recognize as obviously wrong. (Consider how Gone with the Wind glorifies Confederate slavery, etc.)
It's a basic and undeniable fact of life that people are swayed by bad reasoning all the time (e.g. when it is emotionally compelling, some interests are initially more salient to us than others, etc.).
You have your intuitions and I have mine—we can each say they're obvious to us and it gets us no further, surely?
Correct; you are not my target audience. I'm responding here because you seemed to think that there was something wrong with my post because it took for granted something that you happen not to accept. I'm trying to explain why that's an absurd standard. Plenty of others could find what I wrote both accurate and illuminating. It doesn't have to convince you (or any other particular individual) in order to be epistemically valuable to the broader community.
If you find that a post starts from philosophical assumptions that you reject, I think the reasonable options available to you are:
(1) Engage in a first-order dispute, explaining why you think different assumptions are more likely to be true; or
(2) Ignore it and move on.
I do not think it is reasonable to engage in ~~silencing~~ procedural criticism, claiming that nobody should post things (including claims about what they take to be obvious) that you happen to disagree with. [Update: struck through a word that was somewhat too strong. But "not the sort of thing I usually expect to find on the forum" implicates more than just "I happen to disagree with this," and something closer to "you should not have written this."]
To be clear: the view I argued against was not "pets have net negative lives," but rather, "pets ought not to exist even if they have net positive lives, because we violate their rights by owning/controlling them." (Beneficentrism makes no empirical claims about whether pets have positive or negative lives on net, so it would make no sense to interpret me as suggesting that it supports any such empirical claim.)
It's not "circular reasoning" to note that plausible implications are a count in favor of a theory. That's normal philosophical reasoning—reflective equilibrium. (Though we can distinguish "sensible-sounding" from actually sensible. Not everything that sounds sensible at first glance will prove to be so on further reflection. But you'd need to provide some argument to undermine the claim; it isn't inherently objectionable to pass judgment on what is or isn't sensible, so objecting to that argumentative structure is really odd.)
I think it's very strange to say that a premise that doesn't feel obvious to you "is not the sort of thing [you] usually expect to find on the forum." (Especially when the premise in question would seem obvious common sense to, like, 99% of people.)
If an analogy helps, imagine a post where someone points out that commonsense requires us to reject SBF-style "double or nothing" existence gambles, and that this is a good reason to like some particular anti-fanatical decision theory. One may of course disagree with the reasoning, but I think it would be very strange for a bullet-biting Benthamite to object that this invocation of common sense was "not the sort of thing I usually expect to find on the forum." (If true, that would suggest that their views were not being challenged enough!)
(I also don't think it would be a norm violation to, say, argue that naive instrumentalism is a kind of "philosophical pathology" that people should try to build up some memetic resistance against. Or if it is, I'd want to question that norm. It's important to be able to honestly discuss when we think philosophical views are deeply harmful, and while one generally wants to encourage "generous" engagement with alternative views, an indiscriminate demand for universal generosity would make it impossible to frankly discuss the exceptions. We should be respectful to individual interlocutors, but it's just not true that every view warrants respect. An important part of the open exchange of ideas is openness to the question of which views are, and which are not, respectable.)
You think it's a norm violation for me to say that it's "sensible" to allow happy pets to exist? Or, more abstractly, that it's good for a theory to have sensible implications?
Sure, in principle. (Though I'd use a different term, like "humane farms", to contrast with the awful conditions on what we call "factory farms".) The only question is whether second-order effects from accepting such a norm might generally make it harder for people to take animal interests sufficiently seriously—see John & Sebo (2020).
The same logic would, of course, suggest there's no intrinsic objection to humanely farming extra humans for their organs, etc. (But I think it's clearly good for us to be appalled by that prospect: such revulsion seems part of a good moral psychology for protecting against gross mistreatment of people in other contexts. If I'm right about that, then utilitarianism will endorse our opposition to humane human farming on second-order grounds. Maybe something similar is true for non-humans, too—though I regard that as more of an open question.)
Yeah, insofar as we accept biased norms of that sort, it's really important to recognize that they are merely heuristics. Reifying (or, as Scott Alexander calls it, "crystallizing") such heuristics into foundational moral principles risks a lot of harm.
(This is one of the themes I'm hoping to hammer home to philosophers in my next book. Besides deontic constraints, risk aversion offers another nice example.)
Don't Void Your Pets
This is great!
One minor clarification (that I guess you are taking as "given" for this audience, but doesn't hurt to make explicit) is that the kind of "Within-Cause Prioritization" found within EA is very different from that found elsewhere, insofar as it is still done in service of the ultimate goal of "cross-cause prioritization". This jumped out at me when reading the following sentence:
A quick reading of EA history suggests that when the movement was born, it focused primarily on identifying the most cost-effective interventions within pre-existing cause-specific areas (e.g. the early work of GiveWell and Giving What We Can)
I think an important part of the story here is that early GiveWell (et al.) found that a lot of "standard" charitable cause areas (e.g. education) didn't look to be very promising given the available evidence. So they actually started with a kind of "cause prioritization", and simply very quickly settled on global poverty as the most promising area. This was maybe too quick, as later expansions into animal welfare and x-risk suggest. But it's still very different from the standard (non-EA) attitude of "different cause areas are incommensurable; just try to find the best charity within whatever area you happen to be personally passionate about, and don't care about how it would compare to competing cause areas."
That said, I agree with your general lesson that both broad cause prioritization and specific cross-cause prioritization plausibly still warrant more attention than they're currently getting!
Fun stuff!
The key question to assess is just: what credence should we give to Religious Catastrophe?
I think the right answer, as in Pascal's Mugging, is: vanishingly small. Do the arguments of the paper show that I'm wrong? I don't think so. There is no philosophical argument that favors believing in Hell. There are philosophical arguments for the existence of God. But from there, the argument relies purely on sociological evidence: many of the apes on our planet happen to accept a religious creed according to which there is Hell.
Here's a question to consider: is it conceivable that a bunch of apes might believe something that a rational being ought to give vanishingly low credence to?
I think it's very obvious that the answer to this question is yes. Ape beliefs aren't evidence of anything much beyond ape psychology.
So to really show that it's unreasonable to give a vanishingly low credence to Religious Catastrophe, it isn't enough to just point to some apes. One has to say more about the actual proposition in question to make it credible.
In what other context do philosophers think that philosophical arguments provide justified certainty (or near-certainty) that a widely believed philosophical thesis is false?
It probably depends who you ask, but fwiw, I think that many philosophical theses warrant extremely low credence. (And again, the mere fact of being "widely held" is not evidence of philosophical truth.)
No worries at all (and best wishes to you too!).
One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.
It depends what constraints you put on what can qualify as a "good reason". If you think that a good reason has to be "neutrally recognizable" as such, then there'll be no good reason to prefer any internally-coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren't always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one is actually right about this—and since it isn't independently verifiable which, there would seem an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)
I discuss this epistemic picture in a bit more detail in "Knowing What Matters".
Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what's in dispute. I don't think there's any neutral way to establish whose starting points are more intrinsically credible.)
Yeah, I think that's broadly right. Most ethical theorists are engaged in "ideal theory", so that's the frame I'm working within here. And I find it notable that many deontologists seem to find utilitarianism repugnant, which doesn't seem warranted if you (should) actually want people to successfully perform the actions it identifies as "right".
But it's certainly true that quiet deontologists could—like "government house" consequentialists—predict that, due to widespread agential incompetence, their desired (consequentialist) goals would be better achieved by most people believing deontology instead. They could then coherently advocate their deontology in certain contexts, on "non-ideal theory" grounds.
Care would need to be taken to determine in which contexts one's goals are better achieved by urging people to aim at something completely different. It seems pretty unlikely to extend to public policy, for example, especially as regards the high-stakes issues discussed in the ~~main~~ follow-up post. Insofar as most real-life deontologists don't seem especially careful about any of this, I think it's still true that my theoretical arguments should prompt them to rethink their moral advocacy. In particular, they should probably end up much happier with "two-level consequentialism" (the branch of consequentialism that really takes seriously human incompetence and related "non-ideal theory" considerations) than is typical for deontologists. [Updated to fix reference to post discussing "high stakes" policy issues.]