Richard Y Chappell
Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
If it was so straightforwardly irrational (dare I say it, insensible), Le Guin would presumably never have written the story in the first place!
This is bad reasoning. People vary radically in their ability to recognize irrationality (of various sorts). In the same way that we shouldn't be surprised if a popular story involves mathematical assumptions that are obviously incoherent to a mathematician, we shouldn't be surprised if a popular story involves normative assumptions that others can recognize as obviously wrong. (Consider how Gone with the Wind glorifies Confederate slavery, etc.)
It's a basic and undeniable fact of life that people are swayed by bad reasoning all the time (e.g. when it is emotionally compelling, when some interests are initially more salient to us than others, etc.).
You have your intuitions and I have mine; we can each say they're obvious to us and it gets us no further, surely?
Correct; you are not my target audience. I'm responding here because you seemed to think that there was something wrong with my post because it took for granted something that you happen not to accept. I'm trying to explain why that's an absurd standard. Plenty of others could find what I wrote both accurate and illuminating. It doesn't have to convince you (or any other particular individual) in order to be epistemically valuable to the broader community.
If you find that a post starts from philosophical assumptions that you reject, I think the reasonable options available to you are:
(1) Engage in a first-order dispute, explaining why you think different assumptions are more likely to be true; or
(2) Ignore it and move on.
I do not think it is reasonable to engage in ~~silencing~~ procedural criticism, claiming that nobody should post things (including claims about what they take to be obvious) that you happen to disagree with. [Update: struck through a word that was somewhat too strong. But "not the sort of thing I usually expect to find on the forum" implicates more than just "I happen to disagree with this," and something closer to "you should not have written this."]
To be clear: the view I argued against was not "pets have net negative lives," but rather, "pets ought not to exist even if they have net positive lives, because we violate their rights by owning/controlling them." (Beneficentrism makes no empirical claims about whether pets have positive or negative lives on net, so it would make no sense to interpret me as suggesting that it supports any such empirical claim.)
It's not "circular reasoning" to note that plausible implications are a count in favor of a theory. That's normal philosophical reasoning: reflective equilibrium. (Though we can distinguish "sensible-sounding" from actually sensible. Not everything that sounds sensible at first glance will prove to be so on further reflection. But you'd need to provide some argument to undermine the claim; it isn't inherently objectionable to pass judgment on what is or isn't sensible, so objecting to that argumentative structure is really odd.)
I think it's very strange to say that a premise that doesn't feel obvious to you "is not the sort of thing [you] usually expect to find on the forum." (Especially when the premise in question would seem obvious common sense to, like, 99% of people.)
If an analogy helps, imagine a post where someone points out that commonsense requires us to reject SBF-style "double or nothing" existence gambles, and that this is a good reason to like some particular anti-fanatical decision theory. One may of course disagree with the reasoning, but I think it would be very strange for a bullet-biting Benthamite to object that this invocation of common sense was "not the sort of thing I usually expect to find on the forum." (If true, that would suggest that their views were not being challenged enough!)
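To make the decision-theoretic structure concrete, here's a minimal sketch (my own toy numbers, not anything from the original discussion) of why repeated double-or-nothing gambles can be ruinous even though each individual bet has positive expected value; the 51% win probability is an assumption for illustration:

```python
# Toy illustration (hypothetical numbers): betting everything on repeated
# "double or nothing" gambles that each have positive expected value.
p_win = 0.51   # assumed probability of winning each round
stake = 1.0    # initial holdings, fully wagered every round

for n in (1, 5, 10, 20):
    expected_value = stake * (2 * p_win) ** n  # EV multiplier of 1.02 per round
    p_not_ruined = p_win ** n                  # you must win every single round
    print(f"after {n:2d} rounds: EV = {expected_value:.2f}, "
          f"P(not ruined) = {p_not_ruined:.6f}")
```

The expected value keeps climbing while the probability of retaining anything at all collapses toward zero; anti-fanatical decision theories are one way of vindicating the commonsense verdict against taking such gambles.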
(I also don't think it would be a norm violation to, say, argue that naive instrumentalism is a kind of "philosophical pathology" that people should try to build up some memetic resistance against. Or if it is, I'd want to question that norm. It's important to be able to honestly discuss when we think philosophical views are deeply harmful, and while one generally wants to encourage "generous" engagement with alternative views, an indiscriminate demand for universal generosity would make it impossible to frankly discuss the exceptions. We should be respectful to individual interlocutors, but it's just not true that every view warrants respect. An important part of the open exchange of ideas is openness to the question of which views are, and which are not, respectable.)
You think it's a norm violation for me to say that it's "sensible" to allow happy pets to exist? Or, more abstractly, that it's good for a theory to have sensible implications?
Sure, in principle. (Though I'd use a different term, like "humane farms", to contrast with the awful conditions on what we call "factory farms".) The only question is whether second-order effects from accepting such a norm might generally make it harder for people to take animal interests sufficiently seriously; see John & Sebo (2020).
The same logic would, of course, suggest there's no intrinsic objection to humanely farming extra humans for their organs, etc. (But I think it's clearly good for us to be appalled by that prospect: such revulsion seems part of a good moral psychology for protecting against gross mistreatment of people in other contexts. If I'm right about that, then utilitarianism will endorse our opposition to humane human farming on second-order grounds. Maybe something similar is true for non-humans, too, though I regard that as more of an open question.)
Yeah, insofar as we accept biased norms of that sort, it's really important to recognize that they are merely heuristics. Reifying (or, as Scott Alexander calls it, "crystallizing") such heuristics into foundational moral principles risks a lot of harm.
(This is one of the themes I'm hoping to hammer home to philosophers in my next book. Besides deontic constraints, risk aversion offers another nice example.)
Don't Void Your Pets
This is great!
One minor clarification (that I guess you are taking as "given" for this audience, but doesn't hurt to make explicit) is that the kind of "Within-Cause Prioritization" found within EA is very different from that found elsewhere, insofar as it is still done in service of the ultimate goal of "cross-cause prioritization". This jumped out at me when reading the following sentence:
A quick reading of EA history suggests that when the movement was born, it focused primarily on identifying the most cost-effective interventions within pre-existing cause-specific areas (e.g. the early work of GiveWell and Giving What We Can)
I think an important part of the story here is that early GiveWell (et al.) found that a lot of "standard" charitable cause areas (e.g. education) didn't look to be very promising given the available evidence. So they actually started with a kind of "cause prioritization", and simply very quickly settled on global poverty as the most promising area. This was maybe too quick, as later expansions into animal welfare and x-risk suggest. But it's still very different from the standard (non-EA) attitude of "different cause areas are incommensurable; just try to find the best charity within whatever area you happen to be personally passionate about, and don't care about how it would compare to competing cause areas."
That said, I agree with your general lesson that both broad cause prioritization and specific cross-cause prioritization plausibly still warrant more attention than they're currently getting!
Fun stuff!
The key question to assess is just: what credence should we give to Religious Catastrophe?
I think the right answer, as in Pascal's Mugging, is: vanishingly small. Do the arguments of the paper show that I'm wrong? I don't think so. There is no philosophical argument that favors believing in Hell. There are philosophical arguments for the existence of God. But from there, the argument relies purely on sociological evidence: many of the apes on our planet happen to accept a religious creed according to which there is Hell.
Here's a question to consider: is it conceivable that a bunch of apes might believe something that a rational being ought to give vanishingly low credence to?
I think it's very obvious that the answer to this question is yes. Ape beliefs aren't evidence of anything much beyond ape psychology.
So to really show that it's unreasonable to give a vanishingly low credence to Religious Catastrophe, it isn't enough to just point to some apes. One has to say more about the actual proposition in question to make it credible.
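For what it's worth, the Pascalian structure in the background can be made explicit (this is my gloss, not anything from the paper): a catastrophe with disvalue V, believed with credence p, contributes -pV to expected value, so the stakes are neutralized only if the credence vanishes fast enough:

```latex
% Illustrative gloss (mine, not the paper's): expected disvalue of a
% catastrophe hypothesis with credence p and (possibly unbounded) stakes V.
\[
  \mathrm{EV} = -\,p \cdot V ,
  \qquad \text{negligible only if } p(V)\cdot V \to 0 \text{ as } V \to \infty .
\]
```

This is why merely calling the credence "small" isn't enough; "vanishingly small" has to mean shrinking faster than the stakes grow.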
In what other context do philosophers think that philosophical arguments provide justified certainty (or near-certainty) that a widely believed philosophical thesis is false?
It probably depends on who you ask, but fwiw, I think that many philosophical theses warrant extremely low credence. (And again, the mere fact of being "widely held" is not evidence of philosophical truth.)
No worries at all (and best wishes to you too!).
One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.
It depends what constraints you put on what can qualify as a "good reason". If you think that a good reason has to be "neutrally recognizable" as such, then there'll be no good reason to prefer any internally-coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren't always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one is actually right about this; and since it isn't independently verifiable which, there would seem to be an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)
I discuss this epistemic picture in a bit more detail in "Knowing What Matters".
Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what's in dispute. I don't think there's any neutral way to establish whose starting points are more intrinsically credible.)
A very important consequence of everyone simultaneously dying would be that there would not be any future people. (I didn't mean to imply that what makes it bad is just the harm of death to the individuals directly affected. Just that it would be bad for everyone to die so.)
Philosophical truths are causally inefficacious, so we already know that there is a causal explanation for any philosophical belief you have that (one could characterize as) having "nothing to do with" the reasons why it is true. So if you accept that causal condition as sufficient for debunking, you cannot have any philosophical beliefs whatsoever.
Put another way: we should already be "questioning our beliefs"; spinning out a causal debunking story offers nothing new. It's just an isolated demand for rigor, when you should already be questioning everything, and forming the overall most coherent belief-set you can in light of that questioning.
Compare my response to Parfit:
We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance. The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified. Parfit rejects this (p.287):
Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist's flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.
I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins; the hypnotist makes no difference). We need to expose it to critical reflection in light of all else that we believe. Perhaps we will find that there is no basis for believing such incest to be wrong. Or perhaps we will find a basis after all (perhaps on indirect consequentialist grounds). Either way, what matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief. Parfit commits the genetic fallacy when he asserts that the causal origins "would cast grave doubt on the justifiability of these beliefs." (288)
Note that "philosophical reasoning" governs how we update our beliefs, iron out inconsistencies, etc. But the raw starting points are not reached by "reasoning" (what would you be reasoning from, if you don't already accept any premises?). So your assumed contrast between "good philosophical reasoning" and "suspicious causal forces that undermine belief" would actually undermine all beliefs, once you trace them back to foundational premises.
The only way to actually maintain coherent beliefs is to make your peace with having starting points that were not themselves determined via a rational process. Such causal "debunking" gives us a reason to take another look at our starting points, and consider whether (in light of everything we now believe) we want to revise them. But if the starting points still seem right to us, in light of everything, then it has to be reasonable to stick with them, whatever their original causal basis may have been.
Overall, the solution is just to assess the first-order issues on their merits. "Debunking" arguments are a sideshow. They should never convince anyone who shouldn't already have been equally convinced on independent (first-order) grounds.
We disagree about "what we have reason to" think about the value of humanity's continued existence; that's precisely the question in dispute. I might as well ask why you limit yourself to (widely) imprecise credences that don't narrow things down nearly enough (or as much as we have reason to).
The topics under dispute here (e.g. whether we should think that human extinction is worse in expectation than humanity's continued existence) involve ineradicable judgment calls. The OP wants to call pro-humanity judgment calls "suspicious". I've pointed out that I think their reasons for suspicion are insufficient to overturn such a datum of good judgment as "it would be bad if everyone died." (I'm not saying it's impossible to overturn this verdict, but it should take a lot more than mere debunking arguments.)
Incidentally, I think the tendency of some in the community to be swayed to "crazy town" conclusions on the basis of such flimsy arguments is a big part of why many outsiders think EAs are unhinged. It's a genuine failure mode that's worth being aware of; the only way to avoid it, I suspect, is to have robustly sensible priors that are not so easily swayed without a much stronger basis.
Anyway, that was my response to the OP. You then complained that my response to the OP didn't engage with your posts. But I don't see why it would need to. Your post treats broad imprecision as a privileged default; my previous reply explained why I disagree with that starting point. Your own post links to further explanations I've given, here, about how sufficiently imprecise credences lead to crazy verdicts. Your response (in your linked post) dismisses this as "motivated reasoning," which I don't find convincing.
To mandate broadly imprecise credences on the topic at hand would be to defer overly much to a formal apparatus which, in virtue of forcing (with insufficient reason) a kind of practical neutrality about whether it would be bad for everyone to die, is manifestly unfit to guide high-stakes decision-making. That's my view. You're free to disagree with it, of course.
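To illustrate the structural worry with deliberately stylized numbers (mine, not anything from the exchange): once a credence interval is wide enough, the expected value of preventing extinction spans both signs, and standard imprecise decision rules simply decline to rank the options:

```python
# Stylized sketch (hypothetical numbers): decision-making with an imprecise
# credence, represented as an interval rather than a single point value.
lo, hi = 0.3, 0.7          # interval credence that continuation is net-positive
v_good, v_bad = 1.0, -1.0  # stylized payoffs if continuation is good / bad

# Expected value of ensuring survival, evaluated at each end of the interval:
ev_lo = lo * v_good + (1 - lo) * v_bad   # -0.4
ev_hi = hi * v_good + (1 - hi) * v_bad   # +0.4

if ev_lo > 0:
    verdict = "ensure survival"
elif ev_hi < 0:
    verdict = "accept extinction"
else:
    verdict = "no ranking: the EV interval spans zero"
print(f"EV in [{ev_lo:+.1f}, {ev_hi:+.1f}] -> {verdict}")
```

The point of the sketch is just that the "no ranking" verdict falls out of the formal apparatus itself, before any further evidence is consulted.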
I think it's conceptually confused to use the term "high epistemic standards" to favor imprecise credence or suspended judgment over using one's best judgment. I don't think the former two are automatically more epistemically responsible.
Suspended judgment may be better than forming a bad precise judgment, but worse than forming a good precise judgment. Nothing in the concept of "high standards" should necessarily lead us to prioritize avoiding the risk of bad judgment over the risk of failing to form a good judgment when we could and should have.
I've written about this more (with practical examples from pandemic policy disputes) in "Agency and Epistemic Cheems Mindset".
I just posted the following reply to Jesse:
I don't think penalizing complexity is enough to escape radical skepticism in general. Consider the "universe popped into existence (fully-formed) 5 minutes ago" hypothesis. It's not obvious that this is more complex than the alternative hypothesis that includes the past five minutes PLUS billions of years before that. One could try to argue for this claim, but I don't think that our confidence in history should be *contingent* on that extremely contentious philosophical project working out successfully!
But to clarify: I don't think I say anything much in that post about "the reasons why we should start with" various anti-skeptical priors, and I'm certainly not committed to saying that there are "similar reasons" in every anti-skeptical case. The similarity I point to is simply that we clearly should have anti-skeptical priors. "Why" is a separate question (if it has an answer at all, the answer may vary from case to case).
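Returning to the complexity-penalizing move above: one way to make it precise (my formalization; nothing like this appears in the original exchange) is via a description-length prior, on which the whole dispute reduces to a contentious comparison of description lengths:

```latex
% Hypothetical formalization (mine): a simplicity-weighted prior, where
% K(H) is the description length (complexity) of hypothesis H.
\[
  P(H) \;\propto\; 2^{-K(H)}
\]
% The anti-skeptical verdict then requires
% K(five-minute hypothesis) > K(ordinary history),
% which is precisely the comparison claimed above to be non-obvious.
```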
On whether we agree: When I talk about exercising better rather than worse judgment, I take success here to be determined by the contents of our judgments. Some claims warrant higher credence than others, and we should try to have our credences match the objectively warranted level as closely as possible.
But that's quite different from focusing on whether our judgments stem from a "reliable source". I think there's very little chance that you could show that almost any of your philosophical beliefs (including this very epistemic demand) stem from a source that we can independently demonstrate to be reliable. I think the kind of higher-order inquiry you're proposing is a dead end: you can't really judge which philosophical dispositions are reliable until you've determined which philosophical beliefs are true.
To illustrate with a couple of concrete examples:
(1) You claim that "an evolutionary pressure toward pro-natalist beliefs" is an "unreliable" source. But that isn't unreliable if pro-natalism is (broadly) correct.
(2) Compare evolutionary pressures to judge that pain is bad. A skeptic might claim this source is "unreliable", but we needn't accept that claim. Since pain is bad, when evolution disposes us to believe this, it is disposing us towards a true belief. (To simply assert this obviously won't suffice to convince a skeptic, but the lesson of post-Cartesian epistemology is that trying to convince skeptics is a fool's game.)
To be clear: you're arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?
I think this is a really good example of what I was talking about in my post, It's Not Wise to be Clueless.
If you think that, in general, justified belief is incompatible with "judgment calls", then radical skepticism immediately follows. You can't even establish, to this standard, that the external world exists. I take that to show that there's a problem with the epistemic standards you're assuming.
It's OK (indeed, essential) to make judgment calls, and we should simply try to exercise better rather than worse judgment. There are, of course, tricky questions about how best to do that. But if there's anything that we've learned from philosophy since Descartes, it's that skeptical calls to abjure disputable judgments altogether are… not feasible.
Just sharing a quick link in case it's of interest: Many will recall Leif Wenar's WIRED article from last year, which attacked charitable giving from a philosophical perspective of valorizing status quo bias. There was plenty of discussion of his substantive arguments at the time. One thing that people mostly just politely overlooked was his very public attack on Will MacAskill as a philosopher. My latest post revisits the controversy to assess whether his charges against MacAskill were reasonable.
(The bulk of the post is paywalled, but you should be able to activate a 7-day free trial if you aren't otherwise interested in my work.)
He expresses similar views in his recent interview with Peter Singer: