Richard Y Chappell
Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
It depends what constraints you put on what can qualify as a "good reason". If you think that a good reason has to be "neutrally recognizable" as such, then there'll be no good reason to prefer any internally coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren't always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one of them is actually right about this; and since it isn't independently verifiable which, there would seem to be an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)
I discuss this epistemic picture in a bit more detail in "Knowing What Matters".
Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what's in dispute. I don't think there's any neutral way to establish whose starting points are more intrinsically credible.)
A very important consequence of everyone simultaneously dying would be that there would not be any future people. (I didn't mean to imply that what makes it bad is just the harm of death to the individuals directly affected. Just that it would be bad for everyone to die in this way.)
Philosophical truths are causally inefficacious, so we already know that there is a causal explanation for any philosophical belief you have, and that this explanation can be characterized as having "nothing to do with" the reasons why the belief is true. So if you accept that causal condition as sufficient for debunking, you cannot have any philosophical beliefs whatsoever.
Put another way: we should already be "questioning our beliefs"; spinning out a causal debunking story offers nothing new. It's just an isolated demand for rigor, when you should already be questioning everything, and forming the overall most coherent belief-set you can in light of that questioning.
Compare my response to Parfit:
We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance. The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified. Parfit rejects this (p.287):
Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist's flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.
I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins; the hypnotist makes no difference). We need to expose it to critical reflection in light of all else that we believe. Perhaps we will find that there is no basis for believing such incest to be wrong. Or perhaps we will find a basis after all (perhaps on indirect consequentialist grounds). Either way, what matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief. Parfit commits the genetic fallacy when he asserts that the causal origins "would cast grave doubt on the justifiability of these beliefs." (288)
Note that "philosophical reasoning" governs how we update our beliefs, iron out inconsistencies, etc. But the raw starting points are not reached by "reasoning" (what would you be reasoning from, if you don't already accept any premises?). So your assumed contrast between "good philosophical reasoning" and "suspicious causal forces that undermine belief" would actually undermine all beliefs, once you trace them back to foundational premises.
The only way to actually maintain coherent beliefs is to make your peace with having starting points that were not themselves determined via a rational process. Such causal "debunking" gives us a reason to take another look at our starting points, and consider whether (in light of everything we now believe) we want to revise them. But if the starting points still seem right to us, in light of everything, then it has to be reasonable to stick with them whatever their original causal basis may have been.
Overall, the solution is just to assess the first-order issues on their merits. "Debunking" arguments are a sideshow. They should never convince anyone who shouldn't already have been equally convinced on independent (first-order) grounds.
We disagree about "what we have reason to" think about the value of humanity's continued existence; that's precisely the question in dispute. I might as well ask why you limit yourself to (widely) imprecise credences that don't narrow things down nearly enough (or as much as we have reason to).
The topics under dispute here (e.g. whether we should think that human extinction is worse in expectation than humanity's continued existence) involve ineradicable judgment calls. The OP wants to call pro-humanity judgment calls "suspicious". I've pointed out that I think their reasons for suspicion are insufficient to overturn such a datum of good judgment as "it would be bad if everyone died." (I'm not saying it's impossible to overturn this verdict, but it should take a lot more than mere debunking arguments.)
Incidentally, I think the tendency of some in the community to be swayed to "crazy town" conclusions on the basis of such flimsy arguments is a big part of why many outsiders think EAs are unhinged. It's a genuine failure mode that's worth being aware of; the only way to avoid it, I suspect, is to have robustly sensible priors that are not so easily swayed without a much stronger basis.
Anyway, that was my response to the OP. You then complained that my response to the OP didn't engage with your posts. But I don't see why it would need to. Your post treats broad imprecision as a privileged default; my previous reply explained why I disagree with that starting point. Your own post links to further explanations I've given, here, about how sufficiently imprecise credences lead to crazy verdicts. Your response (in your linked post) dismisses this as "motivated reasoning," which I don't find convincing.
To mandate broadly imprecise credences on the topic at hand would be to defer overly much to a formal apparatus which, in virtue of forcing (with insufficient reason) a kind of practical neutrality about whether it would be bad for everyone to die, is manifestly unfit to guide high-stakes decision-making. That's my view. You're free to disagree with it, of course.
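To make the neutrality worry concrete, here is a minimal toy model of my own (the numbers are invented purely for illustration; nothing here is from the original exchange): if the credal set is wide enough to contain both a distribution on which humanity's continued existence has positive expected value and one on which it has negative expected value, standard imprecise decision rules cannot rank the options.

```latex
% Toy model (illustrative numbers only; nothing here is from the thread).
% Let v be the net value of humanity's continued existence, and let the
% credal set C contain distributions P1 and P2 with:
\[
  E_{P_1}[v] = +10, \qquad E_{P_2}[v] = -10 .
\]
% Under a standard imprecise rule such as maximality, an act is ruled out
% only if some alternative does better relative to *every* distribution
% in C. "Prevent extinction" beats "do nothing" under P1 but loses under
% P2, so both acts come out permissible: the apparatus is practically
% neutral about whether it would be bad for everyone to die.
```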
I think it's conceptually confused to use the term "high epistemic standards" to favor imprecise credence or suspended judgment over using one's best judgment. I don't think the former two are automatically more epistemically responsible.
Suspended judgment may be better than forming a bad precise judgment, but worse than forming a good precise judgment. Nothing in the concept of "high standards" should necessarily lead us to prioritize avoiding the risk of bad judgment over the risk of failing to form a good judgment when we could and should have.
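One way to make that trade-off vivid is a toy scoring model of my own (not something from the original comment): give a true judgment epistemic value +1, a false judgment -1, and suspension 0.

```latex
% If p is the probability that your judgment would be correct:
\[
  E[\text{judge}] = p(+1) + (1-p)(-1) = 2p - 1,
  \qquad
  E[\text{suspend}] = 0 .
\]
% Judging beats suspending whenever p > 1/2: "high standards" favor
% suspension only if your judgment would be no better than a coin flip.
```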
I've written about this more (with practical examples from pandemic policy disputes) in "Agency and Epistemic Cheems Mindset".
I just posted the following reply to Jesse:
I don't think penalizing complexity is enough to escape radical skepticism in general. Consider the "universe popped into existence (fully-formed) 5 minutes ago" hypothesis. It's not obvious that this is more complex than the alternative hypothesis that includes the past five minutes PLUS billions of years before that. One could try to argue for this claim, but I don't think that our confidence in history should be *contingent* on that extremely contentious philosophical project working out successfully!
But to clarify: I don't think I say anything much in that post about "the reasons why we should start with" various anti-skeptical priors, and I'm certainly not committed to saying that there are "similar reasons" in every anti-skeptical case. The similarity I point to is simply that we clearly should have anti-skeptical priors. "Why" is a separate question (if it has an answer at all, the answer may vary from case to case).
On whether we agree: When I talk about exercising better rather than worse judgment, I take success here to be determined by the contents of our judgments. Some claims warrant higher credence than others, and we should try to have our credences match the objectively warranted level as closely as possible.
But that's quite different from focusing on whether our judgments stem from a "reliable source". I think there's very little chance that you could show that almost any of your philosophical beliefs (including this very epistemic demand) stem from a source that we can independently demonstrate to be reliable. I think the kind of higher-order inquiry you're proposing is a dead end: you can't really judge which philosophical dispositions are reliable until you've determined which philosophical beliefs are true.
To illustrate with a couple of concrete examples:
(1) You claim that "an evolutionary pressure toward pro-natalist beliefs" is an "unreliable" source. But that isn't unreliable if pro-natalism is (broadly) correct.
(2) Compare evolutionary pressures to judge that pain is bad. A skeptic might claim this source is "unreliable", but we needn't accept that claim. Since pain is bad, when evolution disposes us to believe this, it is disposing us towards a true belief. (Simply asserting this obviously won't suffice to convince a skeptic, but the lesson of post-Cartesian epistemology is that trying to convince skeptics is a fool's game.)
To be clear: you're arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?
I think this is a really good example of what I was talking about in my post, It's Not Wise to be Clueless.
If you think that, in general, justified belief is incompatible with "judgment calls", then radical skepticism immediately follows. You can't even establish, to this standard, that the external world exists. I take that to show that there's a problem with the epistemic standards you're assuming.
It's OK (indeed, essential) to make judgment calls, and we should simply try to exercise better rather than worse judgment. There are, of course, tricky questions about how best to do that. But if there's anything that we've learned from philosophy since Descartes, it's that skeptical calls to abjure disputable judgments altogether are... not feasible.
Just sharing a quick link in case it's of interest: Many will recall Leif Wenar's WIRED article from last year, which attacked charitable giving from a philosophical perspective that valorizes status quo bias. There was plenty of discussion of his substantive arguments at the time. One thing that people mostly just politely overlooked was his very public attack on Will MacAskill as a philosopher. My latest post revisits the controversy to assess whether his charges against MacAskill were reasonable.
(The bulk of the post is paywalled, but you should be able to activate a 7-day free trial if you aren't otherwise interested in my work.)
I'm also curious whether anything ever came of this.
Compare Doing Good Effectively is Unusual, for a more positive take on this phenomenon. (E.g. the abstract EA mission is actually pretty important for some to pursue, because otherwise humanity will systematically neglect causes like Shrimp Welfare that don't have immediate emotional appeal.)
It's sad that not many people care about doing good as such, but I still think it's worth: (i) trying to co-ordinate those who do, (ii) trying to encourage more people to join them, and (iii) co-operating with others who have more cause-specific motivations that happen to be good ones (whether that's in global health, animal welfare, AI safety, or whatever).
Overall, I'm not sure why you would think "EA needs a cultural shift" rather than "we need more EA-adjacent movements/subcultures for people who don't feel moved by the core EA mission but do or could care more about specific causes." Isn't it better to add than to replace?
Utilitarianism.net Updates Again
I agree with your first couple of paragraphs. That's why my initial reply referred to "reputable independent evaluators like GiveWell".
Conspiracy theorists do, of course, have their own distinct (and degenerate) "webs of trust", which is why I also flagged that possibility. But mainstream academic opinion (not to mention the opinion of the community that's most invested in getting these details right, i.e. effective altruists) regards GiveWell as highly reputable.
I didn't get the sense from John's comment that he understands reasonable social trust of this sort. He offered a false dichotomy between "thorough and methodical research" and "gut reactions", and suggested that "trust comes from... [personally] evaluat[ing] the service through normal use and consumption." I think this is deeply misleading. (Note, for example, that "normal use and consumption" does not give you any indication of how much lead is in your turmeric, whether your medication risks birth defects if taken during pregnancy, and so on. Social trust, especially in reputable institutions, is absolutely ubiquitous in navigating the world.)
You're conflating "charity" and "charity evaluator". The whole point of independent evaluators is that other people can defer to their research. So yes, I think the answer is just "trust evaluators" (not "trust first-order charities"), the same way that someone wondering which supplements contain unsafe levels of lead should trust Consumer Reports.
If you are going to refuse, a priori, to trust research done by independent evaluators until you've personally vetted them for yourself, then you have made yourself incapable of benefiting from their efforts. Maybe there are low-trust societies where that's necessary. But you're going to miss out on a lot if you actually live in a high-trust society and just refuse to believe it.
I'm sorry, but those are just excuses. Nobody requires claims to be "proven" beyond all possible doubt before making decisions that are plausibly (but not definitely) better for themselves (like going to college). They only demand such proof to get out of making decisions that are plausibly better for others.
Unless you're a conspiracy theorist, you should probably think it more likely than not that reputable independent evaluators like GiveWell are legit. And then a >50% chance of saving lives for something on the order of ~$5000 is plainly sufficient to justify so acting. (Assuming that saving a life with certainty for ~$10k would obviously be choice-worthy.)
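Spelling out the arithmetic behind that last step (my own gloss, using only the figures already given above):

```latex
% With credence p that the intervention saves a life as advertised,
% the expected cost per life saved is:
\[
  \frac{\$5{,}000}{p} \le \frac{\$5{,}000}{0.5} = \$10{,}000
  \quad \text{whenever } p \ge 0.5,
\]
% which is no worse than the ~$10k-per-life-with-certainty benchmark
% already granted above as obviously choice-worthy.
```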
If one is unusually skeptical of life-saving interventions, the benefits of direct cash transfers (e.g. GiveDirectly) are basically undeniable. No "huge mental investment" or "leap of faith" required. (Unless by "leap of faith" you mean perfectly ordinary sorts of trust that go without saying in every other realm of life.)
I'm open to the possibility that what's all things considered best might take into account other kinds of values beyond traditionally welfarist ones (e.g. Nietzschean perfectionism). But standard sorts of agent-relative reasons like Wolf adverts to (reasons to want your life in particular to be more well-rounded) strike me as valid excuses rather than valid justifications. It isn't really a better decision to do the more selfish thing, IMO.
Your second paragraph is hard to answer because different people have different moral beliefs, and (as I suggest in the OP) laxer moral beliefs often stem from motivated reasoning. So the two may be intertwined. But obviously my hope is that greater clarity of moral knowledge may help us to do more good even with limited moral motivation.
Facing up to the Price on Life
See the Theories of Well-being chapter at utilitarianism.net for a detailed philosophical overview of this topic.
The simple case against hedonism is just that it is bizarrely restrictive: many of us have non-hedonistic ultimate desires about our own lives that seem perfectly reasonable, so the burden is on the hedonist to establish that they know better than we do what is good for us, and, in particular, that our subjective feelings are the only things that could reasonably be taken to matter for our own sakes. That's an extremely (and I would say implausibly) restrictive claim.
How does averting a birth cause an extra child to be born somewhere else?
No worries at all (and best wishes to you too!).
One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.