To be clear: you’re arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?
Thanks for engaging with this, Richard!
I think I am making a much weaker claim than this. While I suggest that the evolutionary debunking argument (EDA) I raise is valid, I do not argue that it is strong enough to render optimistic longtermism unwarranted. Nor does the argument itself say what people should believe if they do not endorse optimistic longtermism: an alternative to cluelessness is pessimistic longtermism, and I do not take a stand on which is the more appropriate alternative if the EDA is strong enough. Sorry if my writing was unclear.
> whether it would be good or bad for everyone to die

Maybe a nitpick, but I find this choice of words quite unfair: it appeals to commonsense intuitions that seem to have nothing to do with longtermism in order to implicitly back your view that we know X-risk reduction is good from a longtermist perspective. You do something very similar multiple times in It’s Not Wise to be Clueless.
> If you think that, in general, justified belief is incompatible with “judgment calls”

I didn’t say that. I said that we ought to wonder whether these judgment calls are reliable, a claim you seem to agree with when you write:
Now, you seem much more convinced than me that our judgment calls about the long-term value of X-risk reduction come from a reliable source (such as an evolutionary pressure selecting correct longtermist beliefs, whether directly or indirectly) rather than from evolutionary pressures towards pro-natalist beliefs. In It’s Not Wise to be Clueless, the justification you provide for something in this vicinity[1] is that we ought to start with the prior that something like X-risk reduction is good, for reasons similar to those for which we should start with the prior that the sun will rise tomorrow. But I think Jesse quite accurately pointed out the disanalogy and the problem with your argument in his comment. Do you have another argument, and/or an objection to Jesse’s reply, that you are happy to share?
EDIT: actually, not sure this is related. You don’t seem to argue that our judgment calls are truth-tracking. You argue that there is a rational requirement to start with a certain prior (i.e., you implicitly suggest that all rational agents should agree with you on X-risk reduction without having to make judgment calls, in fact).
I just posted the following reply to Jesse:
I don’t think penalizing complexity is enough to escape radical skepticism in general. Consider the “universe popped into existence (fully-formed) 5 minutes ago” hypothesis. It’s not obvious that this is more complex than the alternative hypothesis that includes the past five minutes PLUS billions of years before that. One could try to argue for this claim, but I don’t think that our confidence in history should be *contingent* on that extremely contentious philosophical project working out successfully!

But to clarify: I don’t think I say anything much in that post about “the reasons why we should start with” various anti-skeptical priors, and I’m certainly not committed to saying that there are “similar reasons” in every anti-skeptical case. The similarity I point to is simply that we clearly should have anti-skeptical priors. “Why” is a separate question (if it has an answer at all, the answer may vary from case to case).
On whether we agree: When I talk about exercising better rather than worse judgment, I take success here to be determined by the contents of our judgments. Some claims warrant higher credence than others, and we should try to have our credences match the objectively warranted level as closely as possible.
But that’s quite different from focusing on whether our judgments stem from a “reliable source”. I think there’s very little chance that you could show, for almost any of your philosophical beliefs (including this very epistemic demand), that it stems from a source we can independently demonstrate to be reliable. I think the kind of higher-order inquiry you’re proposing is a dead end: you can’t really judge which philosophical dispositions are reliable until you’ve determined which philosophical beliefs are true.
To illustrate with a couple of concrete examples:
(1) You claim that “an evolutionary pressure toward pro-natalist beliefs” is an “unreliable” source. But that isn’t unreliable if pro-natalism is (broadly) correct.
(2) Compare evolutionary pressures to judge that pain is bad. A skeptic might claim this source is “unreliable”, but we needn’t accept that claim. Since pain is bad, when evolution disposes us to believe this, it is disposing us towards a true belief. (To simply assert this obviously won’t suffice to convince a skeptic, but the lesson of post-Cartesian epistemology is that trying to convince skeptics is a fool’s game.)
Re (1): I mean, say we know that Alice’s pro-natalism is 100% due to the mere fact that this belief was evolutionarily advantageous for her ancestors (and 0% due to good philosophical reasoning). This would discredit her belief, right? It wouldn’t mean pro-natalism is incorrect; it would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened, luckily, to be “right for the wrong reasons”. Do you at least agree with this in this particular contrived example, or do you think that evolutionary pressures can never be a reason to question our beliefs?
(Waiting for your answer on this before potentially responding to the rest, as I think it will help us pin down the crux.)
Philosophical truths are causally inefficacious, so we already know that there is a causal explanation for any philosophical belief you have that (one could characterize as) having “nothing to do with” the reasons why it is true. So if you accept that causal condition as sufficient for debunking, you cannot have any philosophical beliefs whatsoever.
Put another way: we should already be “questioning our beliefs”; spinning out a causal debunking story offers nothing new. It’s just an isolated demand for rigor, when you should already be questioning everything, and forming the overall most coherent belief-set you can in light of that questioning.
Compare my response to Parfit:
We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance. The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified. Parfit rejects this (p. 287):

> Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist’s flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.

I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins; the hypnotist makes no difference). We need to expose it to critical reflection in light of all else that we believe. Perhaps we will find that there is no basis for believing such incest to be wrong. Or perhaps we will find a basis after all (perhaps on indirect consequentialist grounds). Either way, what matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief. Parfit commits the genetic fallacy when he asserts that the causal origins “would cast grave doubt on the justifiability of these beliefs” (p. 288).

Note that “philosophical reasoning” governs how we update our beliefs, iron out inconsistencies, etc. But the raw starting points are not reached by “reasoning” (what would you be reasoning from, if you don’t already accept any premises?). So your assumed contrast between “good philosophical reasoning” and “suspicious causal forces that undermine belief” would actually undermine all beliefs, once you trace them back to foundational premises.
The only way to actually maintain coherent beliefs is to make your peace with having starting points that were not themselves determined via a rational process. Such causal “debunking” gives us a reason to take another look at our starting points, and consider whether (in light of everything we now believe) we want to revise them. But if the starting points still seem right to us, in light of everything, then it has to be reasonable to stick with them whatever their original causal basis may have been.
Overall, the solution is just to assess the first-order issues on their merits. “Debunking” arguments are a sideshow. They should never convince anyone who shouldn’t already have been equally convinced on independent (first-order) grounds.
Imagine you and I have laid out all the possible considerations for and against reducing X-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth?
(This, on its own, doesn’t prove anything about whether EDAs can ever help us; I’m just trying to pin down which assumption I’m making that you don’t or vice versa).
Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what’s in dispute. I don’t think there’s any neutral way to establish whose starting points are more intrinsically credible.)
Oh interesting.
> I don’t think there’s any neutral way to establish whose starting points are more intrinsically credible.
So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?
It depends what constraints you put on what can qualify as a “good reason”. If you think that a good reason has to be “neutrally recognizable” as such, then there’ll be no good reason to prefer any internally-coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren’t always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one is actually right about this—and since it isn’t independently verifiable which, there would seem an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)
I discuss this epistemic picture in a bit more detail in ‘Knowing What Matters’.