Re (1): I mean, say we know that Alice is a pro-natalist 100% because this belief was evolutionarily advantageous for her ancestors (and 0% because of good philosophical reasoning). This would discredit her belief, right? This wouldn’t mean pro-natalism is incorrect. It would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened, luckily, to be “right for the wrong reasons”. Do you at least agree with this in this particular contrived example, or do you think that evolutionary pressures cannot ever be a reason to question our beliefs?
(Waiting for your answer on this before potentially responding to the rest as I think this will help us pin down the crux.)
Philosophical truths are causally inefficacious, so we already know that, for any philosophical belief you have, there is a causal explanation that one could characterize as having “nothing to do with” the reasons why it is true. So if you accept that causal condition as sufficient for debunking, you cannot have any philosophical beliefs whatsoever.
Put another way: we should already be “questioning our beliefs”; spinning out a causal debunking story offers nothing new. It’s just an isolated demand for rigor, when you should already be questioning everything, and forming the overall most coherent belief-set you can in light of that questioning.
We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance. The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified. Parfit rejects this (p.287):
> Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist’s flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.
I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins—the hypnotist makes no difference). We need to expose it to critical reflection in light of all else that we believe. Perhaps we will find that there is no basis for believing such incest to be wrong. Or perhaps we will find a basis after all (perhaps on indirect consequentialist grounds). Either way, what matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief. Parfit commits the genetic fallacy when he asserts that the causal origins “would cast grave doubt on the justifiability of these beliefs.” (288)
Note that “philosophical reasoning” governs how we update our beliefs, iron out inconsistencies, etc. But the raw starting points are not reached by “reasoning” (what would you be reasoning from, if you don’t already accept any premises?). So your assumed contrast between “good philosophical reasoning” and “suspicious causal forces that undermine belief” would actually undermine all beliefs, once you trace them back to foundational premises.
The only way to actually maintain coherent beliefs is to make your peace with having starting points that were not themselves determined via a rational process. Such causal “debunking” gives us a reason to take another look at our starting points, and consider whether (in light of everything we now believe) we want to revise them. But if the starting points still seem right to us, in light of everything, then it has to be reasonable to stick with them whatever their original causal basis may have been.
Overall, the solution is just to assess the first-order issues on their merits. “Debunking” arguments are a sideshow. They should never convince anyone who shouldn’t already have been equally convinced on independent (first-order) grounds.
Imagine you and I have laid out all the possible considerations for and against reducing X-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth?
(This, on its own, doesn’t prove anything about whether evolutionary debunking arguments (EDAs) can ever help us; I’m just trying to pin down which assumption I’m making that you aren’t, or vice versa.)
Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what’s in dispute. I don’t think there’s any neutral way to establish whose starting points are more intrinsically credible.)
> I don’t think there’s any neutral way to establish whose starting points are more intrinsically credible.
So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?
It depends what constraints you put on what can qualify as a “good reason”. If you think that a good reason has to be “neutrally recognizable” as such, then there’ll be no good reason to prefer any internally-coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren’t always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one is actually right about this—and since it isn’t independently verifiable which, there would seem an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)
I discuss this epistemic picture in a bit more detail in ‘Knowing What Matters’.