Imagine you and I have laid out all the possible considerations for and against reducing X-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth?
(This, on its own, doesn’t prove anything about whether evolutionary debunking arguments (EDAs) can ever help us; I’m just trying to pin down which assumption I’m making that you don’t, or vice versa.)
Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what’s in dispute. I don’t think there’s any neutral way to establish whose starting points are more intrinsically credible.)
Oh interesting.
> I don’t think there’s any neutral way to establish whose starting points are more intrinsically credible.
So do I have any good reason to favor my starting points (or judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?
It depends on what constraints you put on what can qualify as a “good reason”. If you think that a good reason has to be “neutrally recognizable” as such, then there’ll be no good reason to prefer any internally coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren’t always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one of us is actually right about this; and since it isn’t independently verifiable which, there would seem to be an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)
I discuss this epistemic picture in a bit more detail in ‘Knowing What Matters’.