Admittedly, this somewhat merges the evolutionary debunking argument with the argument from widespread moral disagreement. In future posts, I will argue that we don’t need to envision aliens. Even among humans, we can observe unbridgeable disagreements between reasoners whose thinking is—as far as we can tell—maximally nuanced and versatile.)
It seems odd to me to suggest we have any examples of maximally nuanced and versatile reasoners. It seems like all humans are quite flawed thinkers.
I suspect this is a somewhat minor point, and that differences in moral views between humans who are quite smart and have reflected quite a bit are still sufficient to support certain important arguments. But if an argument was premised on the claim “we can observe unbridgeable disagreements between reasoners whose thinking is—as far as we can tell—maximally nuanced and versatile”, I think I’d be quite skeptical of that argument, at least until it’s shown that the argument holds given only a weaker version of that claim.
One example of why: I don’t think we yet have a compelling demonstration that, given something like coherent extrapolated volition, humans wouldn’t converge on the same set of values. So I think we need to rely on arguments, speculations, etc. for matters like that, rather than the answer already being very clear.
(Or maybe I misunderstood what you meant. And probably you’ll go into more detail in those future posts.)
It seems odd to me to suggest we have any examples of maximally nuanced and versatile reasoners. It seems like all humans are quite flawed thinkers.
Sorry, bad phrasing on my part! I didn’t mean to suggest that there are perfect human reasoners. :)
The context of my remark was this argument by Richard Yetter-Chappell. He thinks that as humans, we can use our inside view to disqualify hypothetical reasoners who don’t even change their minds in the light of new evidence, or don’t use induction. We can disqualify them from the class of agents who might be correctly predisposed to apprehend normative truths. We can do this because compared to those crappy alien ways of reasoning, ours feels undoubtedly “more nuanced and versatile.”
And so I’m replying to Yetter-Chappell that as far as inside-view criteria for disqualifying people from the class of promising candidates for the correct psychology go, we probably can’t find differences among humans that would rule out everyone except a select few reasoners who will all agree on the right morality. Insofar as we try to construct a non-gerrymandered reference class of “humans who reason in really great ways,” that reference class will still contain unbridgeable disagreement.
One example of why: I don’t think we yet have a compelling demonstration that, given something like coherent extrapolated volition, humans wouldn’t converge on the same set of values. So I think we need to rely on arguments, speculations, etc. for matters like that, rather than the answer already being very clear.
I haven’t yet made any arguments about this (because this is the topic of future posts in the sequence), but my argument will be that we don’t necessarily need a compelling demonstration, because we know enough about why people disagree to tell that they aren’t always answering the same question and/or paying attention to the same evaluation criteria.
Ok, that helps me see what you meant.

I still feel somewhat unsure what you mean by “unbridgeable disagreement”, and how we’d know that disagreements we observe are indeed unbridgeable rather than things that might go away given more idealisation or reflection or the like. (I’m also not saying I’m confident the disagreements we observe will go away with further idealisation etc.) But maybe future posts will address that.
And in relation to your last sentence, a quick thought is that perhaps, given more idealisation or reflection or the like, people would switch to answering the same questions, paying attention to the same evaluation criteria, etc. (But again, maybe future posts will address that.)
And yes, I didn’t mean to imply you had made arguments directly about coherent extrapolated volition yet—I just highlighted that as one reason why the lack of maximally nuanced and versatile reasoners to date seems potentially important.