Hello John (and Michael - never quite sure how to manage these sorts of 'two to one' replies)
I would reject epistemic chauvinism. In cases where you disagree on P with your epistemic peer, and you take some set of object level reasons x, y, and z to support P, the right approach is to downgrade your confidence in the strength of these reasons rather than demote them from epistemic peerhood. I'd want to support that using some set of considerations about [2]: among others, the reference class where you demote people from peerhood (or superiority) on disagreement predictably goes much worse than the 'truly modest' one where you downgrade your confidence in the reasons that lead you to disagree (consider a typical crackpot who thinks the real numbers have the same cardinality as the naturals for whatever reason, and then infers from this disagreement that mathematicians are all fools).
For the supervaluation case, I don't know whether it is the majority view on vagueness, but pretend it was a consensus. I'd say the right thing in such a situation is to be a supervaluationist yourself, even if it appears to you that it is false. Indicting apparent peers/superiors for object level disagreement involves entrenchment, and so seems likely to go poorly.
In the AI case, I'd say you'd have to weigh up (which is tricky) degrees of expertise re. AI. I don't see it as a cost for my view to update towards the more sceptical AI researchers even if you don't think the object level reasons warrant it, as in plausible reference classes the strategy of going with the experts beats going with the non-expert opinion.
In essence, the challenge modesty would make is, 'Why do you back yourself to have the right grasp on the object level reasons?' Returning to a supervaluation consensus, it seems one needs to offer a story as to why the object level reasons that convincingly refute the view are not appreciated by the philosophers who specialise in the subject. It could be the case they're all going systematically wrong (and so you should demote them), but it seems more likely that you have mistaken the object level balance of reason. Using the former as an assumption looks overconfident.
What I take Sumner to be saying is that he does take the agnosticism you suggest he should; maybe something like this:
My impression is that my theory is right, but I don't believe my impression is more likely to be right than Paul Krugman's (or others'). So if you put a gun to my head and I had to give my best guess on economics, I would take an intermediate view, and not follow the theory I espouse. In my day-to-day work, though, I use this impression to argue in support of this view, so it can contribute to our mutual knowledge.
Of course, maybe you can investigate the object level reasons, per Michael's example. In the Adam and Beatrice case, Oliver could start talking to them about the reasons, and maybe find one of them isn't an epistemic peer to the other (or to him). Yet in cases where Oliver forms his own view about the object level considerations, he should still be modest across the impressions of Adam, Beatrice, and himself, for parallel reasons to the original case where he was an outsider (suppose we imagine Penelope, who is an outsider to this conversation, etc.).
Hi Greg,
So, your view is that it's ok to demote people from my peer group when I not only disagree with them about p but also have an explanation of why they would be biased that doesn't apply to me. And on your view their verdict on p could never be evidence of their bias. This last claim seems wrong in many cases.
Consider some obvious truth P (e.g. if a, then a; if a or b, then a and b can't both not be true; it's wrong to torture people for fun, etc.). Some other equally intelligent person and I have been thinking about P for an equal amount of time. I learn that she believes that not-P. It seems entirely appropriate for me to demote her in this case. If you deny this, suppose now we are deciding on some proposition Q and I knew only that she had got P wrong. As you would agree, her past performance (on P) is pro tanto reason to demote her with respect to Q. How can it then not also be pro tanto reason to demote her with respect to P? [Aside: the second example of an obvious truth I gave is denied by supervaluationists.] In short, how could epistemic peerhood not be in part determined by performance on the object level reasons?
In some of these cases, it also seems that in order to justifiably demote, one doesn't need to offer an account of why the other party is biased that is independent of the object-level reasons.
A separate point: it seems that, today and historically, there are and have been pockets of severe epistemic error. E.g. in the 19th century, almost all of the world's most intelligent philosophers thought that idealism was true; a large chunk of political philosophers believe that public reason is true; I'm sure there are lots of examples outside philosophy.
In this context, selective epistemic exceptionalism seems appropriate for a community that has taken lots of steps to debias. There's still very good reason to be aware of what the rest of the epistemic community thinks and why they think it, and this is a (weaker) form of modesty.
Minor point: epistemic peer judgements are independent of whether you disagree with someone or not. I'm happy to indict people who are epistemically unvirtuous even if they happen to agree with me.
I generally think one should not use object level disagreement to judge peerhood, given the risk of entrenchment (i.e. everyone else thinks I'm wrong, so I conclude everyone else is wrong and an idiot).
For 'obvious truths' like P, there's usually a lot of tacit peer agreement in background knowledge. So it is the disagreement with you and these other people that provides some evidence for demotion, rather than the disagreement with you alone. I find it hard to disentangle intuitions where one removes this rider, and in these cases I'm not so sure whether steadfastness + demotion is the appropriate response. Demoting supervaluationists as peers re. supervaluationism because they disagree with you about it, for example, seems a bad idea.
In any case, almost by definition it would be extraordinarily rare for people we think are prima facie epistemic peers to disagree on something sufficiently obvious. In real-world cases where it's some contentious topic on which reasonable people disagree, one should not demote people based on their disagreement with you (or, perhaps, in these cases the evidence for demotion is sufficiently trivial that it is heuristically better ignored).
Modest accounts shouldn't be surprised by expert error. Yet being able to identify these instances ex post gives little steer as to what to do ex ante. Random renegade schools of thought assuredly have an even poorer track record. If it were the case that the EA/rationalist community had a good track record of outperforming expert classes in their field, that would be a good reason for epistemic exceptionalism. Yet I don't see it.