Hi Greg, thanks for this post, it was very good. I thought it would help future discussion to separate the following claims, between which your argument seems ambiguous:
1. You should give equal weight to your own credences and those of epistemic peers on all propositions for which you and they are epistemic peers.
2. Claims about the nature of the community of epistemic peers and our ability to reliably identify them.
In places you seem to identify modesty with 1; in others, with the conjunction of 1 and a subset of claims in 2. 1 doesn’t seem sufficient on its own for modesty, for if 1 is true but I have no epistemic peers or can’t reliably identify them, then I should pay lots of attention to my own inside view of an issue. Similarly, if EAs have no epistemic peers or superiors, then they should ignore everyone else. This is compatible with conciliationism but seems immodest. The relevant claim in 2 seems to be that for most people, including EAs, with beliefs about practically important propositions, there are epistemic peers and superiors who can be reliably identified.
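For concreteness, I’ll read 1 as the standard ‘equal weight’ rule of straight credence averaging; this is my own gloss, since your post doesn’t commit to a particular formalisation:

```latex
% Equal weight as straight averaging -- my gloss on claim 1, not necessarily
% the exact version Greg intends.
% On learning the credences c_1, ..., c_n of n epistemic peers in p, replace
% your own credence c_me(p) with the average of all n+1 credences:
\[
  c_{\mathrm{new}}(p) \;=\; \frac{1}{n+1}\Bigl( c_{\mathrm{me}}(p) + \sum_{i=1}^{n} c_i(p) \Bigr)
\]
% e.g. if I am at 0.9 on p and a lone peer is at 0.3, equal weight moves me to 0.6.
```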
This noted, I wonder how different the conjunction of 1 and 2 is from epistemic chauvinism. It seems to me that I could accept 1 and 2, but demote people from my epistemic peer group with respect to a proposition p if they disagree with me about p. If I have read all of the object-level arguments on p and someone else has as well and we disagree on p, then demotion seems appropriate at least in some cases. To give an example, I’ve read and thought about vagueness less than lots of much cleverer philosophers who hold a view called supervaluationism, which I believe to be extremely implausible. I believe I can explain why they are wrong with the object-level arguments about vagueness. I then receive the evidence that they disagree. Very good, I reply, they are not my epistemic peers with respect to this question, for object-level reasons x, y, and z. (Note that my reasons for demoting them are the object-level reasons, not the mere fact that I believe supervaluationism is false. The fact that I believe p is generally not my reason for believing that p.) This is entirely compatible with the view that I should be modest with respect to my epistemic peers.
In this spirit, I find Scott Sumner’s quote deeply strange. If he thinks that “there is no objective reason to favor my view over Krugman’s”, then he shouldn’t believe his view over Krugman’s (even though he (Sumner) does). If I were in Sumner’s shoes after reasoning about p and reading the object-level reasons about p, then I would either become agnostic or demote Krugman from my epistemic peer group.
I thought I’d offer up more object-level examples to try to push against your view. AI risk is a case in which EAs disagree with the consensus among numerous AI researchers and other intelligent people. In my view, a lot of the arguments I’ve heard from AI researchers have been very weak and haven’t shifted my credence all that much. But modesty here seems to push me toward the consensus to a greater extent than the object-level reasons warrant.
With respect to the question of AI risk, it seems to me that I should demote these people from my epistemic peer group because they disagree with me on the subject of AI risk. If you accept this, then it’s hard to see what difference there is between immodesty and modesty.
The difference between EAs and AI researchers on many object-level claims, like the probability that there will be an intelligence explosion and so on, is not very large. This survey demonstrates it: https://arxiv.org/abs/1705.08807
AI researchers are just more likely to make judgement calls like “anything less than ~10% likely to occur should be ignored” or “existential risks are not orders of magnitude more important than other things”.
The one major technical issue on which EAs might systematically differ from AI researchers is the validity of current research in addressing the problem.
Is there any data on how likely EAs think explosive progress after HLMI is? I would have thought they’d put it at more than 10%.
I would also have expected more debate about explosive progress, beyond just the recent Hanson-Yudkowsky flare-up, if there were as much doubt in the community as that survey suggests.
Gregory, thanks for writing this up. Your writing style is charming and I really enjoy reading the many deft turns of phrase.
Moving on to the substance, I think I share JH’s worries. What seems missing from your account is why people have the credences they have. Wouldn’t it be easiest just to go and assess the object-level reasons people have for their credences? For instance, with your Beatrice and Adam example, one (better?) way to make progress on finding out whether it’s an oak or not is to ask them for their reasons, rather than to ask them to state their credences and take those on trust. If Beatrice says “I am a tree expert but I’ve left my glasses at home so can’t see the leaves” (or something) whereas Adam gives a terrible explanation (“I decided every fifth tree I see must be an oak tree”), that would tell us quite a lot.
Perhaps we should defer to others either when we don’t know what their reasons are but need to make a decision quickly, or when we think they have the same access to the object-level reasons as we do (potential example: two philosophers who’ve read everything but still disagree).
Hello John (and Michael; never quite sure how to manage these sorts of ‘two to one’ replies).
I would reject epistemic chauvinism. In cases where you disagree on P with your epistemic peer, and you take some set of object-level reasons x, y, and z to support P, the right approach is to downgrade your confidence in the strength of these reasons rather than demote them from epistemic peerhood. I’d want to support that using some set of considerations about [2]: among others, that the reference class where you demote people from peerhood (or superiority) on disagreement predictably does much worse than the ‘truly modest’ one where you downgrade your confidence in the reasons that lead you to disagree (consider a typical crackpot who thinks the real numbers have the same cardinality as the naturals for whatever reason, and then infers from the disagreement that mathematicians are all fools).
For the supervaluationism case, I don’t know whether it is the majority view on vagueness, but pretend it were the consensus. I’d say the right thing in such a situation is to be a supervaluationist yourself, even if the view appears to you to be false. Indicting apparent peers/superiors for object-level disagreement involves entrenchment, and so seems to go poorly.
In the AI case, I’d say you’d have to weigh up (which is tricky) degrees of expertise re. AI. I don’t see it as a cost for my view to update towards the more sceptical AI researchers even if you don’t think the object-level reasons warrant it, as in plausible reference classes the strategy of going with the experts beats going with non-expert opinion.
In essence, the challenge modesty would make is, "Why do you back yourself to have the right grasp on the object-level reasons?" Returning to a supervaluationist consensus, it seems one needs to offer a story as to why the object-level reasons that convincingly refute the view are not appreciated by the philosophers who specialise in the subject. It could be that they are all going systematically wrong (and so you should demote them), but it seems more likely that you have mistaken the object-level balance of reason. Assuming the former looks overconfident.
What I take Sumner to be saying is that he does take the agnosticism you suggest he should, maybe something like this:
"My impression is that my theory is right, but I don’t believe my impression is more likely to be right than Paul Krugman’s (or others’). So if you put a gun to my head and I had to give my best guess on economics, I would take an intermediate view, and not follow the theory I espouse. In my day-to-day work, though, I use this impression to argue in support of this view, so it can contribute to our mutual knowledge."
Of course, maybe you can investigate the object-level reasons, per Michael’s example. In the Adam and Beatrice case, Oliver could start talking to them about the reasons, and maybe find one of them isn’t an epistemic peer to the other (or to him). Yet in cases where Oliver forms his own view about the object-level considerations, he should still be modest across the impressions of Adam, Beatrice, and himself, for parallel reasons to the original case where he was an outsider (suppose we imagine Penelope who is an outsider to this conversation, etc.).
Hi Greg,
So, your view is that it’s OK to demote people from my peer group when I not only disagree with them about p but also have an explanation of why they would be biased that doesn’t apply to me. And on your view their verdict on p could never be evidence of their bias. This last claim seems wrong in many cases.
Consider some obvious truth P (e.g. if a, then a; if a or b, then a and b can’t both not be true; it’s wrong to torture people for fun, etc.). Some other equally intelligent person and I have been thinking about P for an equal amount of time. I learn that she believes that not-P. It seems entirely appropriate for me to demote her in this case. If you deny this, suppose now we are deciding on some proposition Q and I know only that she got P wrong. As you would agree, her past performance (on P) is pro tanto reason to demote with respect to Q. How can it then not also be pro tanto reason to demote with respect to P? [Aside: the second example of an obvious truth I gave is denied by supervaluationists.] In short, how could epistemic peerhood not be in part determined by performance on the object-level reasons?
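For clarity, the first two examples can be put in propositional notation (just one natural rendering of the wording above):

```latex
% The two logical 'obvious truths' from the paragraph above, as written there:
\[
  a \to a, \qquad (a \lor b) \to \lnot(\lnot a \land \lnot b)
\]
% Both are classical tautologies, so the imagined disagreement is over
% something as obvious as a truth of logic.
```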
In some of these cases, it also seems that in order to justifiably demote, one doesn’t need to offer an account of why the other party is biased that is independent of the object-level reasons.
A separate point: it seems like today and historically there are and have been pockets of severe epistemic error. E.g. in the 19th century, almost all of the world’s most intelligent philosophers thought that idealism was true; a large chunk of political philosophers believe that public reason is true; I’m sure there are lots of examples outside philosophy.
In this context, selective epistemic exceptionalism seems appropriate for a community that has taken lots of steps to debias. There’s still very good reason to be aware of what the rest of the epistemic community thinks and why they think it, and this is a (weaker) form of modesty.
Minor point: epistemic peer judgements are independent of whether you disagree with them or not. I’m happy to indict people who are epistemically unvirtuous even if they happen to agree with me.
I generally think one should not use object-level disagreement to judge peerhood, given the risk of entrenchment (i.e. everyone else thinks I’m wrong, so I conclude everyone else is wrong and an idiot).
For ‘obvious truths’ like P, there’s usually a lot of tacit peer agreement in background knowledge. So someone who denies P disagrees not just with you but with this wider body of peers, which provides some evidence for demotion over and above their disagreeing with you alone. I find it hard to disentangle intuitions once one removes this rider, and in those cases I’m not so sure whether steadfastness + demotion is the appropriate response. Demoting supervaluationists as peers re. supervaluationism because they disagree with you about it, for example, seems a bad idea.
In any case, almost by definition it would be extraordinarily rare for people we think are prima facie epistemic peers to disagree on something sufficiently obvious. In real-world cases where it’s some contentious topic on which reasonable people disagree, one should not demote people based on their disagreement with you (or, perhaps, in these cases the evidence for demotion is sufficiently trivial that it is heuristically better ignored).
Modest accounts shouldn’t be surprised by expert error. Yet being able to identify these instances ex post gives little steer as to what to do ex ante. Random renegade schools of thought assuredly have an even poorer track record. If the EA/rationalist community had a good track record of outperforming expert classes in their own fields, that would be a good reason for epistemic exceptionalism. Yet I don’t see it.