I think we disagree. I’m not sure why you think that even for decisions with large effects one should only or mostly take into account specific facts or arguments, and am curious about your reasoning here.
I do think it will often be even more valuable to understand someone’s specific reasons for having a belief. However, (i) in complex domains, achieving a full understanding would be a lot of work, (ii) people usually have only incomplete insight into the specific reasons why they themselves hold a certain belief and may instead appeal to intuition, and (iii) in practice you only have so much time and thus can’t fully pursue every disagreement.
So yes, always stopping at “person X thinks that p” and never trying to understand why would be a poor policy. But never stopping at that seems infeasible to me, and I don’t see the benefits from always throwing away the information that X believes p in situations where you don’t fully understand why.
For instance, imagine I pointed a gun to your head and forced you to choose, right now, between two COVID mitigation policies for the US for the next 6 months. I offer to give you additional information of the type “X thinks that p”, with some basic facts about X but no explanation of why they hold this belief. Would you refuse to view that information? If someone else were in that situation, would you pay me not to give them this information? How much?
There is a somewhat different failure mode where person X’s view isn’t particularly informative compared to the views of other people Y, Z, etc., and so by considering just X’s view you give it undue weight. But I don’t think you’re talking about that?
I’m partly puzzled by your reaction because the basic phenomenon of deferring to the output of others’ reasoning processes without understanding the underlying facts or arguments strikes me as not unusual at all. For example, I believe that the Earth orbits the Sun rather than the other way around. But I couldn’t give you any very specific argument for this like “on the geocentric hypothesis, the path of this body across the sky would look like this”. Instead, the reason for my belief is that the heliocentric worldview is scientific consensus, i.e. epistemic deference to others without understanding their reasoning.
This also happens when the view in question makes a difference in practice. For instance, as I’m sure you’re aware, hierarchical organizations work (among other things) because managers don’t have to recapitulate every specific argument behind the conclusions of their direct reports.
To sum up, a very large amount of division of epistemic labor seems like the norm rather than the exception to me, just as for the division of manual labor. The main thing that seems somewhat unusual is making that explicit.
I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing “person X is skeptical of MIRI” in the “cons” column) and this parent comment (“imagine I pointed a gun to your head and… offer to give you additional information;” “never stopping at [person X thinks that p]”). I’m not arguing for entirely refusing to trust other people or to divide labor, as you implied there. I specifically object to giving weight to other people’s top-line views on questions where there’s substantial disagreement, based on your overall assessment of that particular person’s credibility / quality of intuition / whatever, separately from your evaluation of their finer-grained sub-claims.
If you are staking $5m on something, it’s hard for me to imagine a case where it makes sense to end up with an important node in your tree of claims whose justification is “opinions diverge on this but the people I think are smartest tend to believe p.” The reason I think this is usually bad is that (a) it’s actually impossible to know how much weight it’s rational to give someone else’s opinion without inspecting their sub-claims, and (b) it leads to groupthink/herding/information cascades.
As a toy example to illustrate (a): suppose that for MIRI to be the optimal grant recipient, it both needs to be the case that AI risk is high (A) and that MIRI is the best organization working to mitigate it (B). A and B are independent. The prior is P(A) = 0.5, P(B) = 0.5. Alice and Bob have observed evidence with a 9:1 odds ratio in favor of A, so think P(A) = 0.9, P(B) = 0.5. Carol has observed evidence with a 9:1 odds ratio in favor of B, so thinks P(A) = 0.5, P(B) = 0.9. Alice, Bob and Carol all have the same top-line view of MIRI (P(A and B) = 0.45), but the rational aggregation of Alice and Bob’s “views” is much less positive than the rational aggregation of Bob and Carol’s.
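To spell out the arithmetic, here is a quick Python sketch that pools evidence by multiplying odds ratios. I’m assuming Alice’s and Bob’s 9:1 observations are independent pieces of evidence; if they instead saw the same evidence, pooling them just leaves P(A and B) at 0.45, which makes the contrast with Bob and Carol even starker:

```python
# Toy model: A = "AI risk is high", B = "MIRI is the best org working on it".
# Prior odds are 1:1 on each claim; evidence arrives as odds (likelihood) ratios.

def pooled_probability(prior_odds, likelihood_ratios):
    """Multiply independent odds ratios into the prior odds; return a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Each individual's top-line view: P(A and B) = 0.9 * 0.5 = 0.45.
individual = pooled_probability(1, [9]) * pooled_probability(1, [])

# Pooling Alice + Bob: two independent 9:1 updates on A, nothing on B.
alice_bob = pooled_probability(1, [9, 9]) * pooled_probability(1, [])

# Pooling Bob + Carol: one 9:1 update on A and one on B.
bob_carol = pooled_probability(1, [9]) * pooled_probability(1, [9])

print(f"individual top-line view: {individual:.2f}")  # 0.45
print(f"Alice + Bob pooled:       {alice_bob:.2f}")   # ~0.49
print(f"Bob + Carol pooled:       {bob_carol:.2f}")   # 0.81
```

The point is just that identical top-line numbers (0.45 everywhere) can hide very different aggregate pictures (roughly 0.49 vs 0.81), depending on which sub-claims the evidence bears on.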
It’s interesting that you mention hierarchical organizations, because I think they usually follow a better process for dividing up epistemic labor, which is to assign different sub-problems to different people rather than averaging a large number of people’s beliefs on a single question. This works better because the sub-problems are more likely to be independent of each other, so aggregating the results doesn’t require as much communication / model-sharing.
In fact, when hierarchical organizations do the other thing, i.e. “brute force” aggregation of others’ beliefs in situations of disagreement, it usually indicates an organizational failure. In my own experience, I often see people do something a particular way, even though they disagree with it, because they think that’s my preference; but it turns out they had a bad model of my preferences (often because they observed a contextual preference in a different context) and would have been better off using their own judgment.
I think I perceive less of a difference between the examples we’ve been discussing, but after reading your reply I’m also less sure if and where we disagree significantly.
I read your previous claim as essentially saying “it would always be bad to include the information that some person X is skeptical about MIRI when making the decision whether to give MIRI a $5M grant, unless you understand more details about why X has this view”.
I still think this view basically commits you to refusing to see information of that type in the COVID policy thought experiment. This is essentially for the reasons (i)-(iii) I listed above: I think that in practice it will be too costly to understand the views of each such person X in more detail.
(But usually it will be worth it to do this for some people, for instance for the reason spelled out in your toy model. As I said: I do think it will often be even more valuable to understand someone’s specific reasons for having a belief.)
Instead, I suspect you will need to focus on the few highest-priority cases, and in the end you’ll end up with people X1,…,Xl whose views you understand in great detail, people Y1,…,Ym where your understanding stops at other fairly high-level/top-line views (e.g. maybe you know what they think about “will AGI be developed this century?” but not much about why), and people Z1,…,Zn of whom you only know the top-line view of how much funding they’d want to give to MIRI.
(Note that I don’t think this is hypothetical. My impression is that there are in fact long-standing disagreements about MIRI’s work that can’t be fully resolved or even broken down into very precise subclaims/cruxes, despite many people having spent probably hundreds of hours on this. For instance, in the writeups to their first grants to MIRI, Open Phil remark that “We found MIRI’s work especially difficult to evaluate”, and the most recent grant amount was set by a committee that “average[s] individuals’ allocations”. See also this post by Open Phil’s Daniel Dewey and comments.)
At that point, I think you’re basically in a similar situation. There is no gun pointed at your head, but you still want to make a decision right now, and so you can either throw away the information about the views of person Zi or use it without understanding their arguments.
Furthermore, I don’t think your situation with respect to person Yj is that different: if you take their view on “AGI this century?” into account for the decision whether to fund MIRI but have a policy of never using “bare top-level views”, this would commit you to ignoring the same information in a different situation, e.g. the decision whether to place a large bet on whether AGI will be developed this century (purely because what’s a top-level view in one situation will be an argument or “specific” fact in another); this seems odd.
(This is also why I’m not sure I understand the relevance of your point on hierarchical organizations. I agree that usually sub-problems will be assigned to different employees. But e.g. if I assign “AGI this century?” to one employee and “is MIRI well run?” to another employee, why am I justified in believing their conclusions on these fairly high-level questions but not justified in believing anyone’s view on whether MIRI is worth funding?)
Note that thus far I’m mainly arguing against a policy of not taking anyone’s top-level views into account. Your most recent claim involving “the people I think are smartest” suggests that maybe you mainly object to using a lot of discretion in choosing which particular people’s top-level views to use.
I think my reaction to this is mixed: On the one hand, I certainly agree that there is a danger involved here (e.g. in fact I think that many EAs defer too much to other EAs relative to non-EA experts), and that it’s impossible to assess with perfect accuracy how much weight to give to each person. On the other hand, I think it is often possible to assess this with limited but still useful accuracy, both based on subjective and hard-to-justify assessments of how good someone’s judgment has seemed in the past (cf. how senior politicians often work with advisors they’ve had a long working relationship with) and on crude objective proxies (e.g. ‘has a PhD in computer science’).
On the latter, you said that specifically you object to allocating weight to someone’s top-line opinion “separately from your evaluation of their finer-grained sub-claims”. If that means their finer-grained sub-claims on the particular question under consideration, then I disagree for the reasons explained so far. If that means “separately from your evaluation of any finer-grained sub-claim they ever made on anything”, then I agree more with this, though I still think this is both common and justified in some cases (e.g. if I learn that I have rare disease A for which specialists universally recommend drug B as treatment, I’ll probably happily take drug B without having ever heard of any specific sub-claim made by any disease-A specialist).
Similarly, I agree that information cascades and groupthink are dangers/downsides, but I think these will sometimes be outweighed by the benefits.