This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get some practical feedback on how the model is similar to and different from your own model of how the real-world community behaves. Here are some of my independent impressions on the topic:
Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
Under certain conditions, there’s a trade-off between the accuracy of crowdsourced estimates (e.g. surveys on AI risk) and the widespread availability of decision-relevant current best guesses (cf. simulations of the “Zollman effect”).
Personally, I think simulations plausibly underestimate the effect. Think of it like doing Monte Carlo Tree Search over ideaspace: we want a certain level of randomness in deciding which branches of the tree to go down, and we arguably can’t achieve that randomness if we get stuck in certain paradigms due to the Einstellung effect (sorry for jargon). Communicating paradigms can be destructive of underdeveloped paradigms.
To increase the breadth of exploration over ideaspace, we can encourage “community bubbliness” among researchers (aka “small-world network”), where communication inside bubbles is high, and communication between them is limited. There’s a trade-off between the speed of research progress (for any given paradigm) and the breadth and rigour of the progress. Your preference for how to make this trade-off could depend on your view of AI timelines.
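To make that structure concrete, here is a minimal sketch (using networkx; the sizes and rewiring probability are purely illustrative, not a claim about the actual community) of two standard ways to wire a “bubbly” research community: tight cliques joined by a few links, and the classic Watts–Strogatz small-world construction.

```python
import networkx as nx

# Illustrative only: two standard "bubbly" wirings for a research community.
# connected_caveman_graph: tight cliques (bubbles) joined into a ring by single links.
# connected_watts_strogatz_graph: a ring lattice with a few random long-range
# rewirings -- the classic small-world construction (high clustering, short paths).
bubbles     = nx.connected_caveman_graph(5, 6)               # 5 bubbles of 6 researchers
small_world = nx.connected_watts_strogatz_graph(30, 4, 0.1)  # 30 researchers, sparse shortcuts

for name, g in [("caveman", bubbles), ("watts-strogatz", small_world)]:
    print(name,
          "clustering:", round(nx.average_clustering(g), 2),
          "avg path length:", round(nx.average_shortest_path_length(g), 2))
```

The relevant feature is high clustering inside bubbles with only a few between-bubble links, which is what lets separate paradigms develop somewhat independently before they have to compete.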
How much you should update on someone’s testimony depends on your trust function relative to that person. Understanding trust functions is one of the most underappreciated leverage points for improving epistemic communities and “raising sanity waterlines”, imo.
If a community has a habit of updating trust functions naively (e.g. increase or decrease your trust towards someone based on whether they give you confirmatory testimonies), it can lead to premature convergence and polarisation of group beliefs. And on a personal level, it can indefinitely lock you out of areas in ideaspace/branches on the ideatree you could have benefited from exploring. [Laputa example] [example 2]
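As a toy illustration of that failure mode (this is not the Laputa model itself, just a minimal sketch with made-up parameters): each agent nudges its credence toward a peer’s testimony in proportion to how much it trusts that peer, then naively raises or lowers that trust depending on whether the testimony confirmed what it already believed.

```python
import random

# Toy sketch, not the Laputa model: N agents hold a credence in H and a trust
# score for every peer. Each round an agent hears a yes/no testimony from a
# random peer, moves its credence toward it in proportion to trust, and then
# naively rewards confirmation / punishes disagreement in the trust score.
random.seed(0)
N_AGENTS, N_ROUNDS = 20, 200

credence = [random.uniform(0.3, 0.7) for _ in range(N_AGENTS)]
trust = [[0.5] * N_AGENTS for _ in range(N_AGENTS)]  # trust[i][j] = i's trust in j

for _ in range(N_ROUNDS):
    for i in range(N_AGENTS):
        j = random.choice([k for k in range(N_AGENTS) if k != i])
        testimony = credence[j] > 0.5                  # j reports its current lean on H
        agrees = testimony == (credence[i] > 0.5)

        # Move credence toward the testimony, weighted by trust in the source.
        target = 1.0 if testimony else 0.0
        credence[i] += 0.1 * trust[i][j] * (target - credence[i])

        # Naive trust update: confirmation raises trust, disagreement lowers it.
        delta = 0.05 if agrees else -0.05
        trust[i][j] = min(1.0, max(0.0, trust[i][j] + delta))

# With this rule, credences drift to the extremes and the group tends to end in
# premature convergence or in polarised, mutually distrusting camps -- there is
# no mechanism pulling anyone back toward the evidence.
print(sorted(round(c, 2) for c in credence))
```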
Committing to only updating trust functions based on direct evidence of reasoning ability and sincerity, and never on object-level beliefs, can be a useful start. But all evidence is entangled, and personally, I’m ok with locking myself out of some areas in ideaspace because I’m sufficiently pessimistic about there being any value there. So I will use some object-level beliefs as evidence of reasoning ability and sincerity, and therefore use them to update my trust functions.
Deferring to academic research can have the bandwidth problem[1] you’re talking about, and this is especially a problem when the research has been optimised for non-EA-relevant criteria. Holden’s History is a good example: he shouldn’t defer to expert historians on questions related to welfare throughout history, because most academics are optimising their expertise for entirely different things.
Deferring to experts can also be a problem when experts have been selected for their beliefs to some extent. This is most likely true of experts on existential risk.
Deferring to community members you think know better than you is fairly harmless if no one defers to you in turn. I think a healthy epistemic community has distinct roles for people to play in each area of expertise:
Decision-maker: If you make really high-stakes decisions, you should use all the evidence you can, testimonial or otherwise, in order to make better decisions.
Expert: Your role is to be safe to defer to. You realise that crowdsourced expert beliefs provide more value to the community if you try to maintain the purity of your independent impressions, so you focus on technical evidence and you’re very reluctant to update on testimonial evidence even from other experts.
Explorer: If most of your contributions come from novel ideas, perhaps consider taking risks by exploring neglected areas of ideaspace, at the cost of potentially making your independent impressions less accurate on average than the wisdom of the crowd.
Honestly, my take on the EA community is that it’s surprisingly healthy. It wouldn’t be terrible if EA kept doing whatever it’s doing right now. I think it ranks unreasonably high among the possible ways of arranging epistemic communities. :p
I like this term for it! It’s better than calling it the “Daddy-is-a-doctor problem”.
[Without implying I agree with everything …]
This comment was awesome, super high density of useful stuff. I wonder if you’d consider making it a top level post?
Thanks<3
Well, I’ve been thinking about these things precisely in order to make top-level posts, but then my priorities shifted because I ended up thinking that the EA epistemic community was doing fine without my interventions, and all that remained in my toolkit was cool ideas that weren’t necessarily useful. I might reconsider it. :p
Keep in mind that in my own framework, I’m an Explorer, not an Expert. Not safe to defer to.
On my impressions: relative to most epistemic communities, I think EA is doing pretty well. Relative to a hypothetical ideal, I think we’ve got a way to go. And I think it’s good enough to be worth spending perfectionist attention on trying to make excellent.
Some (controversial) reasons I’m surprisingly optimistic about the community:
1) It’s already geographically and social-network bubbly and explores various paradigms.
2) The social status gradient is aligned with deference at the lower levels and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where it’s likely to improve opinions), and the top level tries to avoid conforming, there’s a status push towards exploration and confidence in independent impressions.
3) As long as deference is mostly unidirectional (downwards in social status) there are fewer loops/information cascades (less double-counting of evidence), and epistemic bubbles are harder to form and easier to pop (from above). And social status isn’t that hard to attain for conscientious smart people, I think, so smart people aren’t stuck at the bottom where their opinions are under-utilised? Idk.
Probably more should go here, but I forget. The community could definitely be better, and it’s worth exploring how to optimise it (any clever norms we can spread about trust functions?), so I’m not sure we disagree except you happen to look like the grumpy one because I started the chain by speaking optimistically. :3
Hi Emrik, wow, I thought this was a genuinely great comment deserving of its own top-level post. From your response to Owen above and your recent lack of top-level posting history it doesn’t seem like you’ll do it anytime soon, so I’m hoping to nudge you to reconsider just in case you’ve warmed to the idea since :) (of course feel free to say no)
Thank you for appreciating! 🕊️
Alas, I’m unlikely to prioritize writing except when I lose control of my motivations and I can’t help it.[1] But there’s nothing stopping someone else extracting what they learn from my other comments¹ ² ³ re deference and making post(s) from it, no attribution required.
(Arguably it’s often more educational to learn something from somebody who’s freshly gone through the process of learning it. Knowledge-of-transition can supplement knowledge-of-target-state.)
Haphazardly selected additional points on deference:
Succinctly, the difference between Equal-Weight deference and Bayes
“They say O(H)=1:2. | Then I can infer that they updated from 1:6 to 1:2 by multiplying with a likelihood ratio of 3:1. And because C and D, I can update on that likelihood ratio in order to end up with a posterior of O(H)=6:1. | The equal weight view would have me adjust down, whereas Bayes tells me to adjust up.”
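For concreteness, here is the arithmetic behind the quoted numbers in odds form (the listener’s own prior of 2:1 isn’t stated in the quote, but it’s what the 6:1 posterior and 3:1 likelihood ratio imply):

```python
from fractions import Fraction

# Odds form of Bayes, using the quoted numbers.
their_prior      = Fraction(1, 6)  # they started at O(H) = 1:6
their_posterior  = Fraction(1, 2)  # they now report O(H) = 1:2
likelihood_ratio = their_posterior / their_prior  # 3:1 -- the strength of their evidence

my_prior = Fraction(2, 1)          # my own odds on H (implied by the quote, not stated)
my_posterior = my_prior * likelihood_ratio

print(likelihood_ratio, my_posterior)  # 3 and 6: I end up at O(H) = 6:1, i.e. I adjust up,
# whereas equal-weight averaging toward their 1:2 would have me adjust down.
```

The point being that, under the stated conditions, you extract the evidence (the likelihood ratio) from their update rather than averaging posteriors.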
Paradox of Expert Opinion
“Ask the experts. They’re likely the most informed on the issue. Unfortunately, they’re also among the groups most heavily selected for belief in the hypothesis.”
It’s sort of paradoxical. As a result of my investigations into social epistemology 2 years ago, I came away with the conclusion that I ought to focus ~all my learning-efforts on trying to (recursively) improve my own cognition, with ~no consideration for my ability to teach anyone anything of what I learn. My motivation to share my ideas is an impurity that I’ve been trying hard to extinguish. Writing is not useless, but progress toward my goal is much faster when I automatically think in the language I construct purely to communicate with myself.
Thanks for the thoughtful & generous response and interesting links Emrik :) The natural cluster of questions that include deference has been on my mind ever since I learned about epistemic learned helplessness years ago, so I appreciate the pointers.
I confess to being a bit alarmed by your footnote. For reasoning transparency’s sake, would you be willing to share how you were led to the conclusion to turn inward? I have in my own way been trying to improve clarity of thought, although my reasons include an extrinsic component (e.g. I really like helping people figure out their problems, or fail productively in trying), and even the intrinsic component (clarity makes my heart sing) often points me outward (cf. steps 3 and 8 here) and can also look like teaching others. And I’ve noticed that both can speed up my progress greatly despite reducing time spent just thinking, the former akin to being Alice not Bob, and the latter in a way a bit like “pruning the branching factor” or making me realize I had been overlooking fruitful branches or just modeling the whole thing wrongly. This is the overall “vibe” from which I doubt the effectiveness of your inward turn.
But that’s admittedly not the real reason I’m writing this; my real reason echoes Julia’s comment.