Here are choice parts of my model of deference:
Whether you should defer or not depends not only on your estimate of relative expertise but also on what kind of role you want to fill in the community in order to increase its altruistic impact. I call this role-based social epistemology, and I really should write it up at length at some point.
You can think of the roles as occupying different points on the production possibilities frontier for the explore-exploit trade-off. If you think of rationality as an individual project, you might reason that you should aim for a healthy balance between exploring and exploiting due to potential diminishing returns to either one. But if you instead take the perspective of “how can I coordinate with my community in order to maximize the impact we produce?” you start to see why specializing could be optimal.
If you are a Decision-Maker, you’re optimizing for allocating resources efficiently (e.g. money, work, power), and the impact of your allocation depends on how accurate your relevant beliefs are. And because accurate beliefs are so important to your decisions, you should opportunistically defer to people whenever you think they might have better information than you (Aumann-agreement style), as long as you think you’re decently calibrated and the advice you’re deferring to has sufficient bandwidth. You should be Exploiting existing knowledge and expertise by deferring to it. But because you frequently defer to others, you may not be safe to defer to in turn: you risk feeding information cascades, which can be hard to correct.
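To see why a chain of deferrers degrades the public signal, here is a minimal Bikhchandani-style cascade simulation; this is a standard toy model and my own illustration, not something from the text above. Agents act in sequence, each rationally combining the public history of choices with one noisy private signal, and we compare the last agent’s accuracy against a counterfactual where everyone simply reports their raw signal:

```python
import random

def last_agent_correct(p: float = 0.7, n_agents: int = 30) -> bool:
    """Each agent sees all earlier public choices plus one private signal
    that matches the truth with probability p, then announces the option
    their posterior favors."""
    truth = 1
    net = 0          # inferred up-signals minus down-signals so far
    choice = truth
    for _ in range(n_agents):
        signal = truth if random.random() < p else 1 - truth
        if net >= 2:
            choice = 1        # public evidence outweighs any one signal: herd
        elif net <= -2:
            choice = 0        # herd the other way
        else:
            choice = signal   # inconclusive history: follow own signal,
            net += 1 if signal else -1  # and only these choices are informative
    return choice == truth

def majority_correct(p: float = 0.7, n_agents: int = 30) -> bool:
    """Counterfactual: everyone reports their raw signal and we vote."""
    correct_signals = sum(random.random() < p for _ in range(n_agents))
    return correct_signals > n_agents / 2

trials = 20_000
print("chain of deferrers:", sum(last_agent_correct() for _ in range(trials)) / trials)
print("raw signal sharing:", sum(majority_correct() for _ in range(trials)) / trials)
```

With these parameters the deference chain plateaus around 85% accuracy no matter how many agents join, while pooling raw signals approaches certainty: once a herd forms, the deferrers’ announcements stop carrying information.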
If you are an Explorer, your job is to optimize for the chance of discovering important insights that can help the community make progress on important open problems. This is a fundamentally different project from just trying to acquire accurate beliefs. Now you want, to some extent, to actively avoid ending up in the same belief states as other people. Notice that the problems are still open, which means that existing tools and angles of attack may be insufficient for the task. Evaluate paradigms/approaches for how neglected they are. Remember, it doesn’t matter whether you’re right about what other people are right about, as long as you are extremely right about what other people are wrong about. So if you want to maximize the chance that the community ends up solving the problem, you want to coordinate with other explorers to search separate parts of the idea-tree. What matters is that the right fruits are picked, not that you end up picking them. We’re in a parallel tree-search paradigm, and this has implications for how we individually should balance the explore-exploit trade-off.
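A toy sketch of the parallel-search point (my own illustration, with invented numbers): say the key insight sits in one of N branches of the idea-tree, and each of k explorers has time to examine m branches. Explorers who partition the tree strictly beat explorers who each sample branches independently, because independent sampling duplicates effort:

```python
import random

N, k, m, TRIALS = 100, 10, 5, 50_000   # branches, explorers, branches per explorer

def community_finds_it(coordinated: bool) -> bool:
    target = random.randrange(N)        # the branch hiding the key insight
    if coordinated:
        # Partitioned search covers k*m distinct branches; since the target
        # is uniform, which particular branches they are doesn't matter.
        searched = set(range(k * m))
    else:
        # Independent search: overlapping picks waste effort.
        searched = set()
        for _ in range(k):
            searched.update(random.sample(range(N), m))
    return target in searched

for coordinated in (False, True):
    hits = sum(community_finds_it(coordinated) for _ in range(TRIALS))
    print(f"coordinated={coordinated}: P(insight found) ~ {hits / TRIALS:.3f}")
```

Analytically, the uncoordinated hit rate here is 1 − (1 − m/N)^k ≈ 0.40 against k·m/N = 0.50 for the partition, and the gap widens if explorers’ picks are correlated (everyone drawn to the same fashionable branches) rather than uniform.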
If you are an Expert/Forecaster, your job is to acquire accurate beliefs that are safe to defer to. If there’s a difficult and important question (a crucial consideration) for which better forecasts could marginally improve the careers/donations of a lot of people, this can be an important way to produce impact. Your impact here depends on the accuracy of your beliefs, so unlike the Explorer, you don’t have strong reasons to avoid common belief states. Your impact also depends on how safe you are to defer to, because you can potentially do a lot of harm by reinforcing false information cascades. And these considerations are Newcomblike, so you should act by whichever rule maximizes community impact when followed by the proportion of other experts you predict will follow it for the same reasons you do. Sometimes that means reporting your independent impressions, and sometimes it means sharing and eliciting likelihood ratios instead of posterior beliefs. A common failure mode here is over-optimizing for making your beliefs legible, which in extreme cases turns into a race to the bottom, and in the median case turns into myopic empiricism, where you predictably go astray because you refuse to update on a large class of illegible (but Bayesian) evidence.
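To make the likelihood-ratio point concrete, here’s a minimal sketch with invented numbers. Three experts share a prior and observe independent evidence; averaging their posteriors re-counts the shared prior once per expert, while applying each expert’s likelihood ratio to the prior exactly once aggregates the evidence correctly:

```python
prior_odds = 1 / 9                    # shared prior: P(H) = 0.1
likelihood_ratios = [3.0, 2.0, 4.0]   # each expert's independent evidence for H

# Pooling likelihood ratios: the shared prior enters exactly once.
joint_odds = prior_odds
for lr in likelihood_ratios:
    joint_odds *= lr
print("pooled likelihood ratios:", joint_odds / (1 + joint_odds))      # ~0.73

# Naive pooling: each posterior already bakes in the prior, so averaging
# them triple-counts it and stays anchored near P(H) = 0.1.
posteriors = [prior_odds * lr / (1 + prior_odds * lr) for lr in likelihood_ratios]
print("average of posteriors:   ", sum(posteriors) / len(posteriors))  # ~0.25
```

The same logic is why “I’ll just update toward the average of everyone’s credences” can silently double-count whatever prior or evidence the group already holds in common.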
The limiting case of a Decision-Maker always reporting their independent impressions is (roughly) an Expert. But only insofar as it’s psychologically feasible to maintain a long-term separation between independent and all-things-considered impressions, and I have my doubts.
What kind of knowledge-work you want to do depends not only on your comparative advantages but also on your model of how the community produces altruistic impact. If, on your model, community impact is marginally bottlenecked by insights, you should probably consider aiming for ambitious insight-production. If, on the other hand, you think you can have more impact by contributing to marginally better forecasts about which problems are most important to work on, maybe consider aiming to produce deference-safe predictions. And if you just happen to have a bunch of money lying around, you don’t have the luxury of recklessly diverging from expert consensus, and you should use everything in your toolbox to make sure you’re allocating it efficiently.
No one is purely any one of these. The roles are separated by the optimization criteria they use, and you optimize for different things in different areas of your life, and over your lifetime. But I think it’s useful to carve out the roles so you can notice when you need to put which hat on, and what that implies for how you should play.
I found this to be an interesting way to think about this that I hadn’t considered before—thanks for taking the time to write it up.