I agree with your second and third arguments and your two rules of thumb. (And I thought about those second and third arguments when posting this and felt tempted to note them, but ultimately decided not to, in order to keep this more concise and keep chugging with my other work. So I'm glad you raised them in your comment.)
I partially disagree with your first argument, for three main reasons:
People have very different comparative advantages (in other words, people's labour is way less fungible than their donations).
Imagine Alice's independent impression is that X is super important, but she trusts Bob's judgement a fair bit and knows Bob thinks Y is super important, and Alice is way more suited to doing Y. Meanwhile, Bob trusts Alice's judgement a fair bit. And they both know all of this. In some cases, it'll be best from everyone's perspective if Alice does Y and Bob does X. (This is sort of analogous to moral trade, but here the differences in views aren't just moral.)
Not in all cases! Largely for the other two reasons you note. All else held constant, it's good for people to work on things they themselves really understand and buy the case for. But I think this can be outweighed by other sources of comparative advantage.
As another analogy, imagine how much the economy would be impeded if people decided whether they overall think plumbing, politics, or physics research is the most important thing in general and then pursued that, regardless of their personal skill profiles.
I also think it makes sense for some people to specialise much more than others in working out what our all-things-considered beliefs should be on specific things.
Some people should do macrostrategy research, others should learn how US politics works and what we should do about that, others should learn about specific cause areas, etc.
I think it would be very inefficient and ineffective to try to get everyone to have well-informed independent impressions of all topics that are highly relevant to the question "What career/research decisions should I make?"
I think this becomes all the more true as the EA community grows, as we have more people focused on more specific things and on doing things (vs. more high-level prioritisation research and things like that), and as we move into more and more areas.
So I don't really agree that "our distribution of research and career decisions will look like the aggregate of everyone's independent impressions, which is a decent first approximation for what our all-things-considered belief should be as a community", or at least I don't think that's a healthy way for our community to be.
I think it's true that, "if everyone acts based on a similar all-things-considered belief, we could overweight the modal scenario" (emphasis added), but I think that need not happen. We should try to track the uncertainty in our all-things-considered beliefs, and we should take a portfolio approach.
(I wrote this comment quickly, and this is a big and complex topic where much more could be said. I really don't want readers to round this off as me saying something like "Everyone should just do what 80,000 Hours says without thinking or questioning it".)
"We should try to track the uncertainty in our all-things-considered beliefs, and we should take a portfolio approach."
Good points. It's not enough to just track the uncertainty; you also have to have visibility into current resource allocation. The "defer if there's an incentive to do so" idea helps here, because if there's an incentive, that suggests someone with such visibility thinks there is an under-allocation.