I agree that this is the key question. It’s not clear to me that “effectiveness” scales superlinearly with “expertness”. With things where aptitude is distributed according to a normal curve (maybe intelligence), I suspect the top 0.1% are not adding much more value than the top 1% in general.
There are probably niche cases where having the top 0.1% really really matters. For example, in situations where you are competing with other top people, like football leagues or TV stations paying $millions for famous anchors.
But when I think about mainstream EA jobs: researchers, ops people, grantmakers, etc., it doesn’t feel like paying 25% more than the going rate (and thereby, on a fixed budget, reducing the size of your team by 20%) makes sense.
For research, at least, it probably depends on the nature of the problem: whether you can just “brute force” it with a sufficient amount of normal science, or if you need rare new insights (which are perhaps unlikely to occur for any given researcher, but are vastly more likely to be found by the very best).
Certainly within philosophy, I think quality trumps quantity by a mile. Median research has very little value. It’s the rare breakthroughs that matter. Presumably funders think the same is true of, e.g., AI safety research.