Presumably it just depends upon how much greater impact the very top candidates have over the merely good? In many fields, I’d expect the top expert in the world to have vastly more impact than ten people who are at the 90th percentile of ability (i.e. just making the top 10%). And the world’s richest person has much more wealth than ten people at the 90th percentile of wealth, etc.
“How Much Does Performance Differ Between People” by Max Daniel and Benjamin Todd goes into this.
Also there’s a post on being “vetting-constrained” that I can’t recall off the top of my head. The gist is that funders are risk-averse (not in the moral sense, but in the sense of relying on elite signals) because program officers don’t have as much time or knowledge as they’d like for evaluating grant opportunities. So they rely on credentials more than would be ideal.
Thanks, this is the kind of source I’m excited about!
I agree that this is the key question. It’s not clear to me that “effectiveness” scales superlinearly with “expertness”. With things where aptitude is distributed according to a normal curve (maybe intelligence), I suspect the top 0.1% are not adding much more value than the top 1% in general.
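This can be checked for the idealized case. If aptitude is standard-normal, and we use standard deviations above the mean as a crude proxy for value added, the average member of the top 0.1% sits only about a quarter higher than the average member of the top 1%. A rough sketch using Python's `statistics` module (the normality assumption, and treating SDs of aptitude as the unit of value, are simplifications):

```python
from statistics import NormalDist

def mean_of_top(p):
    """Mean of the top fraction p of a standard normal.

    Uses the truncated-normal identity E[X | X > z] = pdf(z) / p,
    where z is the (1 - p) quantile.
    """
    nd = NormalDist()  # standard normal: mean 0, sd 1
    z = nd.inv_cdf(1 - p)
    return nd.pdf(z) / p

top_1pct = mean_of_top(0.01)    # average of the top 1%, in SDs above the mean
top_01pct = mean_of_top(0.001)  # average of the top 0.1%
print(top_1pct, top_01pct, top_01pct / top_1pct)
```

The ratio comes out around 1.26, so on this (very simplified) model the top 0.1% are only modestly better than the top 1%, consistent with the intuition above. Of course, if value is a convex function of aptitude rather than linear in it, the gap widens.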
There are probably niche cases where having the top 0.1% really matters: for example, situations where you are competing with other top people, like football leagues or TV stations paying millions of dollars for famous anchors.
But when I think about mainstream EA jobs (researchers, ops people, grantmakers, etc.), it doesn’t feel like paying 25% more than the going rate, and thereby reducing the size of your team by 20% on a fixed budget, makes sense.
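As a quick check of the fixed-budget arithmetic (the budget and salary figures below are made up for illustration): a 25% salary premium buys 1/1.25 = 80% of the hires, i.e. a 20% smaller team:

```python
# Hypothetical figures for illustration only.
budget = 1_000_000                 # fixed annual salary budget
going_rate = 100_000               # market salary for the role
premium_rate = going_rate * 1.25   # paying 25% over the going rate

baseline_team = budget / going_rate    # hires at the going rate
premium_team = budget / premium_rate   # hires at the premium
shrinkage = 1 - premium_team / baseline_team  # fraction of the team given up
print(baseline_team, premium_team, shrinkage)
```

So the relevant comparison is whether one hire at the 125% salary outperforms the 1.25 hires they displace.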
For research, at least, it probably depends on the nature of the problem: whether you can just “brute force” it with a sufficient amount of normal science, or if you need rare new insights (which are perhaps unlikely to occur for any given researcher, but are vastly more likely to be found by the very best).
Certainly within philosophy, I think quality trumps quantity by a mile. Median research has very little value. It’s the rare breakthroughs that matter. Presumably funders think the same is true of, e.g., AI safety research.