Presumably it just depends upon how much greater impact the very top candidates have over the merely good? In many fields, I'd expect the top expert in the world to have vastly more impact than ten people who are at the 90th percentile of ability (i.e. just making the top 10%). And the world's richest person has much more wealth than ten people at the 90th percentile of wealth, etc.
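To see why this holds for heavy-tailed quantities, here's a minimal simulation sketch. The lognormal distribution and its sigma parameter are illustrative assumptions on my part, not a claim about any real impact data; the point is only that under a heavy tail, the single best draw can dwarf many 90th-percentile draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: "impact" is lognormally distributed.
# sigma is arbitrary; a heavier tail (larger sigma) makes the gap more extreme.
impact = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

top = impact.max()               # the single best candidate in the pool
p90 = np.quantile(impact, 0.90)  # a candidate just inside the top 10%

print(f"top candidate:              {top:,.0f}")
print(f"ten people at the 90th pct: {10 * p90:,.0f}")
print(f"ratio:                      {top / (10 * p90):.0f}x")
```

With these parameters the single maximum comes out roughly two orders of magnitude above ten 90th-percentile draws combined; swap in a thin-tailed (normal) distribution and the ratio collapses to well under 1x, which is exactly the contrast at issue in this thread.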
"How Much Does Performance Differ Between People?" by Max Daniel and Benjamin Todd goes into this.
Also there's a post on being "vetting-constrained" whose title I can't recall off the top of my head. The gist is that funders are risk-averse (not in the moral sense, but in the sense of relying on elite signals) because Program Officers don't have as much time or knowledge as they'd like for evaluating grant opportunities. So they rely more heavily on credentials than is ideal.
Thanks, this is the kind of source I'm excited about!
I agree that this is the key question. It's not clear to me that "effectiveness" scales superlinearly with "expertness". With things where aptitude is distributed according to a normal curve (maybe intelligence), I suspect the top 0.1% are not adding much more value than the top 1% in general.
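A quick sketch of why thin tails behave this way. The N(100, 15) scaling and the assumption that value added is proportional to aptitude are both illustrative, but the qualitative result holds for any normal curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: aptitude ~ N(100, 15) (IQ-like scaling),
# and value added is proportional to aptitude.
aptitude = np.sort(rng.normal(loc=100, scale=15, size=1_000_000))

top_1pct = aptitude[-10_000:]   # top 1% of the pool
top_01pct = aptitude[-1_000:]   # top 0.1% of the pool

print(f"mean of top 1%:   {top_1pct.mean():.1f}")
print(f"mean of top 0.1%: {top_01pct.mean():.1f}")
```

The two means come out around 140 and 151 respectively: under a thin-tailed distribution, the very top is only modestly better than the merely excellent, the opposite of the lognormal picture above.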
There are probably niche cases where having the top 0.1% really matters: situations where you are competing directly with other top people, like football leagues or TV stations paying millions for famous anchors.
But when I think about mainstream EA jobs: researchers, ops people, grantmakers, etc., it doesn't feel like paying 25% more than the going rate (and thereby shrinking your team by about 20% on the same budget) makes sense.
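The headcount arithmetic, made explicit (the budget and salary figures here are arbitrary placeholders):

```python
budget = 1_000_000      # hypothetical annual salary budget
going_rate = 100_000    # hypothetical market-rate salary

baseline = budget / going_rate           # 10.0 hires at the going rate
premium = budget / (going_rate * 1.25)   # 8.0 hires at a 25% premium

print(baseline, premium)  # a 25% pay premium costs 20% of headcount
```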
For research, at least, it probably depends on the nature of the problem: whether you can just "brute force" it with a sufficient amount of normal science, or if you need rare new insights (which are perhaps unlikely to occur for any given researcher, but are vastly more likely to be found by the very best).
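One way to frame that trade-off, with loudly made-up numbers: suppose a median researcher has a 0.1% chance per year of producing a breakthrough, while a top researcher has a 5% chance. Then even ten median hires are worse on this metric than the single star:

```python
# Hypothetical per-researcher annual breakthrough probabilities;
# both numbers are assumptions for illustration only.
p_median, p_top = 0.001, 0.05

n = 10  # hire ten median researchers for the price of one star?
p_team = 1 - (1 - p_median) ** n  # chance at least one breaks through

print(f"team of {n} median researchers: {p_team:.1%}")  # ~1.0%
print(f"one top researcher:             {p_top:.1%}")   # 5.0%
```

The comparison flips whenever the gap in per-person probability is smaller than the headcount ratio, which is exactly the "brute force vs. rare insight" distinction.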
Certainly within philosophy, I think quality trumps quantity by a mile. Median research has very little value. It's the rare breakthroughs that matter. Presumably funders think the same is true of, e.g., AI safety research.