When Ben asks people why they write for the EA Forum, they often say something like “because everyone reads the Forum”; N people each writing because N people will read each thing — that’s quadratic value.
I think both exponential and quadratic are too fast, although it’s still plausibly superlinear. You used N·log(N), which seems more reasonable.
Exponential seems pretty crazy (by the way, that link is broken; it looks like you double-pasted it). Surely the number of (impactful) subgroups isn’t growing this quickly.
Quadratic also seems unlikely. The number of people or things a person can (and is willing to) substantially interact with is capped, and the average EA will try to prioritize somewhat. So, once someone is at their limit and unwilling to raise it, the marginal value is whatever they get out of the marginal content minus the value of the attention they would otherwise have spent elsewhere.
As an example, consider hiring. Suppose you’re looking to fill exactly one position. Unless the marginal applicant is better than the average in expectation, you should expect decreasing marginal returns to increasing your applicant pool size. Say you’re looking to hire someone with some set of qualities (passing some thresholds, say), the extra applicant is as likely to have them as the average applicant, and each of N applicants has them independently with probability p. Then the probability of finding someone with those qualities is 1 − (1 − p)^N, which is bounded above by 1 and so grows even more slowly than log(N) for large enough N. Of course, the quality of your hire could also increase with a larger pool, so you could instead model this with the expected value of the maximum of iid random variables. The expected value of the max of bounded random variables is itself bounded above by their common upper bound. The expected value of the max of N iid uniform random variables over [0, 1] is N/(N+1) (source), so pretty close to constant. For the normal distribution, it’s roughly proportional to √log(N) (source).
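Here’s a quick numerical check of how slowly these quantities grow (a minimal Python sketch; the value p = 0.01 and the pool sizes are arbitrary choices for illustration, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.01  # hypothetical probability that a random applicant passes all thresholds

for n in [10, 100, 1_000, 10_000]:
    p_any = 1 - (1 - p) ** n  # probability at least one of n applicants qualifies
    max_unif = n / (n + 1)    # exact E[max] of n iid Uniform(0, 1) draws
    # E[max] of n iid standard normal draws, estimated by simulation;
    # grows roughly like sqrt(log(n))
    max_norm = rng.standard_normal((1_000, n)).max(axis=1).mean()
    print(f"n={n:>6}: P(any qualifies)={p_any:.3f}, "
          f"E[max uniform]={max_unif:.4f}, E[max normal]~{max_norm:.2f}")
```

P(any qualifies) saturates near 1, E[max uniform] creeps toward 1, and E[max normal] inches up like √log(n), so all three grow far more slowly than linearly in N.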
It should be similar for connections and posts, if you’re limiting the number of people/posts you substantially interact with and don’t increase that limit with the size of the community.
Furthermore, I expect the marginal post to be worse than the average, because people prioritize what they write. Also, I think some EA Forum users have had the impression that the quality of the posts and discussion has decreased as the number of active EA Forum members has increased. This could mean the value of the EA Forum for the average user decreases with the size of the community.
Similarly, the extra community members reached by marginal outreach work could be decreasingly dedicated to EA work (potentially causing value drift and making things worse for the average EA, with grifters and bad actors at the extreme), or they could generally be lower-priority targets for outreach based on their expected contributions or the costs of bringing them in.
Brand recognition or reputation could be a reason to expect the extra applicants to EA jobs to be better than the average ones, though.
Brand recognition can help get things done, and larger groups have more brand recognition
Is growing the EA community a good way to increase useful brand recognition? The EA brand seems less important than the brands of specific organizations if you’re trying to do things like influence policy or attract talent.
Thanks Michael! This is a great comment. (And I fixed the link, thanks for noting that.)
My anecdotal experience with hiring is that you are right asymptotically, but not practically. E.g., if you want to hire for some skill that only one in 10,000 people has, you get approximately linear returns to growth at the sizes of community that EA is considering (see the sketch below).
And you can get to very low probabilities easily: most jobs are looking for candidates with a combination of a somewhat rare skill, willingness to work in an unusual cause area, willingness to work in a specific geographic location, etc., and multiplying these probabilities together gets small quickly.
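To make the near-linearity concrete, here’s a minimal Python sketch (the one-in-10,000 rate is from the comment above; the pool sizes are made up for illustration):

```python
p = 1 / 10_000  # skill held by roughly one in 10,000 people

for n in [100, 1_000, 5_000, 10_000]:
    exact = 1 - (1 - p) ** n   # exact probability of at least one qualified applicant
    linear = n * p             # naive linear approximation, valid while n*p is small
    print(f"N={n:>6}: exact={exact:.4f}, linear approx={linear:.4f}")
```

Returns stay close to linear until N gets within an order of magnitude or so of 1/p, which roughly covers the community sizes in question.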
It does feel intuitively right that there are diminishing returns to scale here though.
I would guess that for the biggest EA causes (other than EA meta/community), you can often hire people who aren’t part of the EA community. For animal welfare, there’s a much larger animal advocacy movement and far more veg*ns, although it’s probably harder to find people to work on invertebrate welfare, and maybe few economists. For technical AI safety, there are many ML, CS (and math) PhDs, although the most promising ones may not be cheap. Global health and biorisk are not unusual causes at all. Invertebrate welfare is pretty unusual, though.
However, for more senior/management roles, you’d want some value alignment to ensure they prioritize well and avoid causing harm (e.g. significantly advancing AI capabilities).