You could try to model this by estimating how (i) talent needs and (ii) talent availability will be distributed if we further scale the community.
(i) If you assume the EA community grows, the mix of skillsets it needs will probably shift. E.g. you might believe that if the community grows by a factor of 10, we don’t need 10x as many people thinking about movement-building strategy (the size of the problem doesn’t scale linearly with the number of people) or 10x as much entrepreneurial skill (as the average org will be larger and more established); an increase by a factor of, say, 2-5 might suffice. On the other hand, you’d quite likely need ~10x as many ops people.
(ii) For talent availability, one could model the future distribution under one of the following assumptions:
1) Linearly scale the current talent distribution (i.e. assume that the distribution of skillsets in the future community will be the same as today’s).
2) Assume that the future talent distribution becomes more similar to a relevant reference class (e.g. the talent distribution of graduates from top universities).
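For concreteness, here is a minimal sketch of how one could encode this kind of model. All of the numbers (growth factor, demand-scaling factors, supply shares) are made-up placeholders rather than actual estimates, and the two supply scenarios correspond to assumptions 1) and 2) above.

```python
# Minimal sketch of the model described above. All numbers are hypothetical
# placeholders, not estimates.

community_growth = 10  # assume the community grows by a factor of 10

# Hypothetical demand-side scaling factors: how much more of each skillset
# is needed if the community grows 10x.
needs_scaling = {
    "ops": 10,               # scales roughly linearly with community size
    "movement_building": 3,  # sub-linear: the problem doesn't grow 10x
    "entrepreneurship": 3,   # sub-linear: orgs become larger and more established
    "management": 10,        # illustrative guess
}

# Hypothetical current share of each skillset in the community.
current_supply_share = {
    "ops": 0.10,
    "movement_building": 0.15,
    "entrepreneurship": 0.20,
    "management": 0.05,
}

# Hypothetical shares in a reference class, e.g. top-university graduates.
reference_class_share = {
    "ops": 0.15,
    "movement_building": 0.05,
    "entrepreneurship": 0.10,
    "management": 0.20,
}

def projected_gap(needs_scaling, baseline_share, supply_share, growth):
    """Ratio of projected need to projected supply for each skillset.

    Values > 1 suggest the skillset becomes relatively more scarce as the
    community scales; values < 1 suggest it becomes relatively less scarce.
    """
    gaps = {}
    for skill, factor in needs_scaling.items():
        projected_need = baseline_share[skill] * factor   # demand grows by `factor`
        projected_supply = supply_share[skill] * growth    # supply grows with the community
        gaps[skill] = projected_need / projected_supply
    return gaps

# Assumption 1): future supply mirrors the current talent distribution.
print(projected_gap(needs_scaling, current_supply_share, current_supply_share, community_growth))
# Assumption 2): future supply drifts towards the reference class.
print(projected_gap(needs_scaling, current_supply_share, reference_class_share, community_growth))
```

The interesting outputs are the skillsets whose need-to-supply ratio differs a lot between the two scenarios, since that is where the choice of assumption changes the career advice.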
A few conclusions I’d tentatively draw from this:
- A weak point against building skills in start-ups: if you’re great at this, start something now.
- A weak point in favour of building management skills, especially under assumption 1), less so under assumption 2).
- A weak point against specialising in areas where EA would really benefit from having just 2-3 experts but is unlikely to need many more (e.g. history, psychology, institutional decision-making, nanotech, geoengineering) if you’re also a good fit for something else, as we might find those experts along the way anyway.
- Especially under assumption 2), a weak point against working on biorisk (or investing substantially in building skills in bio) if you might be an equally good fit for technical AI safety: the maths/computer science to biology ratio at most universities is closer to 1:1 (see https://www.hesa.ac.uk/news/11-01-2018/sfr247-higher-education-student-statistics/subjects), but we probably want 5-10x as many people working on AI risk as on biorisk. [The naive view, based on the current talent distribution, might suggest working on bio rather than AI if you’re an equal fit, as the current AI:bio talent ratio in the community seems to be >10:1.]
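To make that arithmetic concrete (with made-up round numbers): say we’d ideally want about 7 people on technical AI safety for every 1 on biorisk. Under assumption 1), the supply ratio stays at today’s >10:1, which already exceeds 7:1, so bio looks like the relatively tighter bottleneck. Under assumption 2), the supply ratio drifts towards the roughly 1:1 maths/CS-to-biology ratio among graduates, so relative demand per person is ~7x higher on the AI side, and AI safety becomes the tighter bottleneck.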
All of this is less relevant if you heavily discount work done in 5-10 years relative to work done now.
I really like that idea. It might also be useful to check whether this model would have predicted past changes in career recommendations.