After thinking about it for a while, I'm still a bit puzzled by the rated-100 or rated-1000 plan changes and their expressed value in donor dollars. What exactly is the counterfactual here? As I read it, it seems to be based just on comparing against "the person not changing their career path". However, with some of the examples of the most valued changes, where people land in EA organizations, the counterfactual state "of the world" would seem to be "someone else doing similar work in a central EA organization". Since, as far as I know, the recruitment process for positions at central EA organizations is competitive, why not count as the real impact just the marginal improvement of the 80,000 Hours-influenced candidate over the next best candidate?
Another question: how do you estimate your uncertainty when valuing something as rated-n?
Hi Jan,
We basically just do our best to think about what the counterfactual would have been without 80k, and then subtract that from our impact. We tend to break this into two components: (i) the value of the new option compared to what they would have done otherwise, and (ii) the influence of others in the community, who might have brought about similar changes soon afterwards.
The value of their next best alternative matters a little less than it might first seem, because we think the impact of different options is fat-tailed; i.e. someone switching to a higher-impact option might well 2x or even 10x their impact, which means you only need to reduce the estimate by 10–50%. That is a comparatively small adjustment given the other huge uncertainties.
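As a rough illustration of this arithmetic (the numbers below are hypothetical, not 80k's actual figures), the counterfactual adjustment is just the old option's value subtracted from the new option's, so the fraction of the naive estimate you remove shrinks as the impact multiplier grows:

```python
# Hypothetical sketch: how the counterfactual adjustment shrinks
# when the new option's impact is a multiple of the old one.

def adjusted_impact(new_option_value, old_option_value):
    """Net impact after subtracting the counterfactual (the old option)."""
    return new_option_value - old_option_value

for multiplier in (2, 5, 10):
    new, old = multiplier, 1  # old option normalised to 1 unit of impact
    net = adjusted_impact(new, old)
    discount = old / new  # fraction of the naive estimate removed
    print(f"{multiplier}x switch: net impact {net}, estimate reduced by {discount:.0%}")
```

A 2x switch implies a 50% discount and a 10x switch only a 10% discount, matching the 10–50% range above.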
As for the value of working at EA organisations: because they're talent-constrained, additional staff can have a big impact, even taking into account the fact that someone else could have been hired anyway. For more on this, see our recent talent survey: https://80000hours.org/2017/11/talent-gaps-survey-2017/ It showed that EA orgs highly value marginal staff, even accounting for replaceability.
Here is 80k’s mea culpa on replaceability.
Sure, but first 80k held that one's counterfactual impact is "often negligible" due to replaceability, and then changed position toward replaceability being "very uncertain" in general. I don't think you can just remove it from the model completely.
I also don't think that in the particular case of central EA organizations' hiring the uncertainty is as big as it is in general. I'm uncertain about this, but my vague impression is that there is usually a selection of good candidates to choose from when they are hiring.