And the model is that increased resources into main EA cause areas generally affects the EA movement by increasing its visibility, diverting resources from that cause area to others, and bringing in more people in professional contact with EA orgs/people—those general effects trickle down to other cause areas?
Yes, that sounds right. There are also internal effects on framing/thinking/composition that by themselves have flow-through effects that are plausibly >1% in expectation.
For example, more resources going into forecasting may cause other EAs to be more inclined to quantify uncertainty and focus on the quantifiable, with both potentially positive and negative flow-through effects; more resources going into medicine- or animal-welfare-heavy causes will change the gender composition of EA; and so forth.
Thanks again for the clarification! I think that these flow-through effects mostly apply to specific targets for resources that are more involved with the EA community. For example, I wouldn’t expect more resources going into efforts by Tetlock to improve the use of forecasting in the US government to have visible flow-through effects on the community. Similarly, more resources going into AMF are not going to affect the community.
I think that this might apply particularly well to career choices.
Also, if these effects are as large as you think, it would be good to articulate more clearly what the most important flow-through effects are and how we can improve the positives and mitigate the negatives.
I think that these flow-through effects mostly apply to specific targets for resources that are more involved with the EA community.
I agree with that.
Also, if these effects are as large as you think, it would be good to articulate more clearly what the most important flow-through effects are and how we can improve the positives and mitigate the negatives.
I agree that people should do this carefully.
One misunderstanding that I want to explicitly forestall is using this numerical reasoning to conclude: “Cause areas don’t have >100x differences in impact after adjusting for meta considerations. My personal fit for cause area Y is >100x. Therefore I should do Y.”
This is because causal attribution for meta-level considerations is quite hard (harder than normal), and the relevant metrics may look very different from the object-level considerations within a cause area.
To return to the forecasting example, continue to suppose forecasting is 10,000x less important than AI safety. Suppose further that high-quality forecasting research has a larger effect in drawing highly talented people within EA into forecasting/forecasting research than in drawing highly talented people from outside EA into EA. In that case, while high-quality forecasting research is net positive on the object level, it’s actually negative on the meta level.
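To make that concrete, here is a minimal back-of-envelope sketch in Python. Aside from the assumed 10,000x importance ratio, every number (talent units, fractions, counterfactuals) is a made-up illustration rather than an estimate anyone has defended here:

```python
# Back-of-envelope sketch of the forecasting example above.
# Everything except the assumed 10,000x importance ratio is a made-up illustration.

AI_SAFETY_VALUE = 10_000   # value per unit of talent, relative to forecasting
FORECASTING_VALUE = 1      # forecasting assumed 10,000x less important

# Hypothetical effects of a strong forecasting research program:
outside_talent_drawn_into_ea = 1.0        # units of outside talent newly attracted to EA
share_of_new_talent_to_ai = 0.3           # fraction of them who eventually work on AI safety
ea_talent_diverted_to_forecasting = 2.0   # existing EA talent pulled into forecasting
share_diverted_from_ai = 0.5              # fraction who would otherwise have done AI safety

# Object level: the direct value of the forecasting work itself (positive).
object_level = (outside_talent_drawn_into_ea + ea_talent_diverted_to_forecasting) * FORECASTING_VALUE

# Meta level: movement-level recruitment gains minus losses from diverted talent.
meta_level = (
    outside_talent_drawn_into_ea * share_of_new_talent_to_ai * AI_SAFETY_VALUE
    - ea_talent_diverted_to_forecasting * share_diverted_from_ai * AI_SAFETY_VALUE
)

print(f"object-level value: {object_level:+,.0f}")  # +3
print(f"meta-level value:   {meta_level:+,.0f}")    # -7,000
```

With these numbers the diversion term dominates because the diverted talent’s counterfactual is assumed to be AI safety work worth 10,000x more, which is the whole force of the example.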
There might be other good reasons to pay more attention to personal fit than to naive cost-effectiveness, but the numerical argument that differences between cause areas are at most ~100x is not, by itself, sufficient.
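For what it’s worth, the naive arithmetic being warned against looks something like the following sketch; the 150x fit multiplier and the attribution caveat in the comments are illustrative assumptions, not claims from this thread:

```python
# Illustrative only: the naive calculation the comment warns against.
CAUSE_GAP_AFTER_META = 100   # assumed ceiling on cause-area differences after meta adjustments
FIT_Y = 150                  # hypothetical personal-fit multiplier for cause area Y
FIT_TOP = 1                  # baseline fit for the top cause area

naive_top_cause = CAUSE_GAP_AFTER_META * FIT_TOP   # 100
naive_cause_y = 1 * FIT_Y                          # 150 -> "therefore do Y"
print(naive_top_cause, naive_cause_y)

# The catch: the <=~100x ceiling comes from attributing meta-level (movement) effects
# to marginal resources at the cause level. An individual's own meta-level contribution
# is assessed differently and can even be negative (as in the sketch above), so the
# fit multiplier and the cause-level ceiling can't simply be multiplied together.
```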