By “meta concerns”, do you mean stuff like base rate of interventions, risk of being wildly wrong, methodological errors/biases, etc.?
Hmm, I think those are concerns too, but I guess I was primarily thinking about meta-EA concerns like whether an intervention increases or decreases EA prestige, the willingness of new talent to work at EA organizations, etc.
Also, did you mean that these dominate when object-level impacts are big enough?
No. Sorry, I was maybe being a bit confusing with my language. I mean to say that when comparing two interventions, the meta-level impacts of the less effective intervention will dominate if you believe its object-level impact is sufficiently small.
Consider two altruistic interventions: direct AI Safety research and forecasting. Suppose that you did the analysis and think the object-level impact of AI Safety research is X (very high) and the impact of forecasting is only 0.0001X.
(This is just an example. I do not believe that the value of forecasting is 10,000 times lower than that of AI Safety research.)
I think it would then be wrong to conclude that the all-things-considered value of an EA doing forecasting is 10,000 times lower than the value of an EA doing direct AI Safety research, if for no other reason than that EAs doing forecasting have knock-on effects on EAs doing AI Safety.
If the object-level impacts of the less effective intervention are big enough, then it's less obvious that the meta-level impacts will dominate. If your analysis instead found forecasting to be 3x less impactful than AIS research, then I would actually have to present a fairly strong argument for why the meta-level impacts may still dominate, whereas I think it's much more self-evident at the 10,000x difference level.
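To make the 10,000x case concrete, here's a rough back-of-the-envelope sketch in Python. The 1% knock-on figure is purely a placeholder I'm assuming for illustration, not an estimate:

```python
# Back-of-the-envelope sketch; every number here is an illustrative assumption.
ais_object_value = 1.0           # object-level value of direct AI Safety research (X)
forecasting_object_value = 1e-4  # object-level value of forecasting (0.0001X)

# Assumed meta-level knock-on effect of EAs doing forecasting on EAs doing AI Safety
# (prestige, talent pipeline, epistemics, etc.), expressed as a fraction of X.
# Both the size and the sign are uncertain; 1% is just a placeholder magnitude.
meta_knock_on = 0.01 * ais_object_value

forecasting_all_things_considered = forecasting_object_value + meta_knock_on
print(ais_object_value / forecasting_all_things_considered)  # ~99, not 10,000
```

So even a modest meta-level term swamps the 0.0001X object-level term, and the all-things-considered ratio collapses to something on the order of 100x.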
Let me know if this is still unclear, happy to expand.
Oh, also a lot of my concerns (in this particular regard) mirror Brian Tomasik’s, so maybe it’d be easier to just read his post.
Thanks, much clearer! I’ll paraphrase the crux to see if I understand you correctly:
If the EA community is advocating for interventions X and Y, then more resources R going into Y leads to more resources going into X (within about R/10^2).
Is this what you have in mind?
Yes, though I'm strictly more confident about the absolute value of the change than about it being positive (so more resources R going into Y can also eventually lead to fewer resources going into X, again within about R/10^2).
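As a minimal sketch of what I mean (the resource amount and the 1/10^2 coupling are just assumed numbers):

```python
# Minimal sketch; units and numbers are arbitrary assumptions.
R = 1_000_000         # resources going into intervention Y
coupling = 1 / 10**2  # rough size of the spillover onto X, as a fraction of R

# The claim is about the magnitude of the spillover, not its sign:
# Y could push resources toward X or pull them away from it.
spillover_bound = coupling * R
print(f"resources going into X shift by roughly +/-{spillover_bound:.0f} at most")
```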
And the model is that increased resources into main EA cause areas generally affect the EA movement by increasing its visibility, diverting resources from that cause area to others, and bringing more people into professional contact with EA orgs/people, and that those general effects then trickle down to other cause areas?
Yes, that sounds right. There are also internal effects on framing/thinking/composition that by themselves have flow-through effects that are plausibly >1% in expectation.
For example, more resources going into forecasting may cause other EAs to be more inclined to quantify uncertainty and focus on the quantifiable, with both potentially positive and negative flow-through effects; more resources going into medicine- or animal-welfare-heavy causes will change the gender composition of EA; and so forth.
Thanks again for the clarification!
I think that these flow-through effects mostly apply to specific targets for resources that are more involved with the EA community. For example, I wouldn't expect more resources going into efforts by Tetlock to improve the use of forecasting in the US government to have visible flow-through effects on the community. Similarly, more resources going into AMF are not going to affect the community.
I think that this might apply particularly well to career choices.
Also, if these effects are as large as you think, it would be good to more clearly articulate what the most important flow-through effects are and how we can improve the positives and mitigate the negatives.
"I think that these flow-through effects mostly apply to specific targets for resources that are more involved with the EA community."
I agree with that.
"Also, if these effects are as large as you think, it would be good to more clearly articulate what the most important flow-through effects are and how we can improve the positives and mitigate the negatives."
I agree that people should do this carefully.
One explicit misunderstanding that I want to forestall is using this numerical reasoning to believe “oh cause areas don’t have >100x differences in impact after adjusting for meta considerations. My personal fit for Y cause area is >100x. Therefore I should do Y.”
This is because assigning causal credit for meta-level considerations is quite hard (harder than normal), and the relevant metrics may look very different from the object-level considerations within a cause area.
To get back to the forecasting example, continue to suppose forecasting is 10,000x less important than AI safety. Suppose further that high-quality research in forecasting has a larger effect in drawing highly talented people within EA towards doing forecasting/forecasting research than in drawing highly talented people outside of EA into EA. In that case, while high-quality research within forecasting is net positive on the object level, it's actually negative on the meta level.
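A toy version of that, with made-up headcounts and keeping the assumed 0.0001X value of forecasting:

```python
# Toy illustration of the sign flip; every number is a made-up assumption.
ais_value = 1.0           # value of one talented person doing AI Safety research (X)
forecasting_value = 1e-4  # value of one talented person doing forecasting (0.0001X)

object_level = forecasting_value  # the forecasting research itself: small but positive

pulled_from_ais = 2  # talented EAs the research draws from AI Safety into forecasting
drawn_into_ea = 1    # talented outsiders it draws into EA (assume they end up doing AI Safety)

meta_level = (drawn_into_ea * ais_value
              + pulled_from_ais * (forecasting_value - ais_value))

print(object_level, meta_level)  # ~0.0001 vs ~-1.0
```

Under these made-up numbers the work is positive on the object level but negative on the meta level (and negative overall).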
There might be other good reasons to pay more attention to personal fit than to naive cost-effectiveness, but the numerical argument for <=~100x differences between cause areas alone is not sufficient.