Re#6: The only object-level cause discussed is global poverty and health interventions. However, other object-level causes seem much more structurally similar to meta-level work. For instance, this description would seem to hold true of much of the work on X-risk:
[They] typically have many distinct activities for the same goal. These activities can have very different cost-effectiveness. The marginal dollar will typically fund the activity with the lowest (estimated) cost-effectiveness, and so will likely be significantly less impactful than the average dollar.
Hence insofar as this is an issue (though see Rob’s and Ben’s comments) it’s not unique to meta-level work.
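This marginal-vs-average gap is easy to illustrate with a toy calculation (all figures below are invented for illustration, not drawn from any actual organization):

```python
# Toy model: an org runs three activities with different (invented)
# cost-effectiveness figures, and fills them in descending order of
# effectiveness as its budget grows.
activities = [
    ("activity_a", 10.0, 100_000),  # (name, value per dollar, funding capacity)
    ("activity_b", 4.0, 100_000),
    ("activity_c", 1.0, 100_000),
]

total_budget = 250_000
value, spent = 0.0, 0
for name, value_per_dollar, capacity in activities:
    amount = min(capacity, total_budget - spent)
    value += amount * value_per_dollar
    spent += amount

average_effectiveness = value / spent       # value produced per average dollar
marginal_effectiveness = activities[-1][1]  # the marginal dollar funds activity_c

print(average_effectiveness)   # 5.8
print(marginal_effectiveness)  # 1.0
```

On these invented numbers the average dollar looks almost six times as impactful as the marginal dollar, which is the gap the quoted passage describes.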
Re #7: Much of the impact from current work on X-risk plausibly derives from getting more valuable actors involved. Since there are several players in the EA X-risk space, this means that it may be hard to estimate which EA X-risk org caused more valuable actors to get involved in X-risk, just like it may be hard to estimate which EA meta-org caused EA movement growth. Thus this problem doesn’t seem to be unique to meta-orgs. (Also, I agree with Ben that one would like to see a detailed case arguing that these are actually problems, rather than just pointing out that they might be problems.)
This points to the fact that much of the work within object-level causes is “meta” in the sense that it concerns getting more people involved, rather than doing direct work. However, it is not “meta” in the sense used in this post. (Ben discussed this distinction in his reply to Hurford—see his remark on ‘second level meta’.)
Generally, I think that the discussion on “meta” vs “object-level” work would gain from more precise definitions and more conceptual clarity. I’m currently working on that.
Re#8
I think that with most object-level causes this is less of an issue. When RCTs are conducted, they eliminate the problem, at least in theory (though you do run into problems when trying to generalize from RCTs to other environments).
I don’t understand why global poverty and health RCTs (which I suppose is what you refer to) would make a difference. What you’re discussing is whether someone’s investment makes a difference, or whether what they’re trying to do would have occurred anyway. For instance, whether their donating to AMF leads to fewer people dying from malaria. I think that’s plausibly the case, but the question of RCTs vs other kinds of evidence—e.g. observational studies—seems orthogonal to that issue.
I think that this is a problem in far future areas (would the existential risk have happened, or would it have been solved anyway?), but people are aware of the problem and tackle it (research into the probabilities of various existential risks, looking for particularly neglected existential risks such as AI risk).
Current neglectedness of an existential risk is not necessarily a good guide to future neglectedness. Hence focussing on currently neglected risks does not guarantee that you have a large counterfactual impact.
I’m currently looking into the issue of future investment into X-risk and there doesn’t seem to be that much research done on it, so it’s not clear to me that people have tackled this problem. It’s generally very difficult.
It thus seems to me the situation regarding X-risk is quite analogous to that regarding meta-work on this score, too, but I am not sure I have understood your argument.
Re#3 (which you support, though you don’t comment on it further):
To the contrary, meta-work can be a wise choice in the face of uncertainty about what the best cause is. Meta-work is supposed to give you resources which can be flexibly allocated across a range of causes. This means that if we’re uncertain which object-level cause is best, meta-work might be our best choice (whereas confidence about which cause is best is a reason to work on that cause directly rather than doing meta-level work).
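The option value of flexible resources can be made concrete with a toy expected-value calculation (the scenario probabilities, per-unit values, and the 20% overhead below are all invented for illustration):

```python
# Two candidate causes; we don't yet know which will turn out better.
# Each scenario gives (probability, value per unit of resource for each cause).
scenarios = [
    (0.5, {"cause_x": 10.0, "cause_y": 1.0}),
    (0.5, {"cause_x": 1.0, "cause_y": 10.0}),
]

# Option 1: commit all resources to cause_x now (cause_y is symmetric).
committed = sum(p * values["cause_x"] for p, values in scenarios)

# Option 2: meta-work builds flexible resources at some overhead
# (assume 20% of value is lost to the indirection), then allocates
# them to whichever cause turns out best once we learn more.
overhead = 0.8
flexible = sum(p * overhead * max(values.values()) for p, values in scenarios)

print(committed, flexible)
```

On these invented numbers the flexible strategy wins despite its overhead, because it captures the better cause in every scenario; with low enough uncertainty (or high enough overhead) the comparison flips, matching the parenthetical caveat above.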
One of the more ingenious aspects of effective altruism is that it fits an uncertain world so well. If the world were easy to predict, there would be less need for a movement which can shift causes as we gather more evidence of what the top cause is. However, that is not the world we’re living in, as, e.g., the literature on forecasting shows.
Given what we know about human overconfidence, I think there is more reason to be worried that people are overconfident about their estimates of the relative marginal expected value of object-level causes, than that they withhold judgement of what object-level cause is best for too long.
Re#6: The only object-level cause discussed is global poverty and health interventions. However, other object-level causes seem much more structurally similar to meta-level work.
It is definitely true for animal welfare, but in this case ACE takes this into account when making its recommendations, which defuses the trap. I’m not too familiar with X-risk organizations so I don’t know to what extent it is true there—it seems plausible that it is also an issue for X-risk organizations.
Re #7: Much of the impact from current work on X-risk plausibly derives from getting more valuable actors involved. Since there are several players in the EA X-risk space, this means that it may be hard to estimate which EA X-risk org caused more valuable actors to get involved in X-risk, just like it may be hard to estimate which EA meta-org caused EA movement growth. Thus this problem doesn’t seem to be unique to meta-orgs.
I would in fact count this as “meta” work—it would fall under “promoting effective altruism in the abstract”.
What you’re discussing is whether someone’s investment makes a difference, or whether what they’re trying to do would have occurred anyway.
My point is that an RCT proves to you that distributing bed nets in a certain situation causes a reduction in child mortality. There is no uncertainty about the counterfactual—that’s the whole point of the control. (Yes, there are problems with generalizing to new situations, and there can be problems with methodology, but it is still very good evidence.)
On the other hand, when somebody takes the GWWC pledge, you have next to no idea how much and where they would have donated had they not taken the pledge.
In both cases you can have concerns about funding counterfactuals (“What if someone else had donated and my donation was useless?”) but with meta work you often don’t even know the counterfactuals for the actual intervention you are implementing.
It thus seems to me the situation regarding X-risk is quite analogous to that regarding meta-work on this score, too, but I am not sure I have understood your argument.
Given what you say about future investment into X-risk, it makes sense that the situation is analogous for X-risk. I wasn’t aware of this.
To the contrary, meta-work can be a wise choice in face of uncertainty of what the best cause is.
If you’ve spent a long time thinking about what the best cause is, and still have a lot of uncertainty, then I agree. The case I worry about is that people start doing meta work instead of thinking about cause prioritization, because that’s simply easier, and you get to avoid analysis paralysis. As an anecdatum, I think that’s partly happened to me.
Given what we know about human overconfidence, I think there is more reason to be worried that people are overconfident about their estimates of the relative marginal expected value of object-level causes, than that they withhold judgement of what object-level cause is best for too long.
Maybe, I’m not sure. It feels more to me like we should be worried about overconfidence once someone makes a decision, but I haven’t seriously thought about it.
I would in fact count this as “meta” work—it would fall under “promoting effective altruism in the abstract”.
I don’t think that promoting X-risk should be counted as “promoting effective altruism in the abstract”.
My point is that an RCT proves to you that distributing bed nets in a certain situation causes a reduction in child mortality.
There are two kinds of issues here:
1) Does the intervention have the intended effect, or would that effect have occurred anyway?
2) Does the donation make the intervention occur, or would that intervention have occurred anyway (for replaceability reasons)?
Bednet RCTs help with the first question, but not with the second. For meta-work and X-risk both questions are very tricky.
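The two questions can be combined in a back-of-the-envelope counterfactual estimate (all figures invented for illustration); an RCT pins down the first factor, but says nothing about the second:

```python
# Counterfactual value of a donation, decomposed into the two questions above.
# All figures are invented for illustration.

# Q1: does the intervention cause the effect, relative to no intervention?
# An RCT estimates this directly: the control arm pins down the baseline.
effect_if_implemented = 100.0      # e.g. deaths averted (RCT point estimate)
effect_without_intervention = 0.0  # baseline from the control arm

# Q2: does the donation make the intervention happen at all?
# No RCT answers this; it is a judgment call about replaceability.
p_counterfactual = 0.3  # chance the intervention would NOT have been funded anyway

impact = p_counterfactual * (effect_if_implemented - effect_without_intervention)
print(impact)
```

For meta-work, the point is that even the first factor lacks anything like an RCT estimate, so both factors must be guessed.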
Yes, I agree.