Re #6: The only object-level cause discussed is global poverty and health interventions. However, other object-level causes seem much more structurally similar to meta-level work.
This is definitely true for animal welfare, but in that case ACE takes it into account when making its recommendations, which defuses the trap. I’m not too familiar with X-risk organizations, so I don’t know to what extent it is true there; it seems plausible that it is an issue for X-risk organizations as well.
Re #7: Much of the impact from current work on X-risk plausibly derives from getting more valuable actors involved. Since there are several players in the EA X-risk space, this means that it may be hard to estimate which EA X-risk org caused more valuable actors to get involved in X-risk, just like it may be hard to estimate which EA meta-org caused EA movement growth. Thus this problem doesn’t seem to be unique to meta-orgs.
I would in fact count this as “meta” work—it would fall under “promoting effective altruism in the abstract”.
What you’re discussing is whether someone’s investment makes a difference, or whether what they’re trying to do would have occurred anyway.
My point is that an RCT proves to you that distributing bed nets in a certain situation causes a reduction in child mortality. There is no uncertainty about the counterfactual—that’s the whole point of the control. (Yes, there are problems with generalizing to new situations, and there can be problems with methodology, but it is still very good evidence.)
On the other hand, when somebody takes the GWWC pledge, you have next to no idea how much and where they would have donated had they not taken the pledge.
In both cases you can have concerns about funding counterfactuals (“What if someone else had donated and my donation was useless?”) but with meta work you often don’t even know the counterfactuals for the actual intervention you are implementing.
It thus seems to me the situation regarding X-risk is quite analogous to that regarding meta-work on this score, too, but I am not sure I have understood your argument.
Given what you say about future investment into X-risk, it makes sense that the situation is analogous for X-risk. I wasn’t aware of this.
To the contrary, meta-work can be a wise choice in the face of uncertainty about what the best cause is.
If you’ve spent a long time thinking about what the best cause is, and still have a lot of uncertainty, then I agree. The case I worry about is that people start doing meta work instead of thinking about cause prioritization, because that’s simply easier, and you get to avoid analysis paralysis. As an anecdatum, I think that’s partly happened to me.
Given what we know about human overconfidence, I think there is more reason to worry that people are overconfident in their estimates of the relative marginal expected value of object-level causes than that they withhold judgement for too long about which object-level cause is best.
Maybe, I’m not sure. It feels more to me like we should be worried about overconfidence once someone makes a decision, but I haven’t seriously thought about it.
I would in fact count this as “meta” work—it would fall under “promoting effective altruism in the abstract”.
I don’t think promoting X-risk should be counted as “promoting effective altruism in the abstract”.
My point is that an RCT proves to you that distributing bed nets in a certain situation causes a reduction in child mortality.
There are two kinds of issues here:
1) Does the intervention have the intended effect, or would that effect have occurred anyway?
2) Does the donation make the intervention occur, or would that intervention have occurred anyway (for replaceability reasons)?
Bed net RCTs help with the first question, but not with the second. For meta-work and X-risk, both questions are very tricky.
Yes, I agree.