If the arrow is from good in the world, this could increase the value of direct work and direct spending (and thus earning to give) relative to movement building. I can imagine setups where this might flip the conclusion, but I think that's fairly unlikely.
E.g., because of scope insensitivity, I don’t think potential movement participants would be substantially more impressed by $2N billion of GiveDirectly-equivalents of good per year vs. just $N billion.
If the arrow is from direct work, this increases the value of direct work relative to everything else, and our conclusions almost certainly still hold.
I imagine that Phil might have some other thoughts to share.
because of scope insensitivity, I don’t think potential movement participants would be substantially more impressed by $2N billion of GiveDirectly-equivalents of good per year vs. just $N billion
Agreed (though potential EAs may be more likely than most people to be impressed by that sort of thing), but I think the qualitative things we could accomplish would be impressive. For instance, if we funded a cure for malaria (or cancer, or …), I think that would be more impressive than if we funded some people trying to cure those diseases and none of them succeeded. I also think people are more likely to be attracted to AI safety if it seems like we’re making real headway on the problem.
This is a good point, and thanks for the comment.