This looks good! One possible modification that I think would enhance the model is an arrow from “direct work” or “good in the world” to “movement building” – I’d imagine the movement will be much more successful at attracting new members if we’re seen as doing valuable things in the world.
Thanks! I agree that this might be another pretty important consideration, though I’d want to think a bit about how to model it in a way that feels relatively realistic and non-arbitrary.
E.g. maybe we should say people start out with a prior on a movement’s effectiveness at getting good things done, and, instead of just being deterministically “recruited”, they decide whether to contribute their labor and/or capital partly on the basis of their evaluation of that effectiveness, after updating on its track record.
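To make that concrete, here is a minimal sketch of how such a model could work. Everything in it (the Beta prior, the personal join threshold, the parameter values) is an illustrative assumption of mine, not part of any existing model:

```python
import random

def posterior_mean(successes: int, failures: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of the movement's effectiveness under a Beta(a, b)
    prior, after updating on its observed track record."""
    return (a + successes) / (a + b + successes + failures)

def decides_to_join(successes: int, failures: int, threshold: float) -> bool:
    """A potential recruit contributes labor/capital only if their posterior
    estimate of the movement's effectiveness clears their personal bar,
    so recruitment is probabilistic rather than deterministic."""
    return posterior_mean(successes, failures) > threshold

# Toy example: a movement with 8 visible wins and 2 visible failures
# attracts everyone whose bar is below its posterior mean of 0.75.
random.seed(0)
joiners = sum(
    decides_to_join(successes=8, failures=2, threshold=random.random())
    for _ in range(1_000)
)
print(f"{joiners} of 1000 candidates join")  # ~750 in expectation
```

The point of the sketch is just that “good in the world” enters recruitment through the observed track record (successes, failures), which is exactly the proposed arrow.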
A hacky solution is just to bear in mind that ‘movement building’ often doesn’t look like explicit recruitment, but can include many things that look a lot like object-level work.
We can then consider two questions:
What’s the ideal fraction to invest in movement building?
What are the highest-return movement building efforts? (where that might look like object-level work)
This would ignore the object-level value produced by the movement building efforts, but that would be fine unless the two are of comparable magnitude.
For most interventions, either the movement building effects or the object-level value is going to dominate, so we can just treat each intervention as one or the other.
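Returning to the first of those two questions, here is a toy simulation of the trade-off; the growth dynamics and every number in it are made up purely for illustration:

```python
def total_direct_work(frac_mb: float, years: int = 20, members: float = 100.0,
                      recruits_per_mb_year: float = 0.3) -> float:
    """Cumulative object-level output when a fixed fraction of members does
    movement building each year and the rest does direct work."""
    total = 0.0
    for _ in range(years):
        total += (1 - frac_mb) * members                      # object-level output this year
        members += frac_mb * members * recruits_per_mb_year   # compounding recruitment
    return total

# Sweep the movement-building fraction in steps of 0.1.
best = max((f / 10 for f in range(11)), key=total_direct_work)
print(f"Best toy allocation to movement building: {best:.1f}")  # 0.7 with these numbers
```

With these made-up numbers the optimum is interior (about 70% on movement building), but that is a statement about the toy dynamics, not a conclusion about the real allocation.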
I guess some sorts of earning to give may also attract new members. E.g. it wouldn’t surprise me if Sam Bankman-Fried’s work attracts some people to effective altruism.
More off-the-cuff thought:
I can imagine that feedback loop (good in the world → movement building) being important at the beginning. Arguably one reason changes of mind from global health & development to longtermism are so common is that longtermism has good arguments in principle but no big tangible wins to its name, so it’s better at convincing people who already pay attention to it (having been drawn to EA by global health & development’s big wins) than at convincing people directly.
But even in that case, if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism’s big pot of money and using some of its labor for direct work.
“if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism’s big pot of money and using some of its labor for direct work”
I agree – I think the practical implication is more “this consideration updates us towards funding/allocating labor towards direct work over explicit movement building” and less “this consideration updates us towards E2G over direct work/movement building”.
This is a good point, and thanks for the comment.
If the arrow is from “good in the world”, this could increase the value of direct work and direct spending (and thus earning to give) relative to movement building. I can imagine setups where this might flip the conclusion, but I think that would be fairly unlikely.
E.g., because of scope insensitivity, I don’t think potential movement participants would be substantially more impressed by $2N billion of GiveDirectly-equivalents of good per year vs. just $N billion.
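As a toy way of putting that (my framing, not anything from the model): if perceived impressiveness grows only logarithmically with output, doubling output adds a constant bump regardless of scale:

```latex
\[
I(x) = \log x
\quad\Longrightarrow\quad
I(2N) - I(N) = \log 2 \approx 0.69 \quad \text{for any } N.
\]
```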
If the arrow is from “direct work”, this increases the value of direct work relative to everything else, and our conclusions almost certainly still hold.
I imagine that Phil might have some other thoughts to share.
“because of scope insensitivity, I don’t think potential movement participants would be substantially more impressed by $2N billion of GiveDirectly-equivalents of good per year vs. just $N billion”
Agree (though potential EAs may be more likely than most people to be impressed by that stuff), but I think qualitative wins we could accomplish would be impressive. For instance, if we funded a cure for malaria (or cancer, or …), I think that would be more impressive than if we funded some people trying to cure those diseases and none of them succeeded. I also think people are more likely to be attracted to AI safety if it seems like we’re making real headway on the problem.