My guess is that cause-neutral activities are 30-90% as effective as cause-specific ones (in terms of generating labor for that specific cause), which is remarkably high, but still less than 100%.
This isn't obvious to me. If you want to generate generic workers for your animal welfare org, sure, you might prefer to fund a vegan group. But if you want people who are good at making explicit tradeoffs, focusing on scope sensitivity, and being exceptionally truth-seeking, I would bet that an EA group is more likely to get you those people. And so it seems plausible that a donor who only prioritized animal welfare would still fund EA groups if they otherwise wouldn't exist.
On a related point, I would have been nervous (before GPT-4 made this concern much less prominent) about whether funding an AI safety group that mostly just talked about AI got more safety workers, or just got more people interested in working explicitly on AGI.
Relatedly: I expect that the margins change with differing levels of investment. Even if you only cared about AI safety, I suspect that the correct amount of investment in cause-general stuff is significantly non-zero, because you first get the low-hanging fruit of the people who were especially receptive to cause-general material, and so forth.
So it actually feels weird to talk about estimating these relative effectiveness numbers without talking about which margins we're considering them at. (However, I might be overestimating the extent to which these different buckets are best modelled as having distinct diminishing returns curves.)
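The margins point can be made concrete with a toy model. All curves and numbers below are invented for illustration (they are not estimates of anything): assume each outreach channel has a diminishing-returns recruitment curve, and compare the two channels at different budget levels.

```python
import math

def workers(budget, scale, ceiling):
    """Toy diminishing-returns curve: recruited workers approach
    `ceiling` as budget grows; `scale` sets how fast it saturates."""
    return ceiling * (1 - math.exp(-budget / scale))

# Made-up parameters: cause-specific outreach saturates quickly,
# cause-general outreach saturates later (for this one cause).
for b in (1, 5, 20):
    specific = workers(b, scale=5, ceiling=10)
    general = workers(b, scale=15, ceiling=8)
    print(f"budget={b:>2}: specific={specific:.2f}, "
          f"general={general:.2f}, ratio={general / specific:.2f}")
```

Under these assumptions the general-to-specific effectiveness ratio rises as the budget grows, because the specific channel exhausts its low-hanging fruit first. The point is only that a single "X% as effective" number is underspecified without saying which margin it describes.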
I agree. A funder interested in career changes in one cause area will probably reach only a subset of potential talent if they target people who are already interested in that cause area rather than generally capable individuals who could still choose their direction.
In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company is working in vs. buying one at a broad career fair of a top university. While the specialised university may supply more people who have trained in and are specialised in your area, you might still go for the top university: its talent may have greater overall potential, can pivot more easily, and can contribute in more general areas like leadership, entrepreneurship, or communications.
One aspect here is also the timeframe you're looking at. If we think of EA community building as talent development, and we're working with people who might have many years until they peak in their careers, then focussing on a specific cause area might be limiting. A funder who is interested in job changes in one cause area now can still see the value of a pipeline of generally capable people skilling up in different areas of expertise before becoming a good fit for a new role. The Open Phil EA/LT Survey touches on this, and Holden's post on career choices for longtermists similarly covers broader skills independent of cause area.
In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company is working in vs. buying one at a broad career fair of a top university. While the specialised university may supply more people who have trained in and are specialised in your area, you might still go for the top university: its talent may have greater overall potential, can pivot more easily, and can contribute in more general areas like leadership, entrepreneurship, or communications.
I think this is a spot-on analogy, and something we've discussed in our group a lot.
One additional cost of cause-specific groups is that once you brand yourself inside a movement, you get drawn into the politics of that movement. Other existing groups perceive you as a competitor for influence and activists, and hence become much less tolerant of differences in your approach.
For example, an animal advocacy group in my country that advocates for cultivated meat would frequently be bad-mouthed by other activists for not being a vegan group (because cultivated meat production uses some animal cells taken without consent).
My observation is that animal activists are much more lenient when an organisation doesn't brand itself as an "animal" organisation.
And so it seems plausible that a donor who only prioritized animal welfare would still fund EA groups if they otherwise wouldn't exist.
I think this is possibly correct but unfortunately not at a level to be cruxy: animal welfare groups just don't get that much funding to begin with, so even if animal welfare advocates valued EA groups a bit above animal welfare groups, it's still pretty low in absolute terms.
Yep, I think this is a good point, thanks!