You’re right that basically everything defined in this list can be referred to as EA work itself. However, most of these things are what is referred to as “meta EA” as the term is used within the community.
I feel like the major innovation of EA is the idea that altruists can and should compare the value of different interventions (which you appear to consider meta-EA). In other words, EA is meta-altruism.
Meta EA is not limited to comparing causes or interventions. The examples given in the EA Movement Building section are more action-oriented, e.g. running a local group or field building. They are at a higher level of abstraction (one or more steps removed) from direct impact.
I agree that the USP of EA is the concept of cause neutrality / prioritisation. However, EA is more than just meta work—so some people may spend a little time on comparing, and then move on to direct work in the space (e.g. lobbying for animal rights, research on x-risk, implementing vaccine programs). I think meta work is sufficiently different that it’s worth mapping out the possible things you could do.
What these areas have in common is that they are indirect, as opposed to having some kind of abstract meta-ness property.
I think these things have a meta-ness property in the sense that they influence the structure / composition / nature of the EA movement. GPR research influences the causes we focus on, movement building affects the people within the movement and what they do. One influences the other.
For example, if research on cause prioritisation suggests we should prioritise AI Safety, movement builders may do active outreach to software engineers, thus changing the composition of the movement. Similarly, if fundraisers decide to fund certain cause areas, they may pull in new people who counterfactually wouldn’t have joined the movement. On the other side, if movement builders start to quickly grow specific profession-specific networks, then there may be interest in researching how people from, say, a political background can leverage their political capital—which may result in a very different prioritisation than if we are looking for causes that have the biggest funding gaps.
I think I had a similar confusion to bwildi, or more specifically I wondered while reading this what wouldn’t count as meta EA. Your comment helps clarify that, but I think there’s still an issue, which is essentially that (almost?) all impacts will only occur indirectly. Some examples:
“implementing vaccine programs” is unusually close to being direct, but arguably still indirect, as what we have in mind is probably something more organisational rather than literally being the person giving out the injections
“lobbying for animal rights” is of course only impactful if the lobbying changes policies, and then the policies change behaviours
likewise for “research on x-risk”
likewise for other policymaking or policy advising work (which came to mind as one of the candidates for “not meta EA” when reading this)
I still think there’s a useful category in this vicinity, which includes the examples you give but doesn’t include things like researching specific AI safety ideas or doing policy advising. But I don’t think that the definition you give by itself makes it clear what’s in and what’s out of scope.
I think maybe I’d see it as cleaner to have a concept for “building effective altruism”, and a concept for “global priorities research”, and then everything else (e.g. technical AI safety research, policy advising). Rather than trying to merge building effective altruism and global priorities research under the meta EA banner and then explain why those things fit together but everything else doesn’t fit as part of them.
(All that said, I found this post interesting, and I think this sort of mapping seems hard so I’m not saying I’d have done a better job.)
Vaguely relevant:
My attempt to distinguish “fundamental research” from “intervention research”
A framework distinguishing between values research, strategy research, intervention research, and implementation
(I think neither of those things attempts to explain where movement building fits in.)