I think I had a similar confusion to bwildi, or more specifically I wondered while reading this what wouldn’t count as meta EA. Your comment helps clarify that, but I think there’s still an issue, which is essentially that (almost?) all impacts will only occur indirectly. Some examples:
- “implementing vaccine programs” is unusually close to being direct, but arguably still indirect, as what we have in mind is probably something more organisational rather than literally being the person giving out the injections
- “lobbying for animal rights” is of course only impactful if the lobbying changes policies, and then the policies change behaviours
- likewise for “research on x-risk”
- likewise for other policymaking or policy advising work (which came to mind as one of the candidates for “not meta EA” when reading this)
I still think there’s a useful category in this vicinity, which includes the examples you give but doesn’t include things like researching specific AI safety ideas or doing policy advising. But I don’t think that the definition you give by itself makes it clear what’s in and what’s out of scope.
I think maybe I’d see it as cleaner to have a concept for “building effective altruism”, and a concept for “global priorities research”, and then everything else (e.g. technical AI safety research, policy advising), rather than trying to merge building effective altruism and global priorities research under the meta EA banner and then explaining why those things fit together but everything else doesn’t fit as part of them.
(All that said, I found this post interesting, and I think this sort of mapping seems hard so I’m not saying I’d have done a better job.)
Vaguely relevant:
- My attempt to distinguish “fundamental research” from “intervention research”
- A framework distinguishing between values research, strategy research, intervention research, and implementation
(I think neither of those things attempts to explain where movement building fits in.)