It seems right to me that strategic coordination and better communication among various EA nodes are essential. I’d love to see more thinking and action on “improving movement coordination” of the kind you are demonstrating, so thank you. Maybe we could host some collaborative events with complexity folks at EA Toronto? Though I imagine a wiser first step is learning what higher-complexity strategic thinking is currently being done. What does Systems Change in EA (on FB) think? CEA definitely has some thoughts: https://www.centreforeffectivealtruism.org/strategy/ - or is your sense that those thoughts are still lower on the abstraction ladder than they could be?
I also think it is appropriate that many bits of the movement are only loosely connected through the highest-order question of EA—“how do we do the most good, act on that, and learn from our actions”—yet free to propose their own subquestions and seek answers. Since much of the meta framing is itself an open question, I feel there is leverage in more bottom-up processes too—a sort of “everyone get some basic skills, now go do whatever you think is best, then we’ll regroup and reflect, then go out and try again” kind of approach. And of course, not everyone can or wants to be part of more involved meta reflections. Overall, though, I feel you raise a dang good question to bring up regularly.
I’d speculate that CEA’s focus on reducing risky actions in its interactions with individual EA communities is partly a reflection of a more bottom-up orientation. If you want a group to explore experientially without anchoring them too much with your framework, you can give them tools and then step back until/unless they come to an answer you have good reason to believe is not the right one.
I particularly appreciate your point that people may not share information, or may not feel their work is meaningful, if they aren’t aware of a broader community strategy. I wonder how often this happens and how… Maybe a future EA Survey could ask, “Have you worked on a project which you believe is helpful but would not classify as EA-adjacent?”
Thanks for the article!