Thanks for the response! Overall I found this comment less helpful than I’d hoped (though I wasn’t one of the downvotes). My best guess as to why is that most of the answers seem like they should be fairly accessible, yet I feel like I got less substance than I originally expected.[1]
1) On the question “How are you modelling the negative impacts non-AI cause areas might suffer as a result, or EA groups who might move away from EA branding as a result when making this decision?”, I thought it was fairly clear I was mainly asking about negative externalities to non-AI parts of the movement. That there are stories of animal welfare benefitting from being part of EA in the past doesn’t exclude the possibility that such a shift in CEA’s strategic direction is a mistake. To be clear, I’m not claiming that it is a mistake to make such a shift, but I’m interested in how CEA has gone about weighing the pros and cons when considering such a decision.
As an example, see Will giving his thoughts on AI safety vs. EA movement building here. If he is correct, and AI safety should have movement-building infrastructure separate from EA’s, would it be a good idea for the “Centre for Effective Altruism” to do more AI safety work, or more “EA qua EA” work? If the former, what kinds of AI safety work would it be best placed to do?
These are the kinds of questions I’m curious to hear CEA’s thoughts on and would like more visibility into; generally, I’m more interested in CEA’s internal views than in a link to an external GHD/animal welfare person’s opinion here.
> My guess is that you commenting about why having these posts being public would be useful to you might increase the likelihood that they eventually get published, so I would encourage you to share that, if you indeed do think it would be useful.
>
> I also personally feel unclear about how useful it is to share our thinking in public; if you had concrete stories of how us doing this in the past made your work more impactful I would be interested to hear them.
2) If the organization you run is called the “Centre for Effective Altruism”, and your mission is “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them”, then it seems relevant to share your thinking with the community of people you purport to nurture: to help them decide whether this is the right community for them, and to help people in adjacent spaces decide what other gaps might need to be filled. This seems especially true given that a previously acknowledged mistake of CEA’s was “stak[ing] a claim on projects that might otherwise have been taken on by other individuals or groups that could have done a better job than we were doing”. This is also part of why I asked about CEA’s comparative advantage in the AI space.
3) My intention with the questions around stakeholders was not to elicit a list of names, but to understand the extent to which CEA has engaged other stakeholders who will be affected by its decisions, and how much coordination has been done in this space.
4) Lastly, I’m also following up on the questions you deferred to Shakeel regarding CEA’s comparative advantage, i.e. specific examples of positive mainstream media coverage that was due to CEA’s efforts, and the two branding-related questions.
Thanks!
[1] For example, you linked a few posts, but Rob’s post discusses why a potential community builder might consider AI community building, while Jess’s focuses on uni group organizers. Rob states that “CEA’s groups team is at an EA organisation, not an AI-safety organisation” and “I think work on the most pressing problems could go better if EAs did not form a super-majority (but still a large enough faction that EA principles are strongly weighted in decision making)”, while Jess says she is “fairly confused about the value of EA community building versus cause-specific community building in the landscape that we are now in (especially with increased attention on AI)”. All this is to say that I don’t see in those posts a compelling case for CEA as an organization doing more AI safety work.