Glad you like the dashboard! Credit for that is due to @Angelina Li.
I think we are unlikely to share names of the people we talk to (mostly because it would be tedious to create a list of everyone and get approval from them, and presumably some of them also wouldn't want to be named), but part of the motivation behind EA Strategy Fortnight was to discuss ideas about the future of EA (see my post Third Wave Effective Altruism), which heavily overlaps with the questions you asked. This includes:
Rob (CEA's Head of Groups) and Jessica (Uni Groups Team Lead) wrote about some trade-offs between EA and AI community building.
Multiple non-CEA staff also discussed this, e.g. Kuhanj and Ardenlk.
Regarding "negative impacts to non-AI causes": part of why I was excited about the EA's success no one cares about post (apart from trolling Shakeel) was to get more discussion of how animal welfare has benefited from being part of EA. Unfortunately, the other animal welfare posts solicited for the Fortnight have been delayed (as have some GH&WB posts), but hopefully they will eventually be published.
My guess is that commenting on why making these posts public would be useful to you might increase the likelihood that they eventually get published, so I would encourage you to share that, if you do think it would be useful.
I also personally feel unclear about how useful it is to share our thinking in public; if you had concrete stories of how our doing this in the past made your work more impactful, I would be interested to hear them. The two retrospectives make me feel like Strategy Fortnight was nice, but I don't have concrete stories of how it made the world better.
There were also posts about effective giving, AI welfare, and other potential priorities for EA.
On comms and brand, Shakeel (CEA's Head of Comms) is out at the moment, but I will prompt him to respond when he returns.
On the list of focus areas: thanks for flagging this! We've added a note with a link to this post stating that we are exploring more AI things.
Thanks for the response! Overall I found this comment less helpful than I'd hoped (though I wasn't one of the downvoters). My best guess for why is that most of the answers seem like they should be fairly accessible, but I feel like I didn't get as much of substance as I originally expected.[1]
1) On the question "How are you modelling the negative impacts non-AI cause areas might suffer as a result, or EA groups who might move away from EA branding as a result when making this decision?", I thought it was fairly clear I was mainly asking about negative externalities to non-AI parts of the movement. That there are stories of animal welfare benefitting from being part of EA in the past doesn't exclude the possibility that such a shift in CEA's strategic direction is a mistake. To be clear, I'm not claiming that it is a mistake to make such a shift, but I'm interested in how CEA has gone about weighing the pros and cons when considering such a decision.
As an example, see Will giving his thoughts on AI safety vs EA movement building here. If he is correct, and AI safety should have movement-building infrastructure separate from EA's, would it be a good idea for the "Centre for Effective Altruism" to do more AI safety work, or more "EA qua EA" work? If the former, what kinds of AI safety work would it be best placed to do?
These are the kinds of questions I'm curious to see CEA's thoughts on and would like more visibility into, and generally I'm more interested in CEA's internal views here than in a link to an external GHD/animal welfare person's opinion.
2) If the organization you run is called the "Centre for Effective Altruism", and your mission is "to nurture a community of people who are thinking carefully about the world's biggest problems and taking impactful action to solve them", then it seems relevant to share your thinking with the community of people you purport to nurture, to help them decide whether this is the right community for them, and to help people in adjacent spaces decide what other gaps might need to be filled, especially given that one of CEA's previously acknowledged mistakes was "stak[ing] a claim on projects that might otherwise have been taken on by other individuals or groups that could have done a better job than we were doing". This is also part of why I asked about CEA's comparative advantage in the AI space.
3) My intention with the questions around stakeholders was not to elicit a list of names, but to understand the extent to which CEA has engaged other stakeholders who will be affected by these decisions, and how much coordination has been done in this space.
4) Lastly, just following up on the questions you deferred to Shakeel on CEA's comparative advantage, i.e. specific examples of positive coverage in the mainstream media that was due to CEA's efforts, and the two branding-related questions.
Thanks!
For example, you linked a few posts, but Rob's post discusses why a potential community builder might consider AI community building, while Jess focuses on uni group organizers. Rob states "CEA's groups team is at an EA organisation, not an AI-safety organisation", and "I think work on the most pressing problems could go better if EAs did not form a super-majority (but still a large enough faction that EA principles are strongly weighted in decision making)", while Jess states she is "fairly confused about the value of EA community building versus cause-specific community building in the landscape that we are now in (especially with increased attention on AI)". All this is to say that I don't get a compelling case from those posts for CEA as an organization doing more AI safety work, etc.