The dashboard is neat!
Some questions about the AI safety-related updates.
There are risks to associating AI safety work with the EA brand, as well as risks to non-AI portions of EA if the “Centre for Effective Altruism” moves towards more AI work. On risks to AI safety:
What’s CEA’s comparative advantage in communicating AI safety efforts? I’d love to see some specific examples of positive coverage in the mainstream media that was due to CEA’s efforts.
What are CEA’s plans for making sure the EA brand does not negatively affect AI safety efforts, whether in community building, engaging policymakers, or engaging the public, and for avoiding the risk of further exacerbating the politicization between AI safety and AI ethics folk?
Which stakeholders in the AI safety space have you spoken to, or already plan to speak to, about CEA’s potential shift in strategic direction?
On risks to non-AI portions of EA:
Which stakeholders outside of the AI safety space have you spoken to, or already plan to speak to, about CEA’s potential shift in strategic direction?
If none, when making this decision, how are you modelling the negative impacts non-AI cause areas might suffer as a result, or the EA groups that might move away from EA branding?
How much AI safety work will CEA be doing before the team decides that the “Centre for Effective Altruism” is no longer the appropriate brand for its goals, or that an offshoot or different organization is a better home for more AI-safety-heavy community building work?
Lastly, the linked “list of things we are not focusing on” currently includes “cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)”, which seems somewhat in tension with the AI-related updates (e.g. AI safety group support). I’d love to see the website updated accordingly once the decision is finalized so this doesn’t contribute to more miscommunication in future.
Glad you like the dashboard! Credit for that is due to @Angelina Li.
I think we are unlikely to share names of the people we talk to (mostly because it would be tedious to create a list of everyone and get approval from them, and presumably some of them also wouldn’t want to be named), but part of the motivation behind EA Strategy Fortnight was to discuss ideas about the future of EA (see my post Third Wave Effective Altruism), which heavily overlaps with the questions you asked. This includes:
Rob (CEA’s Head of Groups) and Jessica (Uni Groups Team Lead) wrote about some trade-offs between EA and AI community building.
Multiple non-CEA staff also discussed this, e.g. Kuhanj and Ardenlk.
Regarding “negative impacts to non-AI causes”: part of why I was excited about the “EA’s success no one cares about” post (apart from trolling Shakeel) was to get more discussion of how animal welfare has benefited from being part of EA. Unfortunately, the other animal welfare posts that were solicited for the Fortnight have had delays in publishing (as have some GH&WB posts), but hopefully they will eventually be published.
My guess is that you commenting about why having these posts being public would be useful to you might increase the likelihood that they eventually get published, so I would encourage you to share that, if you indeed do think it would be useful.
I also personally feel unclear about how useful it is to share our thinking in public; if you had concrete stories of how us doing this in the past made your work more impactful I would be interested to hear them. The two retrospectives make me feel like Strategy Fortnight was nice, but I don’t have concrete stories of how it made the world better.
There were also posts about effective giving, AI welfare, and other potential priorities for EA.
On comms and brand, Shakeel (CEA’s Head of Comms) is out at the moment, but I will prompt him to respond when he returns.
On the list of focus areas: thanks for flagging this! We’ve added a note with a link to this post stating that we are exploring more AI things.
Thanks for the response! Overall I found this comment less helpful than I’d hoped, which surprised me (though I wasn’t one of the downvoters). My best guess for why is that most of the answers seem like they should be fairly accessible, but I feel like I didn’t get as much substance as I originally expected.[1]
1) On the question “When making this decision, how are you modelling the negative impacts non-AI cause areas might suffer as a result, or the EA groups that might move away from EA branding?”, I thought it was fairly clear I was mainly asking about negative externalities to non-AI parts of the movement. That there are stories of animal welfare benefitting from being part of EA in the past doesn’t exclude the possibility that such a shift in CEA’s strategic direction is a mistake. To be clear, I’m not claiming that it is a mistake to make such a shift, but I’m interested in how CEA has gone about weighing the pros and cons when considering such a decision.
As an example, see Will giving his thoughts on AI safety vs. EA movement building here. If he is correct, and it is the case that AI safety should have movement-building infrastructure separate from EA, would it be a good idea for the “Centre for Effective Altruism” to do more AI safety work, or more “EA qua EA” work? If the former, what kinds of AI safety work would it be best placed to do?
These are the kinds of questions I am curious to see CEA’s thoughts on and would like more visibility into, and generally I’m more interested in CEA’s internal views than in a link to an external GHD/animal welfare person’s opinion here.
2) If the organization you run is called the “Centre for Effective Altruism”, and your mission is “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them”, then it seems relevant to share your thinking with the community of people you purport to nurture, to help them decide whether this is the right community for them, and to help other people in adjacent spaces decide what other gaps might need to be filled, especially given that a previously acknowledged mistake of CEA’s was “stak[ing] a claim on projects that might otherwise have been taken on by other individuals or groups that could have done a better job than we were doing”. This is also part of why I asked about CEA’s comparative advantage in the AI space.
3) My intention with the questions around stakeholders was not to elicit a list of names, but to learn the extent to which CEA has engaged other stakeholders who will be affected by its decisions, and how much coordination has been done in this space.
4) Lastly, I’m also just following up on the questions you deferred to Shakeel: CEA’s comparative advantage in communications (i.e., specific examples of positive coverage in the mainstream media that resulted from CEA’s efforts) and the two branding-related questions.
Thanks!
[1] For example, you linked a few posts, but Rob’s post discusses why a potential community builder might consider AI community building, while Jess focuses on uni group organizers. Rob states “CEA’s groups team is at an EA organisation, not an AI-safety organisation” and “I think work on the most pressing problems could go better if EAs did not form a super-majority (but still a large enough faction that EA principles are strongly weighted in decision making)”, while Jess states she is “fairly confused about the value of EA community building versus cause-specific community building in the landscape that we are now in (especially with increased attention on AI)”. All this is to say that I don’t get a compelling case from those posts that supports CEA as an organization doing more AI safety work, etc.