When I read Critiques of EA that I want to read, one section seemed very concerning: “People are pretty justified in their fears of critiquing EA leadership/community norms.”
1) How seriously is this concern taken by those who are considered EA leadership, by major/public-facing organizations, or by those working on community health? (say, CEA, OpenPhil, GiveWell, 80,000 Hours, Forethought, GWWC, FHI, FTX)
2a) What plans and actions have been taken or considered?
2b) Do any of these solutions interact with the current EA funding situation and distribution? Why/why not?
3) Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?
(Additional note: there were a lot of publicly supportive comments[1] on the Democratising Risk—or how EA deals with critics post, yet the overall impression seems to be that, despite these public comments, the author was disappointed by what came out of it. It’s unclear whether the recent Criticism/Red-teaming contest was a result of these events, though it would be useful to know which organizations considered or adopted any of the suggestions listed[2] or alternate strategies to mitigate the concerns raised, and the process behind this consideration. I use this as an example primarily because it was a higher-profile post that involved engagement from many who would be considered “EA Leaders”.)
[1] 1, 2, 3, 4
[2] “EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders’ forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes.”
Thanks for asking this. I can chime in, although obviously I can’t speak for all the organizations listed, or for “EA leadership.” Also, I’m writing as myself — not a representative of my organization (although I mention the work that my team does).
I think the Forum team takes this worry seriously, and we hope that the Forum contributes to making the EA community more truth-seeking in a way that disregards status or similar phenomena (as much as possible). One of the goals for the Forum is to improve community norms and epistemics, and this (criticism of established ideas and entities) is a relevant dimension; we want to find out the truth, regardless of whether it’s inconvenient to leadership. We also try to make it easy for people to share concerns anonymously, which I think makes it easier to overcome these barriers.
I personally haven’t encountered this problem (that there are reasons to be afraid of criticizing leadership or established norms) — no one ever hinted at this, and I’ve never encountered repercussions for encouraging criticism, writing some myself, etc. I think it’s possible that this happens, though, and I also think it’s a problem even if people in the community only think it’s a problem, as that can still silence useful criticism.
I can’t speak about funding structures, but we did run the criticism contest in part to encourage criticism of the most established organizations and norms, and we explicitly encouraged criticism of the most important of those.
I think lots of organizations have “mistakes” pages, and Ben linked to a question asking for examples of this kind of thing. Off the top of my head, I don’t know of much else — this could be a good project for someone to undertake!
Some examples here: Examples of someone admitting an error or changing a key conclusion.
Thanks for the link!
I think most of the examples in the post don’t cover the “as a result of public or private feedback” part, though I think I communicated this poorly.
My reasoning for going beyond a list of mistakes and changes, to also include a description of how each issue was discovered or the feedback that prompted it,[1] is that doing so may be more effective at allaying people’s fears of critiquing EA leadership.
For example, while mistakes and updates are documented, if you were concerned about, say, gender diversity (~75% men in senior roles) in the organization,[2] but you were an OpenPhil employee or someone receiving money from OpenPhil, would the contents of the post[3] you linked actually make you feel comfortable raising these concerns?[4] Or would you feel better if there were an explicit acknowledgement that someone in a similar situation had previously spoken up and contributed to positive change?
I also think curating something like this could be beneficial not just for the EA community, but also for leaders and organizations who have a large influence in this space. I’ll leave the rest of my thoughts in a footnote to minimize derailing the thread, but I’d be happy to discuss further elsewhere with anyone who has thoughts or pushback about this.[5]
[1] Anonymized as necessary.
[2] I am not saying that I think OpenPhil in fact has a gender diversity problem (is 3/4 men too much? What about 2/3? What about 3/5? Is this even the right way of thinking about the question?), nor am I saying that people working at OpenPhil or receiving their funding don’t feel comfortable voicing concerns.
[3] Specifically, this would be Holden’s Three Key Issues I’ve Changed My Mind About.
[4] I am not using OpenPhil as an example because I believe they are bad, but because they seem especially important, both as a major funder of EA and as folks who are influential in object-level discussions on a range of EA cause areas.
This also applies to CEA’s “Our Mistakes” page, which includes the line “downplaying critical feedback from the community”. Since the page does not explain why the feedback was downplayed, or describe steps taken specifically to address the causes of that downplaying, one might even update away from providing feedback after reading it.
(In this hypothetical, I am assuming that the original concern, “People are pretty justified in their fears of critiquing EA leadership/community norms”, is true. This is an assumption, not something I personally know to be the case.)
Among the many bad criticisms of EA, I have heard some good-faith criticisms of major organizations in the EA community (some by people directly involved) that I would consider fairly serious. I think having these criticisms circulating in the community through word of mouth might be worse than having a public compilation, because:
1) it contributes to distrust in EA leaders or their ability to steer the EA movement (which might make it harder for them to do so);
2) it means there are fewer opportunities for this to be fixed if EA leaders aren’t getting an accurate sense of the severity of the mistakes they might be making, which might further exacerbate the problem; and
3) it might make it harder to attract or retain the best people in the EA space.
I think it could be in the interest of organizations that play a large role in steering the EA movement to compile the good-faith feedback and criticism they’ve received, along with responses that include points of (dis)agreement and any updates made as a result of the feedback (possibly even a reward, if it has contributed to a meaningfully positive change).
If the criticism is misplaced, this provides an opportunity to offer a justification that might have been overlooked, and to minimize rumors or speculation about these concerns. To the extent that the criticism is not misplaced, it provides an opportunity for accountability and responsiveness that builds and maintains the community’s trust. It also means that those who disagree at a fundamental level with the direction in which the movement is being steered can make better-informed decisions about the extent of their involvement.
Additionally, other organizations that might be subject to similar criticisms can benefit from the feedback and responses without having to make the same mistakes themselves.
One final benefit of including responses to substantial pieces of feedback, and not just “mistakes”, is that feedback can be relevant even when it does not concern a mistake. For example, the post Red Teaming CEA’s Community Building Work claims that CEA’s mistakes page has “routinely omitted major problems, significantly downplayed the problems that are discussed, and regularly suggests problems have been resolved when that has not been the case”.
Part of Max’s response here suggests some of these weren’t considered “mistakes” but were “suboptimal”. While I agree it would be unrealistic to include every inefficiency in every project, I can imagine two classes of scenarios where responses to feedback could capture important things that responses to mistakes do not.
The first class is when there’s no clear, seriously concerning event that one can point to, but the status quo is detrimental in the long run if left unchanged.
For example, if a leader of a research organization is a great researcher but bad at running a research organization, at what stage does this count as a “mistake”? If an organization lacks diversity, to whatever extent this is harmful, at what stage does this count as a “mistake”?
The second class is when the organization itself is perpetuating harm in the community but isn’t subject to any formal accountability mechanisms. If an organization funded by CEA does harm, it can have its funding pulled. If an individual is harmful to the community, they can be banned. While there has been some form of accountability in what appears to be an informal, crowd-sourced list of concerns, this seemed to be prompted by egregious and obvious cases of alleged misconduct, and might not work for all organizations.

Imagine an alternate universe where OpenPhil started actively contributing to harm in the EA community, and this harm grew slowly over time. How much harm would they need to be doing for internal or external feedback to be made public to the rest of the EA community? How much harm would they need to do for a similar crowd-sourced list of concerns to arise? How much harm would they need to do for the EA community to disavow them and their funding? Do we have accountability mechanisms and systems in place to reduce the risk here, or to notice it early?