Who is Uncomfortable Critiquing Who, Around EA?
Summary and Disclaimers
I think EA is made up of a bunch of different parties, many of whom find it at least somewhat uncomfortable to criticize or honestly evaluate each other for a wide variety of reasons.
I think that this is a very standard challenge that most organizations and movements have. As I would also recommend to other parties, I think that investigation and improvement here could be very valuable to EA. This post continues many of the ideas from Select Challenges with Criticism & Evaluation Around EA.
To be clear, there are many bottlenecks between “someone is in a place to come up with a valuable critique” and “different decisions actually get made.” This process is costly and precarious at each step. For instance, decision makers think in very different ways than critics realize, so it’s easy for critics to waste a lot of time writing to them.
This post focuses just on the challenges that come from things being uncomfortable to say. Going through the entire pipeline would require far more words.
One early reviewer critiqued this post, saying that they didn’t believe that discomfort was a problem. If you don’t think it is, I don’t aim in this post to convince you. My goal here is to do early exploration of what the problem even seems to look like, not to argue about how severe it is.
Like with that previous post, I rely here mostly on anecdotal experiences, introspection, and recommendations from various management books. This comes from me working on QURI (trying to pursue better longtermist evaluation), being an employee and manager in multiple (EA and non-EA) organizations, and hearing a whole lot of rants and frustrations from EAs. I’d love to see further work to better understand where the bottlenecks to valuable communication are most restrictive, and then to design and test solutions.
Writing this has helped me find some insights on this problem. However, it is a messy problem, and as I explained before, I find the terminology lacking. Apologies in advance.
Introduction
There’s a massive difference between a group saying that it’s open to criticism, and a group that people actually feel comfortable criticizing.
I think that many EA individuals and organizations advocate for and promote feedback in ways that are unusual for the industry. However, I think there’s also a lot of work left to do.
In most communities, it takes a lot of iteration and trust-building to find ways for people to routinely and usefully give candid information or feedback to each other.
In companies, for example, employees often don’t have much to personally gain by voicing their critiques to management, and a lot to potentially lose. Even if the management seems really nice, is an honest critique really worth the ~3% chance of resentment? Often you won’t ever know; management could just keep their dislike of you to themselves, and later take action accordingly. On the other side, it’s often uncomfortable for managers to convey candid feedback to their reports privately, let alone discuss department or employee failures with people throughout the organization.
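To make that incentive asymmetry concrete, here’s a toy expected-value sketch. Every number in it (the probabilities, the payoffs) is an illustrative assumption I made up for the example, not data from anywhere:

```python
# Toy model of an employee deciding whether to voice an honest critique.
# All numbers are illustrative assumptions, not empirical estimates.

p_critique_helps = 0.30   # chance the critique actually improves something
gain_to_employee = 1.0    # personal upside if it does (most gains go to the org)
gain_to_org = 20.0        # organizational upside if it does

p_resentment = 0.03       # chance management quietly resents the critic
cost_to_employee = 30.0   # personal cost if so (stalled career, cold shoulder)

ev_employee = p_critique_helps * gain_to_employee - p_resentment * cost_to_employee
ev_org = p_critique_helps * gain_to_org

print(f"Expected value to employee: {ev_employee:+.2f}")  # -0.60
print(f"Expected value to org:      {ev_org:+.2f}")       # +6.00
```

Under these (made-up) numbers, the critique is clearly positive for the organization and mildly negative for the individual, which is exactly the gap that makes silence the default.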
My impression is that many online social settings contain a bunch of social groups that are really afraid of being honest with each other, and this leads to problems both immediate (important information not getting shared) and expansive (groups developing lasting distrust of, and sometimes hatred for, each other).
Problems of communication and comfort happen within power hierarchies, and they also happen between peer communities. Really, they happen everywhere.
To a first approximation, “Everyone is at least a little afraid of everyone else.”
I think a lot of people’s natural reaction to issues like this is to point fingers at groups they don’t like and blame them. But really, I personally think that all of us are broadly responsible (at least a little), and also are broadly able to understand and improve things. I see these issues as systemic, not personal.
Criticism Between Different Groups
Around effective altruism, I’ve noticed:
Evaluation in Global Welfare
Global poverty charity evaluation and criticism seem like fair game. When GiveWell started, they weren’t friends with the leaders of the organizations they were evaluating.
GiveWell evaluates the very best charities in its focus areas. It’s much easier to get buy-in to evaluate an organization when it’s clear you’re pretty positive about it.
I don’t know of much systematic EA research into evaluating mediocre or poor global health charities. One reason is that the target audience is typically impact-focused donors, who don’t care about the mediocre charities. I suspect another reason is that this would create pushback and animosity.
I think it’s now understood that new EA-created global health organizations can be publicly evaluated on grounds similar to those GiveWell uses.
Evaluation in Longtermism
Around longtermism, there doesn’t seem to be much public organization evaluation or criticism. I think one issue is that many of the potential evaluators are social peers of the people they would be evaluating. They all go to the same EAGs (me included!). Larks did yearly reviews that I thought were strong, but those didn’t really give much negative feedback; they focused more on highlighting the good. It feels good to be supportive of one’s colleagues, and scary to imagine them resenting you.
Around AI Safety, there are many candid discussions on certain strategies and research directions. [1] I think this is great, and probably a good place to begin, but I also think there’s quite a bit more work to be done.
There have been some controversies discussed around calling out AI organizations that people think might be causing net harm (e.g., OpenAI and DeepMind), though this is a very different issue: the risk is mainly about public knowledge and backlash, not frustration within the EA community.
I think there are a bunch of bottlenecks here other than discomfort (evaluation is expensive, and arguably we don’t have many choices anyway). However, I think discomfort is one significant bottleneck.
Evaluation of EA Funders/Leaders by EA Community Members
In conversations I’ve had, people seem particularly nervous around the funders and top leaders. There are only a few EA funders, and they seem highly correlated with each other in opinion. It might stay that way for the next 20-50 years. For those without a terrific understanding of these funders, upsetting one can feel like risking a lifetime ban from almost all high-level effective altruist positions.
I’ve previously done some funding work, and know many of the funders personally, but they still make me very nervous. (To be clear, I have a lot of respect for most of the funders, and I blame most of the issue on the situation.)
Recently there have been several posts complaining about “EA Leadership”. These posts seem to be mostly written by new or adjacent members, and/or posted anonymously (which itself suggests discomfort).
I think the main people who could give great critiques generally stay quiet. The ones who are loud and public are more commonly those with less to lose by voicing their statements, and that correlates with them being less able to provide well-targeted feedback.
Evaluation of EA Community Members by Funders/Leaders
While it’s awkward to criticize those with power over you, it can be even more awkward to publicly criticize those that you have power over.
I’ve very rarely seen EA funders publicly say bad things about organizations. [2] Their primary signal about a bad project is simply to not fund it, but that’s a very weak signal.
Relatedly, there’s far more public criticism from Google employees about their management than there is from their management about their employees. This plays out on a lot of levels.
It can be really difficult for those in power to respond to criticisms they think are bad. Those who publicly “punch up” are often given much more leeway and have less potential downside than those who “punch down”. Leaders are typically very much outnumbered and their time is particularly limited. My guess is that some don’t feel very safe engaging a lot with outer community members, especially if they don’t have enough time to adequately respond to comments or deal with problems that might come up.
Of course, those with power can still take action behind the scenes. This combination (has trouble publicly responding, but can secretly respond in powerful ways) is catastrophic for trust building.
On this topic, I think it’s generally fair to say that power is much more complicated than “leaders have power, others don’t.” Managers and funders are highly outnumbered, have a restricted set of information, and sometimes are comparatively disadvantaged at community discussion (compared to what their status would imply). When managers reveal information to their communities, they consider how their communities might harm them with this information. Issues like Twitter mobs are particularly dangerous for people with public positions, for instance.
So trust works both ways. Communities will share key feedback with leadership in proportion to how much they trust leadership to make use of that feedback and not take it personally. Leaders will share useful information and feedback insofar as they trust their communities (and the public at large, when communication is public) to not harm them.
Evaluation of Adjacent Groups, by EAs
I’m personally uncomfortable critiquing many groups online (or even saying things I know some groups disagree with), particularly on Twitter, because I’m afraid of aggressive backlash and trolls. There seem to be a lot of combative people online from all parts of the political spectrum.
I think the EA community (including me) has been dismissive of other groups, but often there seems to be fairly little public clarity on why.
I think many of the recent critiques of EA have been pretty bad. That’s not too surprising; most online writing is very bad (in my opinion). But me publicly denouncing some of it would look a lot like me punching down. Many of the criticisms come from communities I don’t feel comfortable communicating with. It’s easy to imagine that I’ll say something accidentally naive and agitating, and I’m not sure how bad the fallout could be.
More broadly, I think that EA as a whole has taken on a public image of being cooperative, polite, and respectful. This broadly seems positive, but I worry that this image might make it tricky to be honest about EA’s disagreements with other groups.
Sometimes one can be both polite and honest by just being incredibly nuanced and repeatedly demonstrating respect. The problem with this, in addition to being labor-intensive to write, is that it’s labor-intensive to read. If posts need to be very long and dry in order to not be inflammatory, it’s likely almost no one will read them.
Spending tons of effort writing incredibly detailed and dry pieces is not a solution for fixing communication issues between communities. Frustratingly, that might not leave us with many viable options.
Evaluation of EA, by Adjacent Groups
I think many EA Forum readers might assume that “others being scared of criticizing key EA ideas, or the EA community” is not an issue at this point, especially given the recent waves of angry critiques. I want to address some arguments here.
1. Isn’t EA getting tons of criticism already?
I think right now some groups are too comfortable critiquing effective altruism. The last few months have felt intense.
My impression is that the recent Time piece came out in large part because attacking EA was expected to get page views. After the FTX issue in particular, EA has become a much more attractive target.
However, just because some voices are loudly criticizing EA doesn’t mean that all the informative voices are. Different communities come with different critiques. Generally, I find that most communities with interesting things to say just aren’t at all incentivized to say those things to other communities.
Just because you have a lot of incorrect detractors doesn’t mean you’re right. Often, you’re all wrong. You need to be smart and hunt for the really good critiques.
I personally know several smart people who have given me interesting and novel (to me) arguments against aspects of EA thought in person, but still seem reluctant to post them online.
One ugly thing is that bad or repetitive criticisms can crowd out good ones. I think it’s easy to get exhausted after going through a bunch of drivel. The obvious solution is to get much better at quickly sifting through the pile and focusing on the most promising parts.
2. But *I* feel comfortable criticizing EA
It’s easy for me to feel like others are comfortable criticizing EA at this point, but I’m sure the situation isn’t as clean as I’d like. I feel mostly welcome to be critical of EA, but I’ve also spent a lot of time learning how to do just that.
I think that the default is that large communities with significant power and important members, like EA, are scary to attempt to stand up to.
The people we need to be asking are the ones on the periphery: the ones with the most to lose, or the least to gain. We could probably learn a lot here with the usual mix of qualitative and quantitative methods.
3. Isn’t it preferable for EA to be uncomfortable for others to critique?
There are clearly some benefits to a movement if others are scared to attack it. When I look around online, I see a lot of bullying efforts, by Twitter users in particular, to push specific ideological narratives. I’m sure this is somewhat effective, but I really don’t want to live in an intellectual environment where that’s a common occurrence. It’s playing with fire.
So, I really hope that critics of EA can expect not to be harassed or attacked. I guess this is at least somewhat of an issue now, and I’m sure it could get a lot worse.
There’s also the issue of discerning between angry, unfair rants against EA that are harmful to EA, and critiques of EA that are genuinely useful to others or to EA. I think there’s a lot of net-negative information everywhere, but I have a very hard time telling which pieces will eventually prove net-positive and which net-negative.
If EA were to scare away people with bad critiques, it might also be scaring away the people with good critiques.
EAs, Critiquing EA Critics
I think I mostly covered this in the sections above, but wanted to call it out specifically.
Some issues with critiquing your critics:
When notable EAs critique EA critics, it looks like punching down.
The audience will expect you to be biased, so you have to work hard to show otherwise. The most obvious way to do this is to show that you respect the work, even if you find it seriously lacking.
It might bring attention to groups we don’t want to amplify, like trolls.
You might be extra emotionally invested, which can make it more taxing to properly give fair responses.
I find that honestly responding to critique, especially when I think the critique is just really bad (but my audience doesn’t), is often surprisingly tricky to do.
Grab Bag of Related Examples
I oversaw one manager who continuously gave positive reports about the work they were overseeing. Eventually, these reports said several things like “overcame disaster X successfully.” I asked if there had been early indicators of said disaster; of course there were, but this person just didn’t think they were worth including. I learned that this manager was simply very optimistic and had low neuroticism. This made me much more paranoid about distortions from those I oversee.
I’ve been around several group houses that had official policies asking victims of sexual abuse to speak up. Few did, so many of these houses assumed that things were totally fine. I later learned of some serious incidents. One problem was that the victims didn’t trust the house management, and the “official policies” didn’t mean much to them.
I once had one boss who I had a lot of problems with. I cared about our relationship and the organization. I really didn’t want to be fired, so I thought it best to keep much of the criticism to myself.
I see a big part of my job as a board member as collecting important information that people don’t feel comfortable telling the executive directors directly. This has happened several times so far, and I’m sure I’ve missed a whole lot.
I’ve probably spent 30 solid hours trying to explain and steelman the position of senior EA leadership (1-3 levels above me) to people. Many of these conversations get pretty heated, even though I’m a few steps removed from the actual decisions. I think there are a bunch of EAs with very strong feelings, but very poor models of how EA leadership actually works.
I really liked the book Radical Candor. It presents several examples of bosses who mess up at giving feedback, either by being too mean or by not giving enough.
What Can Be Done?
What should be done about this?
The fastest thing is probably to find some management experts or consultants who have both seen and improved similar problems. The broad problems are incredibly common to businesses. These people would probably be good for some of the EA-internal issues. I know there are some EA consultants around, perhaps some can provide takes.
Outside of that, when direct feedback is uncomfortable, one trick is to go up one meta-level and get “information about feedback”.
Some questions to ask:
How much evaluation and candid information do we have, privately and publicly, about which groups, from which groups?
How valuable is information about these groups? Are there clear wins to be made?
If there are gaps, do potential critics feel disincentivized to provide such critiques? Are there any fixes that could change these incentives?
What even are the groups we should focus on? Can we do things like cluster analysis to develop a much better understanding?
I think that a much better bar than asking, “Do we feel like we’re being open to feedback?” is objectively surveying the people you are curious about to see how comfortable they feel communicating with you. In the cases above, this could mean surveys to see which groups are comfortable criticizing which other groups.
I hypothesize that if asked, many of the groups mentioned above would express discomforts somewhat similar to what I outline.
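As a concrete illustration of the kind of survey data I have in mind, here’s a minimal sketch of how “who is comfortable criticizing whom” responses could be tabulated. The group names and ratings below are entirely hypothetical placeholders:

```python
# Hypothetical sketch: tabulating "comfort criticizing" survey responses.
# Group names and ratings are made-up placeholders, not real data.
from collections import defaultdict
from statistics import mean

# Each response: (respondent's group, group being criticized, comfort 1-5)
responses = [
    ("community members", "funders", 2),
    ("community members", "funders", 1),
    ("funders", "community members", 2),
    ("community members", "adjacent groups", 3),
    ("adjacent groups", "EA community", 4),
    ("funders", "adjacent groups", 3),
]

by_pair = defaultdict(list)
for critic, target, rating in responses:
    by_pair[(critic, target)].append(rating)

# Low average comfort flags directed relationships where candid
# criticism may be getting suppressed.
for (critic, target), ratings in sorted(by_pair.items()):
    print(f"{critic:>18} -> {target:<18} mean comfort: {mean(ratings):.1f}")
```

Even a crude matrix like this would let us compare directed pairs (e.g., community members critiquing funders vs. the reverse) rather than relying on each group’s self-image of openness.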
In the medium term, I would encourage readers to consider ways that their potential critics might be uncomfortable giving them feedback, publicly or privately. This might involve asking third-party intermediaries to help gather information and present summarized versions. Executives sometimes go through lengthy 360-degree evaluations and similar processes to do this.
In the long term, I wonder if we could ever have “epistemic safety” standards and evaluations. Restaurants get letter grades for food safety; perhaps communities could get them for things like “being good at taking criticism.” Communities could then have healthy competitions with each other to improve in important ways.
Afterword: Quick Example Surveys
I made some incredibly hasty Twitter polls to test the waters here. Results below.
These are obviously very biased, particularly because they come from my personal Twitter audience.
Links:
Global Health
Animal Welfare Orgs
Longtermist Orgs
QURI
EA Funders
Acknowledgments
Thanks to Nuño Sempere, Nics Olayres, Misha Yagudin, Ben Goldhaber, Lizka Vaintrob, and Ben West for their comments.
[1] Ben West recommended these posts on candid AI critique:
1. https://forum.effectivealtruism.org/posts/jydymb23NWF3Q4oDt/on-how-various-plans-miss-the-hard-bits-of-the-alignment
2. https://www.lesswrong.com/s/4iEpGXbD3tQW5atab
3. https://www.lesswrong.com/posts/wnnkD6P2k2TfHnNmt/threat-model-literature-review
4. https://forum.effectivealtruism.org/s/QtBPgszyK4yXwduKS
[2] An obvious exception to this is the LTFF, which has writeups of the projects they fund.