I’m Chana, a manager on the Community Health team. This comment is meant to address some of the things Ben says in the post above as well as things other commenters have mentioned, though very likely I won’t have answered all the questions or concerns.
High level
I agree with some of those commenters that our role is not always clear, and I’m sorry for the difficulties that this causes. Some of this ambiguity is intrinsic to our work, but some is not, and I would like people to have a better sense of what to expect from us, especially as our strategy develops. I’d like to give some thoughts here that hopefully give some clarity, and we might communicate more about how we see our role in the future.
For a high level description of our work: We aim to address problems that could prevent the effective altruism community from fulfilling its potential for impact. That looks like: taking seriously problems with the culture, and problems from individuals or organizations; hearing and addressing concerns about interpersonal or organizational issues (primarily done by our community liaisons); thinking about community-wide problems and gaps and occasionally trying to fill those; and advising various actors in the EA space based on the information and expertise we have. This work allows us to address specific problems, be aware of concerning actors, and give advice to help the community do its best work.
Context on our responses
Sometimes we have significant constraints on what we can do and say that result in us being unable to share our complete perspective (or any perspective at all). Sometimes that is because people have requested that we keep some or all information about them confidential, including what actions our team has taken. Sometimes it is because our weighing in would increase public discussion that could be harmful to some or all of the people involved. This information asymmetry can be particularly tricky when someone else in the community shares some information about a situation that we think is inaccurate or is only a small part of the picture, but we’re not in a position to correct it. I’m sorry for how frustrating this can be.
I imagine this might end up being relevant to responses to this comment (including which ones we respond to, and how and when), so I think it’s useful to highlight.
I’ll also flag that many of our staff are at events for the next two weeks, so it might be an especially slow time for Community Health responses.
About what to expect
I think some of the disagreements here come from different understandings of what the Community Health team’s mission is or should be. We want to hear and (where possible) address problems in the community, at the interpersonal, organizational, and community levels. But we often won’t resolve a situation to the satisfaction of everyone involved, or do everything that would be helpful for individuals who were harmed. Ben mentions people “hoping that it [Community Health] will pursue justice for them.” I want to be totally upfront that we don’t see pursuing justice as our mission (and I don’t think we’ve claimed to). In the same vein, protecting people from bullies is sometimes a part of our work, and something we’d always like to be able to do, but it’s not our primary goal and sadly, we won’t always be able to do it.
We don’t want people to have a false impression of what they can expect from talking to us.
Sometimes people come to us with a picture of what they’d like to happen, but we won’t always take the steps they hope we’ll take, either because 1) we don’t agree that those steps are the right call, 2) we’re not willing to take the steps based on the information we have (for example, if we don’t have their permission to ask for the other person’s side of the story), or 3) the costs (time, legal risk, etc.) are too great. We generally explain our considerations to the people involved, but could probably communicate better about this publicly, and as we continue thinking about strategic changes, we’ll want to give people an accurate picture of what to expect.
(At other times people come to us without specific steps they’d like us to take. Sometimes they think something should be done but don’t know what is feasible; other times they share information in the spirit of “I don’t think this is very bad and don’t want much to be done, but I thought you should know and be able to look for patterns”, which can be quite helpful.)
In calls, we talk by default about confidentiality and what actions we might be able to take. Typically this results in people deciding to go forward with working with us, but some people might decide that what we’re likely to be able to provide isn’t a good match for their situation.
I don’t think the downside of a false sense of security people might get from our team’s existence is strong enough to counteract the benefits.
It’s true that we rarely write up our findings publicly. I don’t take that as damning since I don’t think that is or should be the default expectation. I think public writeups can be a valuable tool in some cases, but often there are good reasons to use other tools instead.
One main reason is the large amount of time they take — Ben pointed out that he didn’t necessarily endorse how much time this project took him, but that it was really hard to do less.
I agree with Ben that we aren’t the EA police. We have some levers we can pull related to advising on a number of decisions, and we do our best to use these to address problems and concerns. I think describing the occasions on which we use the information we have as “rare” is very much not reflective of the reality of our day-to-day work.
I’m sad to read in some comments that we didn’t satisfy people’s needs or wants in those situations. I’m very open to receiving feedback, concerns or complaints in my capacity as a manager on the team—feel free to message me on the forum or email me (including anonymously). I recognize someone not wanting to talk to the Community Health team might not want to share feedback with that same team, but I want the offer available for anyone who might. You can also send feedback to CEA interim CEO Ben West here.
I also think not feeling satisfied with our actions is plausibly a normal outcome even if everything is going well—sometimes the best available choice won’t make everyone (or anyone) happy. I definitely want people to come in expecting that they might not end up happy with our choices (though I think in many cases they are).
Again, if people think we’re making wrong calls, I’m interested to hear about it. Under some circumstances we can also re-review cases.
Regarding trust
We’re aware that some people might feel hesitant to talk to us (and of course, it’s entirely up to them). There are many understandable reasons for this (even if our team were flawless). Our team isn’t flawless, though, which means there are likely additional cases where people don’t want to talk to us, which I’m sad about. I don’t know how much of a problem this is.
In particular, we are worried to hear that some people didn’t feel that they’d be treated with respect (I can’t tell if they mean by our team or the general institutional network we’re a part of, or something else). In this case, it sounds like potentially they aren’t confident we’d handle their information well or treat them respectfully. If that is what they meant, that sounds like a bad (and potentially stressful) situation and I’m really sorry to hear about it. I could imagine there being a concerning pattern around this that we should prioritize learning about and working on. If at any point people wanted to share information on the reasons they wouldn’t talk to us, I’m interested (including anonymously—here for the community liaisons, here for me personally and here for Ben West, interim CEO of CEA).
People might also worry that we’d negatively update our perception of them if they were implicated in something. (This is one of the reasons people might not want to speak to us that might be implied by this post, though I am not at all sure this is what was meant.) I don’t currently think we should have a strict policy of amnesty for any concerning information people provide about themselves, though we do in fact try hard not to make people regret talking to us. (Strict amnesty of that kind would probably result in our doing less about the issues we hear about, making Ben’s concerns worse rather than better, though I haven’t gone and researched this question.)
In general, we care a lot about not making people regret speaking to us and not pressuring people to do or share more than they’re comfortable with. These are big elements of why we sometimes do less than we’d like, since we don’t want to take actions they’re not comfortable with, push them to stay involved in a situation they’d like to be done with, or do anything that would cause them to worry we might inadvertently deanonymize them.
My general sense (though of course there are selection effects here) is that people who talk to our team in person or on calls about our decision making often end up happier and finding us largely reasonable. I haven’t figured out how to do that at scale, e.g. in public writing.

Thanks all for your thoughts and feedback.
But we often won’t resolve a situation to the satisfaction of everyone involved, or do everything that would be helpful for individuals who were harmed. Ben mentions people “hoping that it [Community Health] will pursue justice for them.” I want to be totally upfront that we don’t see pursuing justice as our mission (and I don’t think we’ve claimed to). In the same vein, protecting people from bullies is sometimes a part of our work, and something we’d always like to be able to do, but it’s not our primary goal and sadly, we won’t always be able to do it.
A design decision not to aim at “justice” or “countering bullies” seems sort of big and touches on deep subjects.

I guess the viewpoint above could be valid and deep (but I’m slightly skeptical the community health team has that depth).

It seems possible that, basically, just pursuing justice or countering bullies in a straightforward way might be robustly good and support other objectives. Honestly, it doesn’t seem that complicated, and it’s slightly a yellow flag if it is hard in EA.

I think writing on this would be valuable (like Julia W’s writing on her considerations; I’m not going to search it up, but it was good). Such a piece would (ideally) show wisdom and considerations that are illuminating.
I’ll try to produce something, maybe not under this name or an obvious form.
This seems extremely uncharitable. It’s impossible for every good thing to be the top priority, and I really dislike the rhetorical move of criticising someone who says their top priority is X for not caring at all about Y.
In the post you’re replying to, Chana makes the (in my view) virtuous move of actually being transparent about what CH’s top priorities are, a move which I think is unfortunately rare because of dynamics like this. You’ve chosen to interpret this as ‘a decision not to have’ [other nice things that you want], apparently realised that it’s possible the thinking here isn’t actually extremely shallow, but then dismissed the possibility of anyone on the team being capable of non-shallow thinking anyway, for currently unspecified reasons.

Editing this in rather than continuing the thread, as I don’t feel able to do protracted discussion at the moment:
Chana is a friend. We haven’t talked about this post, but that’s going to be affecting my thinking.
She’s also, in my view (which you can discount if you like), unusually capable of deep thinking about difficult tradeoffs, which made the comment expressing skepticism about CH’s depth particularly grating.
More generally, I’ve seen several people I consider friends recently put substantial effort into publicly communicating their reasoning about difficult decisions, and be rewarded for this effort with unhelpful criticism.
All that is to say that I’m probably not best placed to impartially evaluate comments like this, but at the end of the day I re-read it and it still feels like what happened is someone responded to Chana saying “our top priority is X” with “it seems possible that Y might be good”, and I called that uncharitable because I’m really, really sure that that possibility has not escaped her notice.
Your reply contains a very strong and, in my view, highly incorrect read, and says I am far too judgemental and critical.
rhetorical move of criticising someone who says their top priority is X for not caring at all about Y.
Please review my comment again.
I’m simply pointing to a practice or principle common in many orgs, companies, startups, and teams: having principles and letting decisions flow from them, in addition to “maximizing EV” or “maximizing profits”. This may be wrong or right.

I’m genuinely not judging but keeping it open; like, I literally said this. I specifically suggested writing.

While this wasn’t the focus, and I haven’t thought about it much, I probably do think Chana’s writing is virtuous. I actually have very specific reasons to think the work is shallow, but that is a distinct thing from the principle or choice I’ve talked about. Community health is hard, and the team is sort of given an awkward ball to catch.

An actual uncharitable opinion: I understand this is the EA Forum, so, as one of the challenges of true communication, critiques and devastating things written by “critics” are often masked or couched as insinuations, but I don’t feel like that happened here, and I kind of resent having to put my comments through these lenses.

BTW, I kind of see Alex L as one of the “best EAs”, and I sort of attribute this issue to the forum; this now sort of reinforces my distrust of EA discourse (like, I think there’s an ongoing 50-comment thread or something because a grantmaker asked someone if English was their second language, come on).