Hi Zachary,

First off, I want to thank you for taking what was obviously a substantial amount of time to reply (and also to Sarah in another comment that I haven’t had time to reply to). This, fwiw, is already well above the level of community engagement that I’ve perceived from most previous heads of CEA.
On your specific comments, it’s possible that we agree more than I expected. Nonetheless, there are still some substantial concerns they raise for me. In typical Crocker-y fashion, I hope you’ll appreciate that my focusing on the disagreements for the rest of this comment doesn’t imply that they’re my entire impression. Should you consider replying to this, know that I appreciate your time, and I hope you feel able to reply to individual points without feeling morally compelled to respond to the whole thing. I’m giving my concerns here as much for your and the community’s information as in the hope of a further response.
> I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself.
In some sense this is obviously true, but I believe it gerrymanders the difference between ‘what’ and ‘how’.
For example, to my mind ‘scout mindset’ doesn’t seem any more central a goal than ‘be transparent’. In the post by Peter you linked, his definition of it sounds remarkably like ‘be transparent’, to wit: ‘the view that we should be open, collaborative, and truth-seeking in our understanding of what to do’.
One can imagine a world where we should rationally stop exploring new ideas and just make the best of the information we have (this isn’t so hard to imagine if it’s understood as a temporary measure to firefight urgent situations), and where major charities can make substantial decisions without explanation and this tends to produce trustworthy and trusted policies—but I don’t think we live in either world most of the time.
In the actual world, the community doesn’t really know, for example: with what weighting CEA prioritises longtermist causes over others; how it prioritises AI vs other longtermist causes; how it runs admissions at EAGs; why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed, while similar ones stay at the top level; why the ‘community’ tag has been made admin-editable-only; what regional pro rata rates CEA uses when contracting externally; what your funding breakdown looks like (or even the absolute amount); what the inclusion criteria for ‘leadership’ forums are, or who the attendees are; or many, many other such questions people in the community have urgently raised. And we don’t have any regular venue for discussing such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding—a simple weekly office hours policy could fix this.
> confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team
Confidentiality is largely unrelated to transparency. If in any context someone speaks to someone else in confidence, there have to be exceptionally good reasons for breaking that confidence. None of what I’m pointing at in the previous paragraph would come close to asking them to do that.
> Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process.
I think this statement was part of the problem… We as a community have no information on which to evaluate the statement, and no particular reason to take it at face value. Are there concrete examples of people gaming the system this way? Is there empirical data showing some patterns that justify this assertion (and comparing it to the upsides)? I know experienced EA event organisers who explicitly claim she’s wrong on this. As presented, Labenz’s statement is in itself a further example of lack of transparency that seems not to serve the community—it’s a proclamation from above, with no follow-up, on a topic that the EA community would actively like to help out with if we were given sufficient data.
This raises a more general point—transparency doesn’t just allow the community to criticise CEA, but enables individuals and other orgs to actively help find useful info in the data that CEA otherwise wouldn’t have had the bandwidth to uncover.
> I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity
These scenarios get wheeled out repeatedly for this sort of discussion (Chris Leong basically used the same ones elsewhere in this thread), but I find them somewhat disingenuous. For most charities, including all core-to-the-community EA charities, this is not a concern. I certainly hope CEA doesn’t deal in biosecurity or international politics—if it does, then the lack of transparency is much worse than I thought!
> Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency
All of the concerns they list there apply equally to all the charities that GiveWell, EA Funds, etc. expect to be transparent. I see no principled reason in that article why CEA, OP, EA Funds, GWWC or any other regranters should expect so much more transparency from others than they’re willing to offer themselves. Briefly going through their three key arguments:
‘Challenge 1: protecting our brand’ - empirically, I think this is something CEA and EV have substantially failed to do in the last few years. And in most of the major cases (continual failure for anyone to admit any responsibility for FTX; confusion around Wytham Abbey—the fact that that was ‘other CEA’ notwithstanding; PELTIV scores and other elitism-favouring policies; the community health team not disclosing allegations against Owen (or, more politicly, ‘a key member of our organisation’) sooner; etc.) the bad feeling was explicitly over lack of transparency. I think publishing some half-baked explanations that summarised the actual thinking at the time (rather than in response to critics later exposing these issues) would a) have given people far less to complain about, and b) possibly have generated (kinder) pushback from the community that might have averted some of the problems as they eventually manifested. I have also argued that CEA’s historical media policy of ‘talk as little as possible to the media’ both left a void in media discussion of the movement that was filled by the most vociferous critics and generally worsened the epistemics of the movement.
‘Challenge 2: information about us is information about grantees’ - this mostly doesn’t apply to CEA. Your grantees are the community and community orgs, both of whom would almost certainly like more info from you. (It also does apply to non-meta charities like GiveDirectly, whom we nonetheless expect to gather large amounts of info on the community they’re serving—but in that situation we think it’s a good tradeoff.)
‘Challenge 3: transparency is unusual’ - this seems more like a whinge than a real objection. Yes, it’s a higher standard than the average nonprofit holds itself to. The whole point of the EA movement was to encourage higher standards in the world. If we can’t hold ourselves to those raised standards, it’s hard to have much hope that we’ll ever inspire meaningful change in others.
> I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity
This may be quibbling, but I would consider focusing on visible subsets of the animal population (esp. pets) a form of partiality. This particular disagreement doesn’t matter much, but it illustrates why I don’t think gesturing towards principles that are really not that well defined is that helpful for giving a sense of what we can expect CEA to do in future.
> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”
I think this is politicianspeak. If AMF said ‘our primary goal is having a positive impact on the world rather than distributing bednets’ and used that as a rationale to remove their hyperfocus on bednets, I’m confident a) that they would have a much less positive impact on the world, and b) that GiveWell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful—if you later find out that your core competencies aren’t that valuable, you can either disband the organisation or attempt a radical pivot (as Charity Science’s founders did on multiple occasions!).
> I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion … We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)
I think this, along with the transparency question, is our biggest disagreement and/or misunderstanding. There’s a major equivocation going on here between exactly *which* members of the community you’re serving. I am entirely in favour of cutting costs at EAGs (the free wine at one I went to tasted distinctly of dead children), and of reducing all-expenses-paid forums for ‘people leading EA community-building’. I want to see CEA support people who actually need support to do good—the low-level community builders with little to no career development, especially in low- or middle-income countries whose communities are being starved; the small organisations with good track records but mercurial funding; all the talented people who didn’t go to top-100 universities and therefore get systemically deprioritised by CEA. These people were never major beneficiaries of the boom, but were given false expectations during it and have been struggling in the general pullback ever since.
> For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction.
I think the focus would be better placed on why attendees are satisfied or dissatisfied. If I go to an event and feel motivated to work harder at what I’m already doing, or build a social network that makes me feel enough better about my life that I counterfactually make or keep a pledge, these things are equally important. There’s something very paternalistic about CEA assuming it knows better than the members of the community themselves what makes them more effective. And, as with any metric, ‘positive career changes’ can be gamed, or could just be the wrong thing to focus on.
> I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP.
If both CEA and its donors are effectiveness-minded, this shouldn’t really be a distinction—per my comments about focus above, serving CEA’s community is about the most effective thing an org with a community focus can do, and so one would hope the donors would favour it. But also, this argument would be stronger if CEA only took money from major donors. As it is, as long as CEA accepts donations from the community, sometimes actively solicits them, and broadly requires them (subject to an honesty policy) from people attending EAGs—then your donors are the community and hence, either way, your customers.
(I work on the Forum but I am only speaking for myself.)
To respond to some bits related to the Forum:
> In the actual world, the community doesn’t really know… why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed while similar ones stay at the top level
If you’re referring to “why” as in, what criteria are used for determining when to tag a post as “Community”, those are listed in the Community topic page. If you’re referring to “why” as in, how does that judgement happen, this is done by either the post author or a Forum Facilitator (as described here).
> In the actual world, the community doesn’t really know… why the ‘community’ tag has been made admin-editable-only
We provided a brief explanation in this Forum update post. The gist is that we would like to prevent misuse (i.e. people applying it to posts because they wanted to move them down, or people removing it from posts because they wanted to move them up).
Thank you for flagging your interest in this information! In general we don’t publicly post about every small technical change we make on the Forum, as it’s hard to know what people are interested in reading about. If you have additional questions about the Forum, please feel free to contact us.
In general, our codebase is open source, so you’re welcome to look at our PR descriptions. It’s true that those can be sparse sometimes — feel free to comment on the PR if you have questions about it.
> we don’t have any regular venue for being able to discuss such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding—a simple weekly office hours policy could fix this.
If you have questions for the Forum team, you’re welcome to contact us at any time. I know that we have not been perfect at responding but we do care about being responsive and do try to improve. You can DM me directly if you don’t get a response; I am happy to answer questions about the Forum. I also attend multiple EAG(x) conferences each year and am generally easy to talk to there—I take a shift at the CEA org fair booth (if I am not too busy volunteering), and fill my 1:1 slots with user interviews asking people for feedback on the Forum. I think most people are excited for others to show an interest in their work, and that applies to me as well! :)
> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”
> I think this is politicianspeak. If AMF said ‘our primary goal is having a positive impact on the world rather than distributing bednets’ and used that as a rationale to remove their hyperfocus on bednets, I’m confident a) that they would have a much less positive impact on the world, and b) that GiveWell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful—if you later find out that your core competencies aren’t that valuable, you can either disband the organisation or attempt a radical pivot (as Charity Science’s founders did on multiple occasions!).
I personally disagree that it would be better for CEA to have a goal that includes a specific solution to its overarching goal. I think it is often the case that it’s better to focus on outcomes rather than specific solutions. In the specific case of the Forum team, having an overarching goal that is about having a positive impact means that we feel free to do work that is unrelated to the Forum if we believe that it will be impactful. This can take the shape of, for example, a month-long technical project for another organization that has no tech team. I think if our goal were more like “have a positive impact by improving the EA Forum” that would be severely limiting.
I also personally disagree that this is “politicianspeak”, in the sense that I believe the quoted text is accurate, will help you predict our future actions, and highlights a meaningful distinction. I’ll refer back to an example from my other long comment: when we released the big Forum redesign, the feedback from the community was mostly negative, and yet I believe it was the right thing to do from an impact perspective (as it gave the site a better UX for new users). I think there are very few examples of us making a change to the Forum that the community overall disagrees with, but I think it is both more accurate for us to say that “our primary goal is having a positive impact on the world”, and better for the world that that is our primary goal (rather than “community satisfaction”).