These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don’t know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst.
Strong upvote, especially to signal my support of:
Maybe my models are off but I find it hard to believe that anyone actually said that. Are we sure people said “Please don’t criticize central figures in EA because it may lead to an inability to secure EA funding?”
That sounds to me like a thing only cartoon villains would say.
“Please don’t criticize central figures in EA because it may lead to an inability to secure EA funding?” I have heard this multiple times from different sources in EA.
This is interesting if true. With respect to this paper in particular, I don’t really get why anyone would advise the authors not to publish it. It doesn’t seem like it would affect CSER’s funding, since as I understand it (maybe I’m wrong) they don’t get much EA money and it’s hard to see how it would affect FHI’s funding situation. The critiques don’t seem to me to be overly personal, so it’s difficult to see why publishing it would be overly risky.
Why “if true”? Why would Joey misrepresent his own experiences?
yeah fair i didn’t mean it like that
Strongly upvoted, and me too. Which sources do you have in mind? We can compare lists if you like. I’d be willing to have that conversation in private but for the record I expect it’d be better to have it in public, even if you’d only be vague about it.
I think the rationale behind making such a statement is less about specific funding for the individuals making it and more about funding for the EA movement as a whole. It goes roughly: most of the funding EA has comes from a small number of high-net-worth individuals, who think donating to EA is a good idea because of their relationship with, and trust in, central figures in EA. By criticising those figures, you decrease the chance of those figures drawing more high-net-worth individuals to donate to EA. Hence, criticising central figures in EA is bad.
(Not saying that I agree with this line of reasoning, but it seems plausible to me that people would make such a statement because of this reasoning.)
I might be able to provide a bit of context:
I think the devil is really in the details here. I think there are some reasonable versions of this.
The big question is why and how you’re criticizing people, and what that reveals about your beliefs (and what those beliefs are).
As an extreme example, imagine if a trusted researcher came out publicly, saying,
“EA is a danger to humanity because it’s stopping us from getting to AGI very quickly, and we need to raise as much public pressure against EA as possible, as quickly as possible. We need to shut EA down.”
If I were a funder, and I were funding researchers, I’d be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.
It’s possible to use criticism to improve a field or try to destroy it.
I’m a big fan of positive criticism, but I think some kinds of criticism can be destructive (see a lot of politics, for example).
I know less about this particular circumstance; I’m just pointing out how the other side might see it.
This is all reasonable but none of your comment addresses the part where I’m confused. I’m confused about someone saying something that’s either literally the following sentence, or identical in meaning to:
“Please don’t criticize central figures in EA because it may lead to an inability to secure EA funding.”
If I were a funder, and I were funding researchers, I’d be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.
That part of the example makes sense to me. What I don’t understand is the following:
In your example, imagine you’re a friend, colleague, or an acquaintance of that researcher who considers publishing their draft about how EA needs to be stopped because it’s slowing down AGI. What do you tell them? It seems like telling them “The reason you shouldn’t publish this piece is that you [or “we,” in case you’re affiliated with them] might no longer get any funding” is a strange non sequitur. If you think they’re right about their claim, it’s really important to publish the article anyway. If you think they’re wrong, there are still arguments in favor of discussing criticism openly, but also arguments against confidently advocating drastic measures unilaterally and based on brittle arguments. If you thought the article was likely to do damage, the intrinsic damage is probably larger than no longer getting funding?
I can totally see EAs advocating against the publication of certain articles that they think are needlessly incendiary and mostly wrong, too uncharitable, or unilateral and too strongly worded. I don’t share those concerns personally (I think open discussion is almost always best), but I can see other people caring about those things more strongly. I was thrown off by the idea that people would mention funding as the decisive consideration against publication. I still feel confused about this, but now I’m curious.
Very happy to have a private chat and tell you about our experience then.
I’m curious about this and would be happy to hear more about it if you’re comfortable sharing. I’ll get in touch (and would make sure to read the full article before maybe chatting)!
Update: Zoe and I had a call and the private info she shared with me convinced me that some people with credentials or track record in EA/longtermist research indeed discouraged publication of the paper based on funding concerns. I realized that I originally wasn’t imaginative enough to think of situations where those sorts of concerns could apply (in the sense that people would be motivated to voice them for common psychological reasons and not as cartoon villains). When I thought about how EA funding generates pressure to conform, I was much too focused on the parts of EA I was most familiar with. That said, the situation in question arose because of specific features coming together – it wouldn’t be accurate to say that all areas of the EA ecosystem face the same pressures to conform. (I think Zoe agrees with this last bit.) Nonetheless, looking forward I can see similar dynamics happening again, so I think it’s important to have identified this as a source of bias.
I want to flag that “That sounds to me like a thing only cartoon villains would say.” is absolutely contrary to discourse norms on the forum. I don’t think it was said maliciously, but it’s definitely not “kind,” and it does not “approach disagreements with curiosity.”

Edit: Clearly, I read this very differently than others, and given that, I’m happy to retract my claim that this was mean-spirited.
When I wrote my comment, I worried it would be unkind to Zoe because I’m also questioning her recollection of what people said.
Now that it looks like people did in fact say the thing exactly the way I quoted it (or identical to it in meaning and intent), my comment looks more unkind toward Zoe’s critics.
Edit: Knowing for sure that people actually said this, I obviously no longer think they must be cartoon villains. (But I remain confused.)
fwiw I was not offended at all.
I’m a bit lost, are you saying that the quotes you have seen were or were not as cartoon villainish as you thought?
I haven’t seen any quotes, but Joey saying he had the same experience, Zoe confirming that she didn’t misremember this part, and none of the reviewers speaking up to say “This isn’t how things happened” all made me update toward thinking that one or more people actually did say the thing I considered cartoonish.
And because people are never cartoon villains in real life, I’m now trying to understand what their real motivations were.
For instance, one way I thought the comment could make sense is if someone brought it up because they are close to Zoe and care most about her future career and how she’ll be doing, and they already happen to have a (to me very surprising) negative view of EA funders and are pessimistic about bringing about change. In that scenario, it makes sense to voice the concerns for Zoe’s sake.
Initially, I simply assumed that the comment must be coming from the people who have strong objections to (parts of) Zoe’s paper. And I was thinking “If you think the paper is really unfair, why not focus on that? Why express a concern about funding that only makes EA look even worse?”
So my new model is that the people who gave Zoe this sort of advice may not have been defending EA at all, but rather shared Zoe’s criticisms or were, if anything, more pessimistic than Zoe.
(I’m probably wrong about the above hypothesis, but then I’m back to being confused.)
It might be useful to hear from the reviewers themselves as to the thought process here. As mentioned above, I don’t really understand why anyone would advise the authors not to publish this. For comparison, I have published several critiques of the research of several Open Phil-funded EA orgs while working at an Open Phil-funded EA org. In my experience, if the arguments are good, it doesn’t really matter if you disagree with something Open Phil funds. Perhaps that is not true in this domain for some reason?
This is also how I interpreted the situation.
(In my words: Some reviewers like and support Zoe and Luke but are worried about the sustainability of their funding situation because of the model that these reviewers have of some big funders. So these reviewers are well-intentioned and supportive in their own way. I just hope that their worries are unwarranted.)
I think a third hypothesis is that they really think funding whatever we are funding at the moment is more important than continuing to check whether we are right, and they don’t see the problems with this attitude (perhaps because the problem is more visible from a movement-wide, longterm perspective than from an immediate, local one?).
As a moderator, I thought Lukas’s comment was fine.
I read it as a humorous version of “this doesn’t sound like something someone would say in those words”, or “I cast doubt on this being the actual thing someone said, because people generally don’t make threats that are this obvious/open”.
Reading between the lines, I saw the comment as “approaching a disagreement with curiosity” by implying a request for clarification or specification (“what did you actually hear someone say?”). Others seem to have read the same implication, though Lukas could have been clearer in the first place and I could be too charitable in my reading.
Compared to this comment, I thought Lukas’s added something to the conversation (though the humor perhaps hurt more than helped).
*****
On a meta level, I upvoted David’s comment because I appreciate people flagging things for potential moderation, though I wish more people would use the Report button attached to all comments and posts (which notifies all mods automatically, so we don’t miss things).
I appreciated Lukas’ comment as I had the same reaction. The idea somebody would utter this sentence and not cringe about having said something so obviously wrongheaded feels very off. I think adding something like “Hey, this specific claim would be almost shockingly surprising for my current models /gesturing at the reason why/” is a useful prompt/invitation for further discussion, and not unkind or uncurious.
...oh dear
This community is entering a rough patch, I feel.
As a moderator, I agree with David that this comment doesn’t abide by community norms.
It’s not a serious offense, because “oh dear” is a mild comment that isn’t especially detrimental to a conversation on its own. But if a reply implies that a post or comment is representative of some bad trend, or that the author should feel bad/embarrassed about what they wrote, and doesn’t actually say why, it adds a lot more heat than light.
I commented that the above comment doesn’t abide by community norms, but I don’t think this comment does, either.
Commenting guidelines:
Aim to explain, not persuade
Try to be clear, on-topic, and kind
Approach disagreements with curiosity