EDIT: See Ben’s comment in the thread below on his experience as Zoe’s advisor and confidence in her good intentions.
(Opening disclaimer: this was written to express my honest thoughts, not to be maximally diplomatic. My response is to the post, not the paper itself.)
I’d like to raise a point I haven’t seen mentioned (though I’m sure it’s somewhere in the comments). EA is a very high-trust environment, and has recently become a high-funding environment. That makes it a tempting target for actors who are less intellectually honest or less pro-social.
If you just read through the post, every paragraph except the last two (and the first sentence) is mostly bravery claims (in the sense of SSC’s “Against Bravery Debates”). That is a major red flag for me when reading something on the internet about a community I know well: it’s much easier to start an online discussion about how you’re being silenced than to defend your key claims on the merits. Smaller red flags were explicit warnings of impending harms if the critique is not heeded, and anonymous accounts posting mostly low-quality comments in support of the critique (shoutout to “AnonymousEA”).
A lot of EAs have a natural tendency to defend someone who claims they’re being silenced, and give their claims some deference to avoid being uncharitable. And it’s pretty easy to exploit that tendency.
I don’t know Zoe, and I don’t want to throw accusations of exaggeration or malfeasance into the ring without cause. If these incidents occurred as described, the community should be extremely concerned. But on priors, I expect many claims of this shape (i.e., “please fund my research if you don’t want to silence criticism”) to come from a mix of unaligned academics hoping to do their own thing with LTist funding and less scrupulous, Phil Torres-style actors.
Yes, I’m leaving myself more vulnerable to a world where LTist orgs do in fact silence criticism and nobody hears about it except from brave lone researchers. But I’d like to see more evidence in support of that case before everyone gets too worried.
I believe these are authors already working at EA orgs, not “brave lone researchers” per se.
Thanks—I meant “lone” as in one or two researchers raising these concerns in isolation, not to say they were unaffiliated with an institution.
I’m not familiar with Zoe’s work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above, and being stuck with only Zoe’s word for their claims, anything from a named community member along the lines of “this person has done good research/has been intellectually honest” would be a big update for me.
And since I’ve stated my suspicions, I apologize to Zoe if their claims turn out to be substantiated. This is an extremely important post if true, although I remain skeptical.
In particular, a post of the form:
I have written a paper (link).
(12 paragraphs of bravery claims)
(1 paragraph on why EA is failing)
(1 paragraph call to action)
strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather by a desire to generate sympathy for the authors and support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience, and the author was simply venting their frustrations.
But I’d hope that authors publishing an important paper wouldn’t use its announcement solely as an opportunity for venting rather than as a discussion of the paper and its claims. That choice makes more sense if the goal is to create sympathy and marshal support without needing to defend your object-level argument.
I’m not familiar with Zoe’s work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above, and being stuck with only Zoe’s word for their claims, anything from a named community member along the lines of “this person has done good research/has been intellectually honest” would be a big update for me…. [The post] strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather to generate sympathy for the authors and support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience, and the author was simply venting their frustrations.
(Hopefully I’m not overstepping; I’m just reading this thread now and thought someone ought to reply.)
I’ve worked with Zoe and am happy to vouch for her intentions here; I’m sure others would be as well. I served as her advisor at FHI for a bit more than a year, and have now known her for a few years. Although I didn’t review this paper, and don’t have any detailed or first-hand knowledge of the reviewer discussions, I have also talked to her about this paper a few different times while she’s been working on it with Luke.
I’m very confident that this post reflects genuine concern/frustration; it would be a mistake to dismiss it as (e.g.) a strategy to attract funding or bias readers toward accepting the paper’s arguments. In general, I’m confident that Zoe genuinely cares about the health of the EA and existential risk communities and that her critiques have come from this perspective.
Thanks Ben! That’s very helpful info. I’ll edit the initial comment to reflect my lowered credence in exaggeration or malfeasance.