Hey Zoe and Luke, thank you for posting this and for writing the paper! I just finished reading it and found it thoughtful and detailed, and it gave me a lot to think about. It is the best piece of criticism I have read, and I will recommend it to others looking for that going forward. I can see the care, time, and revisions that went into the piece. I am very sorry to hear about your experience of writing it. I think you contributed something important, and wish you had been met with more support. I hope the community can read this post and learn from it so we can get a little closer to that ideal of how to handle, incorporate, and respond to criticism.
Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.
Epistemic status: Written late at night, in a rush, I’ll probably regret some of this in the morning but (a) if I don’t publish now, it won’t happen, and (b) I did promise extra spice after I retired.
I think you contributed something important, and wish you had been met with more support.
It seems valuable to separate “support for the action of writing the paper” from “support for the arguments in the paper”. My read is that the authors had a lot of the former, but less of the latter.
From the original post:
We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and if the approach does exist then it certainly isn’t dominant.
While “invalid” seems like too strong a word for a critic to use (and I’d be disappointed in any critic who did use it), this sounds like people were asked to review/critique the paper and then offered reviews and critiques of the paper.
Still, to the degree that there was any opposition to the act of writing the paper, that’s a problem. To address something more concerning:
It was these same people that then tried to prevent this paper from being published. They did so largely out of fear that publishing might offend key funders who are aligned with the TUA.
These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don’t know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair.
I’m not sure what “prevent this paper from being published” means, but in the absence of other points, I assume it refers to the next point of discussion (the concern around access to funding).
I’m glad the authors point out that the concerns may not be warranted. But I’ve seen many people (not necessarily the authors) make arguments like “these concerns could be real, therefore they are real”. There’s a pervasive belief that Open Philanthropy must have a specific agenda they try to fund where X-risk is concerned, and that entire orgs might be blacklisted because individual authors within those orgs criticize that agenda.
The Future of Humanity Institute (one author’s org) has dozens of researchers and has received a consistent flow of new grants from Open Phil. Based on everything I’ve ever seen Open Phil publish, and my knowledge of FHI’s place in the X-risk world, it seems inconceivable that they’d have funding cut because of a single paper that presents a particular point of view.
The same point applies beyond FHI, to other Open Phil grants. They’ve funded dozens of organizations in the AI field, with (I assume) hundreds of total scholars/thinkers in their employ; could it really be the case that at the time those grants were made, none of the people so funded had written things that ran counter to Open Phil’s agenda (for example, calls for greater academic diversity within X-risk)?
Meanwhile, CSER (the other author’s org) doesn’t appear in Open Phil’s grants database at all, and I can’t find anything that looks like funding to CSER online at any point after 2015. If you assume this is related to ideological differences between Open Phil and CSER (I have no idea), this particular paper seems like it wouldn’t change much. Open Phil can’t cut funding it doesn’t provide.
That is to say, if senior scholars expressed these concerns, I think they were unwarranted.
*****
Of course, I’m not a senior scholar myself. But I am someone who worked at CEA for three years, attended two Leaders Forums, and heard many internal/“backroom” conversations between senior leaders and/or big funders.
I’m also someone who doesn’t rely on the EA world for funding (I have marketable skills and ample savings), is willing to criticize popular people even when it costs time and energy, and cares a lot about getting incentives and funding dynamics right. I created several of the Forum’s criticism tags and helped to populate them. I put Zvi’s recent critical post in the EA Forum Digest.
I think there are things we don’t do well. I’ve seen important people present weak counterarguments to good criticism without giving the questions as much thought as seemed warranted. I’ve seen interesting opportunities get lost because people were (in my view) too worried about the criticism that might follow. I’ve seen the kinds of things Ozzie Gooen talks about here (humans making human mistakes in prioritization, communication, etc.). And I think that Ben Hoffman and Zvi have made a number of good points about problems with centralized funding and bad incentives.
But despite all that, I just can’t wrap my head around the idea that the major EA figures I’ve known would see a solid, well-thought-through critique and decide, as a result, to stop funding the people or organizations involved. It seems counter to who they are as people, and counter to the vast effort they expend on reading criticism, asking for criticism, re-evaluating their own work and each other’s work with a critical eye, etc.
I do think that I’m more trusting of people than the average person. It’s possible that things are happening in backrooms that would appall me, and I just haven’t seen them. But whenever one of these conversations comes up, it always seems to end in vague accusations without names attached or supporting documentation, even in cases where someone straight-up left the community. If things were anywhere near as bad as they’ve been represented, I would expect at least one smoking gun, beyond complaints about biased syllabi or “A was concerned that B would be mad”.
For example: Phil Torres claims to have spent months gathering reports of censorship from people all over EA, but the resulting article was remarkably insubstantial. The single actual incident he mentions in the “canceled” section is a Facebook post being deleted by an unknown moderator in 2013. I know more detail about this case than Phil shares, and he left out some critical points:
The post being from 2013, when EA as a whole was much younger/less professional
The CEA employee who called the poster being a personal friend of theirs who wanted to talk about the post’s ideas
The person who took down the post seeing this as a mistake, and something they wouldn’t do today (did Phil try to find them, so he could ask them about the incident?)
If this was Phil’s best example, where’s the rest?
I’d be sad to see a smoking gun because of what it would mean for my relationship with a community I value. But I’ve spent a lot of time trying to find one anyway, because if my work is built on sand I want to know sooner rather than later. I’ve yet to find what I seek.
*****
There was one line that really concerned me:
By others we were accused of lacking academic rigour and harbouring bad intentions.
“Lacking in rigor” sounds like a critique of the type the authors solicited (albeit one that I can imagine being presented unhelpfully).
“Harboring bad intentions” is a serious accusation to throw around, and one I’m actively angry to hear reviewers using in a case like this, where people are trying to present (somewhat) reasonable criticism and doing so with no clear incentive (rather than e.g. writing critical articles in outside publications to build a portfolio, as others have).
I’d rather have meta-discussion of the paper’s support be centered on this point, rather than the “hypothetical loss of funding” point, at least until we have evidence that the concerns of the senior scholars are based on actual decisions or conversations.
This is a great comment, thank you for writing it. I agree—I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opposite.
I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening.
I would caution against centering the discussion only around the question of whether or not OpenPhil would reduce funding in response to criticism. Importantly, what happened here is that some EAs, who had influence over funding and research positions of more junior researchers in X-risk, thought this was the case and acted on that assumption. I think it may well be the case that OpenPhil acts perfectly fine, while researchers lower in the funding hierarchy harmfully act as if OpenPhil would not act well. How can that be prevented?
To clarify: all the reviewers we approached gave critical feedback, and we incorporated it and responded to it as we saw fit without feeling offended. But the only people who said the paper had a low academic standard, that it was ‘lacking in rigor’ or that it wasn’t ‘loving enough’, were EAs who were emotionally affected by reading the paper. My point here is that in an idealised objective review process, it would be totally fine to hear that we do not meet some academic standard. But my sense was that these comments were not actually about academic rigor, but about community membership and friendship. This is understandable, but it’s not surprising that a mixture of power, community, and research can produce biased scholarship.
Very happy to have a private chat Aaron!
Thanks for writing this reply, I think this is an important clarification.
It’s hard to see what is going on, and this is producing a lot of heat and speculation. I want to present an account or framing below, to see if it matches with your beliefs and experiences.
I’d like to point out up front that the account below isn’t favorable to you, basically, but I don’t have any further deliberate goal, and I have little knowledge in this space.
Firstly, to try to reduce the heat, I will change the situation to the cause area of global health:
For background, note that Daron Acemoglu, who is really formidable, has criticized EA global health and development.
Basically, Acemoglu believes the randomista approaches used in EA could be net negative for human welfare because they supplant health institutions, reduce the functionality of the state, and slow economic development. Their effects are also hard to measure. I don’t agree, and most EAs don’t agree.
The account begins: imagine that, with dramatically increased funding, GiveWell expands and hires a bunch of researchers. GiveWell is more confident and hires less orthodox researchers who seem passionate and talented.
One year later, the very first paper from one of the newly hired researchers takes a strongly negative view of the randomista approach and directly criticizes GiveWell’s work.
The paper says EA global health and development is misguided and gives plausible reasons, but these closely follow Acemoglu and randomista critics. The paper makes statements that many aligned EAs find disagreeable, such as saying AMF’s work is unmeasurable. There are also direct criticisms of senior EAs that seem uncharitable.
However, there isn’t much original research, or many original claims, in the paper. Also, while not stated outright, the paper implies the need to restructure and change GiveWell’s fundamental work, including deleting several major programs.
Accompanying the paper, the new researcher also states they had very negative experiences when pushing out the paper, including getting heavily pressured to self-censor. They state that people suggested they had bad intentions and low scholarly ability, and that people said future funding might be pulled.
They state this too on the EA Forum.
Publicly, we never hear any more substantive details about the above. This is because people don’t want to commit to writing when it’s easy to misrepresent the facts on either side, and because certain claims make benign appeals to authority and norms unworkable.
However, the truth of what happened is prosaic:
When the researcher was getting reviews, peer and senior EAs in the space pointed out that the researcher joined GiveWell knowing full well its mission and approaches, and their paper seemed mainly political, simply drawing in and recasting existing outside arguments. Given this, some explicitly questioned the intent and motivation of the researcher.
The director of GiveWell research hired the researcher because the director herself wanted to push the frontier of EA global health and development into new policy approaches, maybe even make inroads with people doing the kind of work Acemoglu advances. Now, her newly hired researcher seems to be a wild activist. It is a nightmare communicating with them. Frustrated, the director loses sleep and doubts herself: was this her fault and incompetence?
The director knows that the things she could say to the researcher, like that they seem unable to do original research, have no value alignment with EA approaches, or have no future at GiveWell, seem true to her, but saying them could make her a lifelong enemy.
The director is also unwilling or unable to be a domineering boss over an underling.
So the director punts by saying that GiveWell’s funding depends on executing its current mission, and that papers directly undermining the current mission will undermine funding too.
This all happens over many meetings and days, where both sides are heated and highly emotional, and many things are said.
The researcher is a true believer against the randomista approach, thinks that millions of lives are at stake, and definitely doesn’t think they are unaligned (it is GiveWell that is). The researcher views all of the above as hostile, a reaction of the establishment.
Question: Do you find my account above plausible, or unfair and wildly distorted? Can you give any details or characterizations of how it differs?
What on Earth is this thinly-veiled attempt at character assassination? Do you actually have any substantive evidence that your “account” is accurate (your disclaimer at the start suggests not), or are you just fishing for a reaction?
Honestly, what did you hope to gain from this? You think this researcher is just gonna respond and say “Yep, you’re right, I’m ill-fit for my job and incapable of good academic work!” “Not favorable to you” is the understatement of the century. Not to mention your change of the area concerned in no way lowers the temperature. It just functions as CYA to avoid making forthright accusations that this researcher’s actual boss might then be called upon to publicly refute. This is one of the slimiest posts I’ve ever seen, to be perfectly honest.
Edit: I would love to see anyone who has downvoted this post explain why they think the above is defensible and how they’d react if someone did it to them.
Nah
Nah what? Nah you don’t have any evidence? That would confirm my prior.
Now why don’t you explain what you hoped to get out of that comment besides being grossly insulting to someone you don’t know on no evidential basis.
I don’t agree with your comment on its merits, or with the practice of confronting someone this way from an anonymous throwaway.
(It’s unclear, and it may be unwise, but it may be worthwhile for me to think about the downstream effects of this attitude.) But it seems justifiable that throwaways that open this sort of debate at this level of quality (the quality being a matter of perspective we won’t agree upon) can be treated really dismissively.
If you want, you can write with your real name (or PM me) and I will respond, if that’s what you really want.
Also, the downvote(s) on your comment(s) are mine.
I would be more worried about making comments of the kind that you produced above under my real name. Your comment was full of highly negative accusations about a named poster’s professional life and academic capabilities, veiled under utterly transparent hypotheticals, made in public. You offered no evidence whatsoever for any of these accusations nor did you even attempt to justify why that sort of engagement was warranted. Airing such negative judgments publicly and about a named person is an extremely serious matter whether you have evidence or not. I don’t think that you treated it with a fraction of the seriousness it deserves.
I honestly have negative interest in telling you my real name after seeing how you treat others in public, much less making an account here with my real name attached to it. I would prefer to limit your ability to do reputational damage (to me or others) on spurious or non-existent grounds as far as possible. I am honestly extremely curious as to why you thought what you did above was remotely acceptable, but I am not willing to put myself in the line of fire to find out.
I think that you think I don’t like your comments, but this isn’t close to true.
I really hope you will put your real name so I can give a real response.
(I wouldn’t share your name and generally wouldn’t use PII if you PMed me.)
Well, thanks for that. Admittedly, the downvotes seemed like good evidence to the contrary.
Unfortunately, I also couldn’t really give you my real name even if I wanted to, because the name of this account shares the name of my online persona elsewhere and I place a very high premium on anonymity. If I had thought to give it a different name, then I’d probably just PM you my real name. But I didn’t think that far ahead.
Anyway, whatever else may be, I’m sorry that I came in so hot. Sometimes I just see something that really sets me off and I consequently approach things too aggressively for my own (and others’) good.
Many of the comments in this chain, including the original narrative I wrote (which I view as closer to reality than the implicit narrative in the OP, which seems highly motivated and which I find contradicted by the OP’s subsequent comments), have likely been visited by a single person, who has been making strong upvotes and strong downvotes of magnitude 9.
While I am totally petty and vain, I don’t usually comment on upvotes or downvotes, because it seems sort of unseemly (unless it is hilarious to do so).
In this case, because of the way strong upvotes are designed, there appear to be literally only four accounts that could have this ability, and their judgement is well respected.
So I address you directly: If you have information about this, especially object level information about the underlying situation relative to my original narrative, it would be great to discuss.
The underlying motivation is that truth is a thing, and in some sense, having the recent commenter come in and stir this up was useful.
In an even deeper sense, as we all agree, EA isn’t a social club for people who got here first. EA doesn’t belong to you or me, or even to a large subset of the original founders (to be clear, for all intents and purposes, all reasonable paths will include their leadership and guidance for a long time).
Importantly, I think some good operationalizations of the above perspective, combined with awareness of all the possible instantiations of EA and the composition of people and attitudes, would rationalize a different tone and culture than the ones that exist.
So, RE: “I would be more worried about making comments of the kind that you produced above under my real name.” I think this could be exactly, perfectly, the opposite of what is true, yet it is one of the comments you strong-upvoted.
To be even more direct: I suspect, but am unsure, that the culture of discussion in EA has accumulated defects that are costly to effectiveness and truth (under the direct tenure of one of the four people who could have voted +/-9, by the way).
So the most important topic here might not be about the OP at all, which I view as just one instance of an ongoing issue—in a deep sense, it was really about the very person who came in and strong voted!
I’m not sure you see this (or that I see this fully either).
From the very beginning, I specifically constructed this account and persona to interrogate whether this is true, or something.
Circling back to the original topic. The above perspective, the related hysteresis, and the consequent effects imply that the existence of my narrative in this thread, or of me, should be challenged or removed if it’s wrong.
But I can’t really elaborate on my narrative. I can’t defend myself, because doing so slags the OP, which isn’t appropriate and opens wounds, which is unfair and harmful to everyone involved (but I sort of hoped the new commenter was the OP or a friend, which would have waived this, and that’s why I wanted their identity).
But you, the strong downvoter/upvoter, +9 dude, this is a really promising line of discussion. So come and reply?
I think it’s reasonable to not want to respond to an anonymous throwaway, but not reasonable to ask them to PM you their real name.
So, there is some normal sense in which I might have a reason to want them to “legitimize” their criticism by identifying themselves (this reason is debatable; it could be weak or very strong).
But the first comments from this person aren’t just vitriolic and a personal attack, they are adamant demands for a significant amount of writing. They disagree greatly with me, and so the explanation needed to bridge the disagreement could be very long.
The content of this writing has consequences, which are hidden from people without the explanation.
Here, I have special additional reasons to want to know their identity, because the best way to communicate the underlying events and what my comment meant depends on who they are.
Some explanations or accounts will be inflammatory, and others useless. For example, the person could be entirely new to EA, or be the OP themselves. Certain explanations, justification or “evidence”, could be hurtful and stir up wounds. Others won’t make sense at all.
In this situation, it’s reasonable to see the commenter’s demands as imposing a further, additional burden on me: having to weigh this harm (just to defend my comment), a burden which is hidden from them. Separately and additionally, I probably view this as particularly unfair because, from my perspective, the very reason I commented, and the reason things are so problematic/sensitive, is that the original environment around the post was inflammatory and hard to approach by design.
Hmm, I think I have some different ideas about discussion norms, but I’m not sure I understand them coherently myself or think it’s worth going into. I agree it’s often worthwhile to not engage.
The very reason I commented, and the reason things are so problematic/sensitive, is that the original environment around the post was inflammatory and hard to approach by design.
I agree with this.