Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.
Epistemic status: Written late at night, in a rush. I'll probably regret some of this in the morning, but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice after I retired.
I think you contributed something important, and wish you had been met with more support.
It seems valuable to separate "support for the action of writing the paper" from "support for the arguments in the paper". My read is that the authors had a lot of the former, but less of the latter.
From the original post:
We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and if the approach does exist then it certainly isn't dominant.
While "invalid" seems like too strong a word for a critic to use (and I'd be disappointed in any critic who did use it), this sounds like people were asked to review/critique the paper and then offered reviews and critiques of the paper.
Still, to the degree that there was any opposition to the act of writing the paper, that's a problem. To address something more concerning:
It was these same people that then tried to prevent this paper from being published. They did so largely out of fear that publishing might offend key funders who are aligned with the TUA.
These individuals, often senior scholars within the field, told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair.
I'm not sure what "prevent this paper from being published" means, but in the absence of other points, I assume it refers to the next point of discussion (the concern around access to funding).
I'm glad the authors point out that the concerns may not be warranted. But I've seen many people (not necessarily the authors) make arguments like "these concerns could be real, therefore they are real". There's a pervasive belief that Open Philanthropy must have a specific agenda they try to fund where X-risk is concerned, and that entire orgs might be blacklisted because individual authors within those orgs criticize that agenda.
The Future of Humanity Institute (one author's org) has dozens of researchers and has received a consistent flow of new grants from Open Phil. Based on everything I've ever seen Open Phil publish, and my knowledge of FHI's place in the X-risk world, it seems inconceivable that they'd have funding cut because of a single paper that presents a particular point of view.
The same point applies beyond FHI, to other Open Phil grants. They've funded dozens of organizations in the AI field, with (I assume) hundreds of total scholars/thinkers in their employ; could it really be the case that at the time those grants were made, none of the people so funded had written things that ran counter to Open Phil's agenda (for example, calls for greater academic diversity within X-risk)?
Meanwhile, CSER (the other author's org) doesn't appear in Open Phil's grants database at all, and I can't find anything that looks like funding to CSER online at any point after 2015. If you assume this is related to ideological differences between Open Phil and CSER (I have no idea), this particular paper seems like it wouldn't change much. Open Phil can't cut funding it doesn't provide.
That is to say, if senior scholars expressed these concerns, I think they were unwarranted.
*****
Of course, I'm not a senior scholar myself. But I am someone who worked at CEA for three years, attended two Leaders Forums, and heard many internal/"backroom" conversations between senior leaders and/or big funders.
I'm also someone who doesn't rely on the EA world for funding (I have marketable skills and ample savings), is willing to criticize popular people even when it costs time and energy, and cares a lot about getting incentives and funding dynamics right. I created several of the Forum's criticism tags and helped to populate them. I put Zvi's recent critical post in the EA Forum Digest.
I think there are things we don't do well. I've seen important people present weak counterarguments to good criticism without giving the questions as much thought as seemed warranted. I've seen interesting opportunities get lost because people were (in my view) too worried about the criticism that might follow. I've seen the kinds of things Ozzie Gooen talks about here (humans making human mistakes in prioritization, communication, etc.). I think that Ben Hoffman and Zvi have made a number of good points about problems with centralized funding and bad incentives.
But despite all that, I just can't wrap my head around the idea that the major EA figures I've known would see a solid, well-thought-through critique and decide, as a result, to stop funding the people or organizations involved. It seems counter to who they are as people, and counter to the vast effort they expend on reading criticism, asking for criticism, re-evaluating their own work and each other's work with a critical eye, etc.
I do think that I'm more trusting of people than the average person. It's possible that things are happening in backrooms that would appall me, and I just haven't seen them. But whenever one of these conversations comes up, it always seems to end in vague accusations without names attached or supporting documentation, even in cases where someone straight-up left the community. If things were anywhere near as bad as they've been represented, I would expect at least one smoking gun, beyond complaints about biased syllabi or "A was concerned that B would be mad".
For example: Phil Torres claims to have spent months gathering reports of censorship from people all over EA, but the resulting article was remarkably insubstantial. The single actual incident he mentions in the "canceled" section is a Facebook post being deleted by an unknown moderator in 2013. I know more detail about this case than Phil shares, and he left out some critical points:
The post being from 2013, when EA as a whole was much younger/less professional
The CEA employee who called the poster being a personal friend of theirs who wanted to talk about the post's ideas
The person who took down the post seeing this as a mistake, and something they wouldn't do today (did Phil try to find them, so he could ask them about the incident?)
If this was Phil's best example, where's the rest?
I'd be sad to see a smoking gun because of what it would mean for my relationship with a community I value. But I've spent a lot of time trying to find one anyway, because if my work is built on sand I want to know sooner rather than later. I've yet to find what I seek.
*****
There was one line that really concerned me:
By others we were accused of lacking academic rigour and harbouring bad intentions.
"Lacking in rigor" sounds like a critique of the type the authors solicited (albeit one that I can imagine being presented unhelpfully).
"Harboring bad intentions" is a serious accusation to throw around, and one I'm actively angry to hear reviewers using in a case like this, where people are trying to present (somewhat) reasonable criticism and doing so with no clear incentive (rather than e.g. writing critical articles in outside publications to build a portfolio, as others have).
I'd rather have meta-discussion of the paper's support be centered on this point, rather than the "hypothetical loss of funding" point, at least until we have evidence that the concerns of the senior scholars are based on actual decisions or conversations.
This is a great comment, thank you for writing it. I agree; I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil; obviously the broader conversation about funding should not only focus on them) in fact suggests the opposite.
I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening.
I would caution against centering the discussion only around the question of whether or not OpenPhil would reduce funding in response to criticism. Importantly, what happened here is that some EAs, who had influence over funding and research positions of more junior researchers in X-risk, thought this was the case and acted on that assumption. I think it may well be the case that OpenPhil acts perfectly fine, while researchers lower in the funding hierarchy harmfully act as if OpenPhil would not act well. How can that be prevented?
To clarify: all the reviewers we approached gave critical feedback, and we incorporated it and responded to it as we saw fit, without feeling offended. But the only people who said the paper had a low academic standard, that it was "lacking in rigor" or that it wasn't "loving enough", were EAs who were emotionally affected by reading the paper. My point here is that in an idealised objective review process, it would be totally fine to hear that we do not meet some academic standard. But my sense was that these comments were not actually about academic rigor, but about community membership and friendship. This is understandable, but it's not surprising that a mixture of power, community and research can produce biased scholarship.
Very happy to have a private chat Aaron!
Thanks for writing this reply, I think this is an important clarification.
It's hard to see what is going on, and this is producing a lot of heat and speculation. I want to present an account or framing below, and to see if it matches with your beliefs and experiences.
I'd like to point out that the account below isn't favorable to you, basically, but I don't have any further deliberate goal, and I have little knowledge in this space.
Firstly, to try to reduce the heat, I will change the situation to the cause area of global health:
For background, note that Daron Acemoglu, who is really formidable, has criticized EA global health and development.
Basically, Acemoglu believes the randomista approaches used in EA could be net negative for human welfare because they supplant health institutions, reduce the functionality of the state, and slow economic development. Their effects are also hard to measure. I don't agree, and most EAs don't agree.
The account begins: imagine that, with dramatically increased funding, GiveWell expands and hires a bunch of researchers. GiveWell is more confident and hires less orthodox researchers who seem passionate and talented.
One year later, the very first paper of one of the newly hired researchers takes a strongly negative view of the randomista approach and directly criticizes GiveWell's work.
The paper says EA global health and development is misguided and gives plausible reasons, but these closely follow Acemoglu and other randomista critics. The paper makes statements that many aligned EAs find disagreeable, such as saying AMF's work is unmeasurable. There are also direct criticisms of senior EAs that seem uncharitable.
However, there isn't a lot of original research or original claims in the paper. Also, while not stated, the paper implies the need to restructure and change the fundamental work of GiveWell, including deleting several major programs.
Accompanying the paper, the new researcher also states that they had very negative experiences when pushing out the paper, including being heavily pressured to self-censor. They state that people suggested they had bad intentions and low scholarship ability, and that people said future funding might be pulled.
They state this too on the EA Forum.
Publicly, we never hear any more substantive details about the above. This is because people don't want to commit to writing when it's easy to misrepresent the facts on either side, and certain claims make the benign appeal to authority and norms unworkable.
However, the truth of what happened is prosaic:
When the researcher was getting reviews, peer and senior EAs in the space pointed out that the researcher joined GiveWell knowing full well its mission and approaches, and their paper seemed mainly political, simply drawing in and recasting existing outside arguments. Given this, some explicitly questioned the intent and motivation of the researcher.
The director of GiveWell research hired the researcher because she herself wanted to push the frontier of EA global health and development into new policy approaches, maybe even make inroads with people doing the kind of work advanced by Acemoglu. Now, her newly hired researcher seems to be a wild activist. It is a nightmare communicating with them. Frustrated, the director loses sleep and doubts herself: was this her fault and incompetence?
The director knows that certain things she could say to the researcher (that they seem unable to do original research, that they have no value alignment with EA approaches, or that their path has no future at GiveWell) seem true to her, but saying them could make her a lifelong enemy.
The director is also unwilling or unable to be a domineering boss over an underling.
So the director punts by saying that GiveWell's funding is dependent on executing its current mission, and that papers directly undermining the current mission will undermine funding too.
This all happens over many meetings and days, where both sides are heated and highly emotional, and many things are said.
The researcher is a true believer against the randomista approach, thinks that millions of lives are at stake, and definitely doesn't think they are unaligned (it is GiveWell that is). The researcher views all the above as hostile, a reaction of the establishment.
Question: Do you find my account above plausible, or unfair and wildly distorted? Can you give any details or characterizations of how it differs?
What on Earth is this thinly-veiled attempt at character assassination? Do you actually have any substantive evidence that your "account" is accurate (your disclaimer at the start suggests not), or are you just fishing for a reaction?
Honestly, what did you hope to gain from this? You think this researcher is just gonna respond and say "Yep, you're right, I'm ill-fit for my job and incapable of good academic work!" "Not favorable to you" is the understatement of the century. Not to mention your change of the area concerned in no way lowers the temperature. It just functions as CYA to avoid making forthright accusations that this researcher's actual boss might then be called upon to publicly refute. This is one of the slimiest posts I've ever seen, to be perfectly honest.
Edit: I would love to see anyone who has downvoted this post explain why they think the above is defensible and how they'd react if someone did it to them.
Nah what? Nah, you don't have any evidence? That would confirm my prior.
Now, why don't you explain what you hoped to get out of that comment besides being grossly insulting to someone you don't know, on no evidential basis.
I don't agree with your comment on its merits, or with the practice of confronting someone this way from an anonymous throwaway.
(It's unclear, and it may be unwise but worthwhile for me to think about the consequent effects of this attitude.) Still, it seems justifiable that throwaways which begin this sort of debate (its quality being a matter of perspective we won't agree upon) can be treated really dismissively.
If you want, you can write with your real name (or PM me) and I will respond, if that's what you really want.
Also, the downvote(s) on your comment(s) are mine.
I would be more worried about making comments of the kind that you produced above under my real name. Your comment was full of highly negative accusations about a named poster's professional life and academic capabilities, veiled under utterly transparent hypotheticals, made in public. You offered no evidence whatsoever for any of these accusations, nor did you even attempt to justify why that sort of engagement was warranted. Airing such negative judgments publicly and about a named person is an extremely serious matter whether you have evidence or not. I don't think that you treated it with a fraction of the seriousness it deserves.
I honestly have negative interest in telling you my real name after seeing how you treat others in public, much less making an account here with my real name attached to it. I would prefer to limit your ability to do reputational damage (to me or others) on spurious or non-existent grounds as far as possible. I am honestly extremely curious as to why you thought what you did above was remotely acceptable, but I am not willing to put myself in the line of fire to find out.
Well, thanks for that. Admittedly, the downvotes seemed like good evidence to the contrary.
Unfortunately, I also couldn't really give you my real name even if I wanted to, because the name of this account shares the name of my online persona elsewhere and I place a very high premium on anonymity. If I had thought to give it a different name, then I'd probably just PM you my real name. But I didn't think that far ahead.
Anyway, whatever else may be, I'm sorry that I came in so hot. Sometimes I just see something that really sets me off, and I consequently approach things too aggressively for my own (and others') good.
Many of the comments in this comment chain, including the original narrative I wrote (which I view as closer to reality than the implicit, difficult narrative I see in the OP, which seems highly motivated and which I find contradicted by the OP's subsequent comments), have been visited by what is likely a single person, who has made strong downvotes and strong upvotes of magnitude 9.
So probably a single person has come in and used a strong upvote or downvote of magnitude 9.
While I am totally petty and vain, I don't usually comment on upvotes or downvotes, because it seems sort of unseemly (unless it is hilarious to do so).
In this case, because of the way strong upvotes are designed, there appears to be literally only 4 accounts who could have this ability, and their judgement is well respected.
So I address you directly: If you have information about this, especially object level information about the underlying situation relative to my original narrative, it would be great to discuss.
The underlying motivation is that truth is a thing, and in some sense, having the recent commenter come in and stir this up was useful.
In an even deeper sense, as we all agree, EA isn't a social club for people who got here first. EA doesn't belong to you or me, or even a large subset of the original founders (to be clear, for all intents and purposes, all reasonable paths will include their leadership and guidance for a long time).
Importantly, I think some good operationalizations of the above perspective, combined with awareness of all the possible instantiations of EA, and the composition of people and attitudes, would rationalize a different tone and culture than exists.
So, RE: "I would be more worried about making comments of the kind that you produced above under my real name." I think this could be exactly, perfectly, the opposite of what is true, yet it is one of the comments you strong-upvoted.
To be even more direct: I suspect, but I am unsure, that the culture of discussion in EA has accumulated defects that are costly to effectiveness and truth (under the direct tenure of one of the four people who could have voted +/-9, by the way).
So the most important topic here might not be about the OP at all, which I view as just one instance of an ongoing issue. In a deep sense, it was really about the very person who came in and strong-voted!
I'm not sure you see this (or that I see this fully either).
From the very beginning, I specifically constructed this account and persona to interrogate whether this is true, or something.
Circling back to the original topic. The above perspective, the related hysteresis, and the consequent effects imply that the existence of my narrative in this thread, or myself, should be challenged or removed if it's wrong.
But I can't really elaborate on my narrative. I can't defend myself, because doing so slags the OP, which isn't appropriate and opens wounds, which is unfair and harmful to everyone involved. (But I sort of hoped the new commenter was the OP or a friend, which would have waived this, and that's why I wanted their identity.)
But you, the strong downvoter/upvoter, the +9 dude: this is a really promising line of discussion. So come and reply?
So, there is some normal sense in which I might have a reason to want them to "legitimize" their criticism by identifying themselves (this reason is debatable; it could be weak or very strong).
But the first comments from this person aren't just vitriolic and a personal attack; they are adamant demands for a significant amount of writing. They disagree greatly with me, so the explanation needed to bridge the gap in opinion could be very long.
The content of this writing has consequences, which are hidden from people without the explanation.
Here, I have special additional reasons to want to know their identity, because the best way to communicate the underlying events and what my comment meant depends on who they are.
Some explanations or accounts will be inflammatory, and others useless. For example, the person could be entirely new to EA, or be the OP themselves. Certain explanations, justifications, or "evidence" could be hurtful and stir up wounds. Others won't make sense at all.
In this situation, it's reasonable to see how the commenter's demands impose further, additional burdens on me: having to weigh this harm (just to defend my comment), which is hidden from them. Separately and additionally, I probably view this as particularly unfair, as from my perspective, I think the very reason/issue why I commented and why things are so problematic/sensitive, was because the original environment around the post was inflammatory and hard to approach by design.
Hmm, I think I have some different ideas about discussion norms, but I'm not sure if I understand them coherently myself/think it's worth going into. I agree it's often worthwhile to not engage.
I think the very reason/issue why I commented and why things are so problematic/sensitive, was because the original environment around the post was inflammatory and hard to approach by design.
Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.
Epistemic status: Written late at night, in a rush, Iâll probably regret some of this in the morning but (a) if I donât publish now, it wonât happen, and (b) I did promise extra spice after I retired.
It seems valuable to separate âsupport for the action of writing the paperâ from âsupport for the arguments in the paperâ. My read is that the authors had a lot of the former, but less of the latter.
From the original post:
While âinvalidâ seems like too strong a word for a critic to use (and Iâd be disappointed in any critic who did use it), this sounds like people were asked to review/âcritique the paper and then offered reviews and critiques of the paper.
Still, to the degree that there was any opposition for the action of writing the paper, thatâs a problem. To address something more concerning:
Iâm not sure what âprevent this paper from being publishedâ means, but in the absence of other points, I assume it refers to the next point of discussion (the concern around access to funding).
Iâm glad the authors point out that the concerns may not be warranted. But Iâve seen many people (not necessarily the authors) make arguments like âthese concerns could be real, therefore they are realâ. Thereâs a pervasive belief that Open Philanthropy must have a specific agenda they try to fund where X-risk is concerned, and that entire orgs might be blacklisted because individual authors within those orgs criticize that agenda.
The Future of Humanity Institute (one authorâs org) has dozens of researchers and has received a consistent flow of new grants from Open Phil. Based on everything Iâve ever seen Open Phil publish, and my knowledge of FHIâs place in the X-risk world, it seems inconceivable that theyâd have funding cut because of a single paper that presents a particular point of view.
The same point applies beyond FHI, to other Open Phil grants. Theyâve funded dozens of organizations in the AI field, with (I assume) hundreds of total scholars/âthinkers in their employ; could it really be the case that at the time those grants were made, none of the people so funded had written things that ran counter to Open Philâs agenda (for example, calls for greater academic diversity within X-risk)?
Meanwhile, CSER (the other authorâs org) doesnât appear in Open Philâs grants database at all, and I canât find anything that looks like funding to CSER online at any point after 2015. If you assume this is related to ideological differences between Open Phil and CSER (I have no idea), this particular paper seems like it wouldnât change much. Open Phil canât cut funding it doesnât provide.
That is to say, if senior scholars expressed these concerns, I think they were unwarranted.
*****
Of course, Iâm not a senior scholar myself. But I am someone who worked at CEA for three years, attended two Leaders Forums, and heard many internal/ââbackroomâ conversations between senior leaders and/âor big funders.
Iâm also someone who doesnât rely on the EA world for funding (I have marketable skills and ample savings), is willing to criticize popular people even when it costs time and energy, and cares a lot about getting incentives and funding dynamics right. I created several of the Forumâs criticism tags and helped to populate them. I put Zviâs recent critical post in the EA Forum Digest.
I think there are things we donât do well. Iâve seen important people present weak counterarguments to good criticism without giving the questions as much thought as seemed warranted. Iâve seen interesting opportunities get lost because people were (in my view) too worried about the criticism that might follow. Iâve seen the kinds of things Ozzie Gooen talks about here (humans making human mistakes in prioritization, communication, etc.) I think that Ben Hoffman and Zvi have made a number of good points about problems with centralized funding and bad incentives.
But despite all that, I just canât wrap my head around the idea that the major EA figures Iâve known would see a solid, well-thought-through critique and decide, as a result, to stop funding the people or organizations involved. It seems counter to who they are as people, and counter to the vast effort they expend on reading criticism, asking for criticism, re-evaluating their own work and each otherâs work with a critical eye, etc.
I do think that Iâm more trusting of people than the average person. Itâs possible that things are happening in backrooms that would appall me, and I just havenât seen them. But whenever one of these conversations comes up, it always seems to end in vague accusations without names attached or supporting documentation, even in cases where someone straight-up left the community. If things were anywhere near as bad as theyâve been represented, I would expect at least one smoking gun, beyond complaints about biased syllabi or âA was concerned that B would be madâ.
For example: Phil Torres claims to have spent months gathering reports of censorship from people all over EA, but the resulting article was remarkably insubstantial. The single actual incident he mentions in the âcanceledâ section is a Facebook post being deleted by an unknown moderator in 2013. I know more detail about this case than Phil shares, and he left out some critical points:
The post being from 2013, when EA as a whole was much younger/âless professional
The CEA employee who called the poster being a personal friend of theirs who wanted to talk about the postâs ideas
The person who took down the post seeing this as a mistake, and something they wouldnât do today (did Phil try to find them, so he could ask them about the incident?)
If this was Philâs best example, whereâs the rest?
Iâd be sad to see a smoking gun because of what it would mean for my relationship with a community I value. But Iâve spent a lot of time trying to find one anyway, because if my work is built on sand I want to know sooner rather than later. Iâve yet to find what I seek.
*****
There was one line that really concerned me:
âLacking in rigorâ sounds like a critique of the type the authors solicited (albeit one that I can imagine being presented unhelpfully).
âHarboring bad intentionsâ is a serious accusation to throw around, and one Iâm actively angry to hear reviewers using in a case like this, where people are trying to present (somewhat) reasonable criticism and doing so with no clear incentive (rather than e.g. writing critical articles in outside publications to build a portfolio, as others have).
Iâd rather have meta-discussion of the paperâs support be centered on this point, rather than the âhypothetical loss of fundingâ point, at least until we have evidence that the concerns of the senior scholars are based on actual decisions or conversations.
This is a great comment, thank you for writing it. I agreeâI too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opposite.
I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening.
I woud caution to center the discussion only around the question of whether or not OpenPhil would reduce funding in respose to crticism. Improtantly, what happened here is that some EAs, who had influence over funding and research postions of more junior researchers in Xrisk, thought this was the case and acted on that assumption. I think it may well be the case that OpenPhil acts perfectly fine, while researchers lower in the funding hierarchy harmfully act as if OpenPhil would not act well. How can that be prevented?
To clarify: all the reviewers we approached gave critical feedback and we incorporated it and responded to it as we saw fit without feeling offended. But the only people who said the paper had a low academic standard, that it was âlacking in rigorâ or that it wasnât âloving enoughâ, were EAs who were emotionally affected by reading the paper. My point here is that in an idealised objective review process it would be totally fine to hear that we do not meet some academic standard. But my sense was that these comments were not actually about academic rigor, but about community membership and friendship. This is understandable, but itâs not surprising that mixture of power, community and research can produce biased scholarship.
Very happy to have a private chat Aaron!
Thanks for writing this reply, I think this is an important clarification.
Itâs hard to see what is going on and this is producing a lot of heat and speculation. I want to present an account or framing below. I want to see if this matches with your beliefs and experiences.
I should point out that the below isn't favorable to you, basically, but I don't have any further deliberate goal, and I have little knowledge in this space.
Firstly, to try to reduce the heat, I will change the situation to the cause area of global health:
For background, note that Daron Acemoglu, who is really formidable, has criticized EA global health and development.
Basically, Acemoglu believes the randomista approaches used in EA could be net negative for human welfare because they supplant health institutions, reduce the functionality of the state, and slow economic development. The impact is also hard to measure. I don't agree, and most EAs don't agree.
The account begins: imagine that, with dramatically increased funding, GiveWell expands and hires a bunch of researchers. GiveWell is more confident now, and hires less orthodox researchers who seem passionate and talented.
One year later, the very first paper by one of the newly hired researchers takes a strongly negative view of the randomista approach and directly criticizes GiveWell's work.
The paper says EA global health and development is misguided and gives plausible reasons, but these closely follow Acemoglu and other randomista critics. The paper makes statements that many aligned EAs find disagreeable, such as saying AMF's work is unmeasurable. There are also direct criticisms of senior EAs that seem uncharitable.
However, there isn't a lot of original research or claims in the paper. Also, while not stated, the paper implies the need to restructure and change the fundamental work of GiveWell, including deleting several major programs.
Accompanying the paper, the new researcher also states they had very negative experiences when pushing out the paper, including getting heavily pressured to self-censor. They state people had suggested they had bad intentions, low scholarship ability, and that people said future funding might be pulled.
They state this too on the EA Forum.
Publicly, we never hear any more substantive details about the above. This is because people don't want to commit to writing when it's easy to misrepresent the facts on either side, and certain claims make the benign appeal to authority and norms unworkable.
However, the truth of what happened is prosaic:
When the researcher was getting reviews, peer and senior EAs in the space pointed out that the researcher joined GiveWell knowing full well its mission and approaches, and their paper seemed mainly political, simply drawing in and recasting existing outside arguments. Given this, some explicitly questioned the intent and motivation of the researcher.
The director of GiveWell research hired the researcher because the director herself wanted to push the frontier of EA global health and development into new policy approaches, maybe even make inroads with people doing the kind of work advanced by Acemoglu. Now, her newly hired researcher seems to be a wild activist. It is a nightmare communicating with them. Frustrated, the director loses sleep and doubts herself: was this her own fault and incompetence?
The director knows that telling the researcher things that seem true to her, that they seem unable to do original research, that they have no value alignment with EA approaches, or that the researcher's path has no future in GiveWell, could make her a lifelong enemy.
The director is also unwilling or unable to be a domineering boss over an underling.
So the director punts by saying that GiveWell's funding is dependent on executing its current mission, and that papers directly undermining the current mission will undermine funding too.
This all happens over many meetings and days, where both sides are heated and highly emotional, and many things are said.
The researcher is a true believer against randomista, thinks that millions of lives are at stake, and definitely doesn't think they are unaligned (it is GiveWell that is). The researcher views all the above as hostile, a reaction of the establishment.
Question: Do you find my account above plausible, or unfair and wildly distorted? Can you give any details or characterizations of how it differs?
What on Earth is this thinly-veiled attempt at character assassination? Do you actually have any substantive evidence that your "account" is accurate (your disclaimer at the start suggests not), or are you just fishing for a reaction?
Honestly, what did you hope to gain from this? You think this researcher is just gonna respond and say "Yep, you're right, I'm ill-fit for my job and incapable of good academic work!"? "Not favorable to you" is the understatement of the century. Not to mention your change of the area concerned in no way lowers the temperature. It just functions as CYA to avoid making forthright accusations that this researcher's actual boss might then be called upon to publicly refute. This is one of the slimiest posts I've ever seen, to be perfectly honest.
Edit: I would love to see anyone who has downvoted this post explain why they think the above is defensible and how they'd react if someone did it to them.
Nah
Nah what? Nah you don't have any evidence? That would confirm my prior.
Now why don't you explain what you hoped to get out of that comment besides being grossly insulting to someone you don't know, on no evidential basis.
I don't agree with your comment on its merits, or with the practice of confronting someone this way with an anonymous throwaway.
(It's unclear, and it may be worth my thinking more about the downstream effects of this attitude,) but it seems justifiable that throwaways which open this sort of debate at this level of quality (the quality being a matter of perspective we won't agree on) can be treated really dismissively.
If you want, you can write with your real name (or PM me) and I will respond, if that's what you really want.
Also, the downvote(s) on your comment(s) are mine.
I would be more worried about making comments of the kind that you produced above under my real name. Your comment was full of highly negative accusations about a named poster's professional life and academic capabilities, veiled under utterly transparent hypotheticals, made in public. You offered no evidence whatsoever for any of these accusations, nor did you even attempt to justify why that sort of engagement was warranted. Airing such negative judgments publicly about a named person is an extremely serious matter whether you have evidence or not. I don't think that you treated it with a fraction of the seriousness it deserves.
I honestly have negative interest in telling you my real name after seeing how you treat others in public, much less making an account here with my real name attached to it. I would prefer to limit your ability to do reputational damage (to me or others) on spurious or non-existent grounds as far as possible. I am honestly extremely curious as to why you thought what you did above was remotely acceptable, but I am not willing to put myself in the line of fire to find out.
I think that you think I don't like your comments, but this isn't close to true.
I really hope you will put your real name so I can give a real response.
(I wouldn't share your name, and generally wouldn't use PII, if you PMed me.)
Well, thanks for that. Admittedly, the downvotes seemed like good evidence to the contrary.
Unfortunately, I also couldn't really give you my real name even if I wanted to, because the name of this account shares the name of my online persona elsewhere, and I place a very high premium on anonymity. If I had thought to give it a different name, then I'd probably just PM you my real name. But I didn't think that far ahead.
Anyway, whatever else may be, I'm sorry that I came in so hot. Sometimes I just see something that really sets me off, and I consequently approach things too aggressively for my own (and others') good.
Many of the comments in this chain, including the original narrative I wrote, which I view as closer to reality (as opposed to the implicit narrative I see in the OP, which seems highly motivated and which I find contradicted by the OP's subsequent comments), have been visited by what is likely a single person, who has made strong downvotes and strong upvotes of magnitude 9.
So probably a single person has come in and used a strong upvote or downvote of magnitude 9.
While I am totally petty and vain, I don't usually comment on upvotes or downvotes, because it seems sort of unseemly (unless it is hilarious to do so).
In this case, because of the way strong upvotes are designed, there appear to be only four accounts that could have this ability, and their judgement is well respected.
So I address you directly: if you have information about this, especially object-level information about the underlying situation relative to my original narrative, it would be great to discuss.
The underlying motivation is that truth is a thing, and in some sense having the recent commenter come in and stir this up was useful.
In an even deeper sense, as we all agree, EA isn't a social club for people who got here first. EA doesn't belong to you or me, or even a large subset of the original founders (to be clear, for all intents and purposes, all reasonable paths will include their leadership and guidance for a long time).
Importantly, I think some good operationalizations of the above perspective, combined with awareness of all the possible instantiations of EA and the composition of people and attitudes, would justify a different tone and culture than the one that exists.
So, re: "I would be more worried about making comments of the kind that you produced above under my real name." I think this could be exactly, perfectly, the opposite of what is true, yet it is one of the comments you strong upvoted.
To be even more direct: I suspect, but am unsure, that the culture of discussion in EA has accumulated defects that are costly to effectiveness and truth (under the direct tenure of one of the four people who could have voted +/-9, by the way).
So the most important topic here might not be about the OP at all, which I view as just one instance of an ongoing issue: in a deep sense, it was really about the very person who came in and strong voted!
I'm not sure you see this (or that I see this fully either).
From the very beginning, I specifically constructed this account and persona to interrogate whether this is true, or something.
Circling back to the original topic. The above perspective, the related hysteresis, and the consequent effects imply that the existence of my narrative in this thread, or of myself, should be challenged or removed if it's wrong.
But I can't really elaborate on my narrative. I can't defend myself, because it slags the OP, which isn't appropriate and opens wounds, which is unfair and harmful to everyone involved (but I sort of hoped the new commenter was the OP or a friend, which would have waived this, and that's why I wanted their identity).
But you, the strong downvoter/upvoter, +9 dude, this is a really promising line of discussion. So come and reply?
I think itâs reasonable to not want to respond to an anonymous throwaway, but not reasonable to ask them to PM you their real name.
So, there is some normal sense in which I might have a reason to want them to "legitimize" their criticism by identifying themselves (this reason is debatable; it could be weak or very strong).
But the first comments from this person aren't just vitriolic and a personal attack, they are adamant demands for a significant amount of writing: they disagree greatly with me, and so the explanation needed to bridge the gap in opinion could be very long.
The content of this writing has consequences, which are hidden from people without the explanation.
Here, I have special additional reasons to know their identity, because the best way to communicate the underlying events and what my comment meant depends on who they are.
Some explanations or accounts will be inflammatory, and others useless. For example, the person could be entirely new to EA, or be the OP themselves. Certain explanations, justifications or "evidence" could be hurtful and stir up wounds. Others won't make sense at all.
In this situation, it's reasonable to see that the commenter's demands impose a further, additional burden on me of having to weigh this harm (just to defend my comment), which is hidden from them. Separately and additionally, I probably view this as particularly unfair, as from my perspective, the very reason I commented, and why things are so problematic/sensitive, was that the original environment around the post was inflammatory and hard to approach by design.
Hmm, I think I have some different ideas about discussion norms, but I'm not sure I understand them coherently myself or think it's worth going into. I agree it's often worthwhile to not engage.
I agree with this.