I work for CEA, but these views are my own, though they are, naturally, informed by my work experience.
----
First, and most important: Thank you for taking the time to write this up. It's not easy to summarize conversations like this, especially when they touch on controversial topics, but it's good to have this kind of thing out in public (even anonymized).
----
I found the concrete point about Open Phil research hires to be interesting, though the claimed numbers for CFAR seem higher than I'd have guessed, and I strongly expect that some of the most recent research hires came to Open Phil through the EA movement:
Open Phil recruited for these roles by directly contacting many people (I'd estimate well over a hundred, perhaps 300-400) using a variety of EA networks. For example, I received an email with the following statement: "I don't know you personally, but from your technical experience and your experience as an EA student group founder and leader, I wonder if you might be a fit for an RA position at Open Philanthropy."
Luke Muehlhauser's writeup of the hiring round noted that there were a lot of very strong applicants, including multiple candidates who weren't hired but might excel in a research role in the future. I can't guarantee that many of the strong applicants applied because of their EA involvement, but it seems likely.
While I wasn't hired as an RA, I was a finalist for the role. Bastian Stern, one of the new researchers mentioned in this post, founded a chapter of Giving What We Can in college, and new researcher Jacob Trefethen was also a member of that chapter. If there hadn't been an EA movement for them to join, would they have heard about the role? Several other Open Phil researchers (whose work includes the long-term future) also have backgrounds in EA community-building.
I'll be curious to see whether, if Open Phil makes another grant to CFAR, they will note CFAR's usefulness as a recruiting pipeline (they didn't in January 2018, but this was before their major 2018 hiring round happened).
Also, regarding claims about 80,000 Hours specifically:
Getting good ops hires is still very important, and I don't think it makes sense to downplay that.
Even assuming that none of the research hires were coached by 80K (I assume it's true, but I don't have independent knowledge of that):
We don't know how many of the very close candidates came through 80,000 Hours...
...or how many actual hires were helped by 80K's other resources...
...or how many researchers at other organizations received career coaching.
Open Phil's enormous follow-on grant to 80K in early 2019 seems to indicate their continued belief that 80K's work is valuable in at least some of the ways Open Phil cares about.
----
As for the statements about "compromising on a commitment to truth"... there aren't enough examples or detailed arguments to say much.
I've attended a CFAR workshop, a mini-workshop, and a reunion, and I've also run operations for two separate CFAR workshops (over a span of four years, alongside people from multiple "eras" of CFAR/rationality). I've also spent nearly a year working at CEA, before which I founded two EA groups and worked loosely with various direct and meta organizations in the movement.
Some beliefs I've come to have as a result of this experience (numbered to correspond to the points above):
1. "Protecting reputation" and "gaining social status" are not limited to EA or rationality. Both movements care about this to varying degrees: sometimes too much (in my view), and sometimes not enough. Sometimes, it is good to have a good reputation and high status, because these things both make your work easier and signify actual virtues of your movement/organization.
2. I've met some of the most rigorous thinkers I've ever known in the rationality movement, and in the EA movement, including EA-aligned people who aren't involved with the rationality side very much or at all. On the other hand, I've seen bad arguments and intellectual confusion pop up in both movements from time to time (usually quashed after a while). On the whole, I've been impressed by the rigor of the people who run various major EA orgs, and I don't think that the less-rigorous people who speak at conferences have much of an influence over what the major orgs do. (I'd be really interested to hear counterarguments to this, of course!)
3. There are certainly people from whom various EA orgs have wanted to dissociate (sometimes successfully, sometimes not). My impression is that high-profile dissociation generally happens for good reasons (the highest-profile case I can think of is Gleb Tsipursky, who had some interesting ideas but on the whole exemplified what the rationalists quoted in your post were afraid of, and was publicly criticized in exacting detail).
I'd love to hear specific examples of "low-status" people whose ideas have been ignored to the detriment of EA, but no one comes to mind; Forum posts attacking mainstream EA orgs are some of the most popular on the entire site, and typically produce lots of discussion/heat (though perhaps less light).
I've heard from many people who are reluctant to voice their views in public around EA topics, but as often as not, these are high-profile members of the community, or at least people whose ideas aren't very controversial.
They aren't reluctant to speak because they lack status; often it's the opposite: having status gives you something to lose, and being popular and widely read often means getting more criticism over even minor points than an unknown person would. I've heard similar complaints about LessWrong from both well-known and "unknown" writers; many responses in EA/rationalist spaces take a lot of time to address and aren't especially helpful. (This isn't unique to us, of course; it's a symptom of the internet. But it's not something that necessarily indicates the suppression of unpopular ideas.)
That said, I am an employee of CEA, so people with controversial views may not want to speak to me at all, but I can't comment on what I haven't heard.
4. Again, I'd be happy to hear specific cases, but otherwise it's hard to figure out which people are "interested in EA's resources, instead of the mission", or which "truth-finding processes" have been corrupted. I don't agree with every grant EA orgs have ever made, but on the whole, I don't see evidence of systemic epistemic damage.
----
The same difficulties apply to much of the rest of the conversation: there's not enough content to allow for a thorough counterargument. Part of the difficulty is that the question "who is doing the best AI safety research?" is controversial, not especially objective, and tinged by one's perspective on the best "direction" for safety research (some directions are more associated with the rationality community than others). I can point to people in the EA community whose longtermist work has been impressive to me, but I'm not an AI expert, so my opinion means very little here.
As a final thought: I wonder what the most prominent thinkers/public faces of the rationality movement would think about the claims here? My impression from working in both movements is that there's a lot of mutual respect between the people most involved in each one, but it's possible that respect for EA's leaders wouldn't extend to respect for its growth strategy/overall epistemics.
It sounds like one crux might be what counts as rigorous. I find the "be specific" feedback to be a dodge. What is the counterparty expected to do in a case like this? Point out people they think are either low status or not rigorous enough?
The damage, IMO, comes from EA sucking up a bunch of intelligent contrarian people and then having them put their effort behind status quo projects. I guess I have more sympathy for the systemic change criticisms than I used to.
I didn't intend it as a dodge, though I understand why this information is difficult to provide. I just think that talking about problems in a case where one party is anonymous may be inherently difficult when examples can't easily come into play.
I could try harder to come up with my own examples for the claims, but that seems like an odd way to handle discussion; it allows almost any criticism to be levied in hopes that the interlocutor will find some fitting anecdote. (Again, this isn't the fault of the critics; it's just a difficult feature of the situation.)
What are some EA projects you consider "status quo", and how is following the status quo relevant to the worthiness of the projects? (Maybe your concern comes from the idea that projects which could be handled by non-contrarians are instead taking up time/energy that could be spent on something more creative/novel?)
Yes, that's the concern. Asking me what projects I consider status quo is the exact same move as before. Being status quo is low status, so the conversation seems unlikely to evolve in a fruitful direction if we take that tack. I think institutions tend to slide towards attractors where the surrounding discourse norms are "reasonable and defensible" from within a certain frame, while undermining criticisms of the frame in ways that make the people who point this out seem unreasonable. This is how larger, older foundations calcify and stop getting things done: the natural tendency of an org is to insulate itself from the sharp changes that being in close feedback with the world necessitates.
Sorry, I can't respond to this in detail, because the conversation was a while back. Further, I don't have independent confirmation of any of the factual claims.
I could PM you one name they mentioned for point three, but out of respect for their privacy I don't want to post it publicly. Regarding point four, they mentioned an article as a description of the dynamic they were worried about.
In terms of resources being directed to something that is not the mission, I can't remember what was said by these particular people, but I can list the complaints I've heard in general: circling, felon voting rights, the dispute over meat at EAG, copies of HPMoR. Since this is quite a wide spread of topics, it probably doesn't help at all.
Not a problem; I posted the reply long after the post went up, so I wouldn't expect you to recall too many details. No need to send a PM, though I would love to read the article for point four (your link is currently broken). Thanks for coming back to reply!
Here's the link: https://meaningness.com/geeks-mops-sociopaths