As a guy who used to be female (I was AMAB), Kelly’s post rings true to me. Fully endorsed. It would be particularly interesting to hear about AFAB transmen’s experiences with respect to this.
The change in how you’re treated is much more noticeable when making progress in the direction of becoming more guyish; not sure if this is because this change tends to happen quickly (testosterone is powerful + quick) or because of the offsetting stigma re: people making transition progress towards being female. I could also see this stigma making up some of the positive effect that AMAB people feel on detransitioning, though it’s mostly possible to disentangle the effect of the misogyny from that of the transmisogyny if you have good social sense.
In anticipation of being harassed (based on past experience with this community), I’ll leave it at that. I’m not going to respond to any BS or bother with politics.
I should add that I’m grateful for the many EAs who don’t engage in dishonest behavior, and that I’m equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they’d previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.
I believe you when you say that you don’t benefit much from feedback from people not already deeply engaged with your work.
There’s something really noticeable to me about the manner in which you’ve publicly engaged with the EA community through writing for the past while. You mention that you put lots of care into your writing, and what’s most noticeable about this for me is that I can’t find anything that you’ve written here that anyone interested in engaging with you might feel threatened or put down by. This might sound like faint praise, but it really isn’t meant to be; I find that writing in such a way is actually somewhat resource intensive in terms of both time, and something roughly like mental energy.
(I find it’s generally easier to develop a felt sense for when someone else is paying sufficient attention to conversational nuances regarding civility than it is to point out specific examples, but your discussion of how you feel about receiving criticism is a good example of this sort of civility).
As you and James mention, public writeups can be valuable to readers, and I think this is true to a strong extent.
I’d also say that, just as importantly, writing this kind of well thought out post which uses healthy and civil conversational norms creates value from a leadership/coordination point of view. Leadership in terms of teaching skills and knowledge is important too, but I guess I’m used to thinking of those as separate from leadership in terms of exemplifying civility and openness to sharing information. If it were more common for people and foundations to write frequently and openly, and communicate with empathy towards their audiences when they did, I think the world would be the better for it. You and other senior Open Phil and GiveWell staff are very much respected in our community, and I think it’s wonderful when people are happy to set a positive example for others.
(Apologies if I’ve conflated civility with openness to sharing information; these behaviors feel quite similar to me on a gut level—possibly because they both take some effort to do, but also nudge social norms in the right direction while helping the audience.)
It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.
Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.
It makes some sense that there could be gaps which Open Phil isn’t able to fill, even if Open Phil thinks the opportunities in those gaps are no less effective than the ones it’s funding instead. Was that what was meant here, or am I missing something? If not, I wonder what such a funding gap for a cost-effective opportunity might look like (an example would help).
There’s a part of me that keeps insisting that it’s counter-intuitive that Open Phil is having trouble making as many grants as it would like, while also employing people who will manage an EA fund. I’d naively think that there would be at least some sort of tradeoff between producing new suggestions for things the EA fund might fund, and new things that Open Phil might fund. I suspect you’re already thinking closely about this, and I would be happy to hear everyone’s thoughts.
Edit: I’d meant to express general confidence in those who had been selected as fund managers. Also, I have strong positive feelings about epistemic humility in general, which also seems highly relevant to this project.
This post was incredibly well done. The fact that no similarly detailed comparison of AI risk charities had been done before you published this makes your work many times more valuable. Good job!
At the risk of distracting from the main point of this article, I’d like to notice the quote:
Xrisk organisations should consider having policies in place to prevent senior employees from espousing controversial political opinions on facebook or otherwise publishing materials that might bring their organisation into disrepute.
This seems entirely right, considering society’s take on these sorts of things. I’d suggest that this should be the case for EA-aligned organizations more widely, since PR incidents caused by one EA-related organization can generate fallout which affects both other EA-related organizations, and the EA brand in general.
I agree with your last paragraph, as written. But this conversation is about kindness, and trusting people to be competent altruists, and epistemic humility. That’s because acting indifferent to whether people who care about the same things we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who’d otherwise feel welcome in EA.
But this is about optimal marketing and movement growth, a very empirical question. It doesn’t seem to have much to do with personal experiences
I’m happy to discuss optimal marketing and movement growth strategies, but I don’t think the question of how to optimally grow EA is best answered as an empirical question at all. I’m generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way & extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.
The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don’t understand or don’t appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I’d expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, A/B testing EA’s overall conversation norms isn’t possible. If you’re the sort of person who doesn’t use particularly friendly conversation norms in the first place, you’re likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.
“Conversation norms” might seem like a dangerously broad term, but I think it’s pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA’s conversation norms worse. There’s no reason to think that a decrease in quality of EA’s conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA’s conversation norms become less healthy, key people are pushed away, or don’t engage with us in the first place, and this destroys utility we’d have otherwise produced.
It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.
I appreciate that the post has been improved a couple times since the criticisms below were written.
A few of you were diligent enough to beat me to saying much of this, but:
Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.
This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he’s aware that many people have in fact raised concerns about things other than communication and EA Funds’ website. To his credit, someone added a paragraph acknowledging that these concerns had been raised elsewhere, in the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress that’s been made since the discussion which happened on this thread regarding the problems of 1) how to address conflicts of interest given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren’t OPP staff into Fund Managers) narrows the amount of new information about what effective opportunities exist that the EA Funds’ Fund Managers encounter.
I’ve spoken with a couple EAs in person who have mentioned that making the claim that “EA Funds are likely to be at least as good as OPP’s last dollar” is harmful. In this post, it’s certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it’s less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the “at least as good as OPP” slogan.
More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds “received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page).” But the first sentence of the Wikipedia page for NPS, which I’m sure the author read at least the first line of given that he linked to it, states that NPS is “a management tool that can be used to gauge the loyalty of a firm’s customer relationships” (emphasis mine). However, EA Funds isn’t a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company’s customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you’ve made this assumption) betrays a lack of intent to honestly inform EAs.
This post has other problems, too; it uses the NPS scoring system to analyze donors’ and others’ responses to the question:
How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?
The NPS scoring system was never intended to be used to evaluate responses to this question, so perhaps that makes it insignificant that an NPS score of 0 for this question just misses the mark of being “felt to be good” in industry. Worse, the post mentions that this result
could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.
It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I’d agree that it’s a good idea to not “take NPS too seriously”, though in this case, I wouldn’t say that the benefit that came from using NPS in the first place outweighed the cost that was incurred by the resultant incorrect suggestion that we should feel there was a respectable amount of quantitative support for the conclusions drawn in this post.
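(For readers unfamiliar with how NPS is computed, here’s a minimal sketch of the standard calculation, using made-up response counts rather than CEA’s actual survey data: respondents who answer 9–10 are promoters, 7–8 are passives, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors.)

```python
# Minimal sketch of the standard NPS calculation (illustrative numbers only,
# not CEA's actual survey data): 9-10 = promoter, 7-8 = passive, 0-6 = detractor.
def net_promoter_score(responses):
    """Return NPS (in [-100, 100]) from a list of 0-10 ratings."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Hypothetical sample: 13 promoters, 6 passives, 1 detractor out of 20 responses.
example = [10] * 13 + [8] * 6 + [5]
print(net_promoter_score(example))  # 60.0
```

On this scale, a +56 requires a sample weighted heavily towards promoters, while a 0 just means promoters and detractors cancel out exactly.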
I’m disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I’ve pointed out, which all point in the direction of making EA Funds look better than it is, things don’t look good. Things don’t look good regarding how well this project has been received, but that’s not the larger problem here. The larger problem is that things don’t look good because this post decreases how much I am willing to trust communications made on the behalf of EA funds in particular, and communications made by CEA staff more generally.
Writing this made me cry, a little. It’s late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can’t trust anyone I haven’t personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.
Some days, I like to quietly smile to myself and wonder if we might be able to take that back.
Personally, I’ve noticed that being casually aware of smaller projects that seem cash-strapped has given me the intuition that it would be better for Good Ventures to fund more of the things it thinks should be funded, since that might give some talented EAs more autonomy. On the other hand, I suspect that people who prefer the “opposite” strategy, of being more positive on the pledge and feeling quite comfortable with Givewell’s approach to splitting, are seeing a very different social landscape than I am. Maybe they’re aware of people who wouldn’t have engaged with EA in any way other than by taking the pledge, or they’ve spent relatively more time engaging with Givewell-style core EA material than I have?
Between the fact that filter bubbles exist, and the fact that I don’t get out much (see the last three characters of my username), I think I’d be likely to not notice if lots of the disagreement on this whole cluster of related topics (honesty/pledging/partial funding/etc.) was due to people having had differing social experiences with other EAs.
So, perhaps this is a nudge towards reconciliation on both the pledge and on Good Ventures’ take on partial funding. If people’s social circles tend to be homogeneous-ish, some people will know of lots of underfunded promising EAs and projects (which indirectly compete with GV and GiveWell top charities for resources), and others will know of few such EAs/projects. If this is the case, we should expect most people’s intuitions on how many funding opportunities for small projects (which only small donors can identify effectively) there are to be systematically off in one way or another. Perhaps a reasonable thing to do here would be to discuss ways to estimate how many underfunded small projects, which EAs would be eager to fund if only they knew about them, there are.
Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
This is an exceptionally good idea! I suspect that such a panel would be taken the most seriously if you (or other notable EAs) were involved in its creation and/or maintenance, or at least endorsed it publicly.
I agree that the potential for people to harm EA by conducting harmful-to-EA behavior under the EA brand will increase as the movement continues to grow. In addition, I also think that the damage caused by such behavior is fairly easy to underestimate, for the reason that it is hard to keep track of all of the different ways in which such behavior causes harm.
It seems like there’s a disconnect between EA supposedly being awash in funds on the one hand, and stories like yours on the other.
This line is spot-on. When I look around, I see depressingly many opportunities that look under-funded, and a surplus of talented people. But I suspect that most EAs see a different picture—say, one of nearly adequate funding, and a severe lack of talented people.
This is ok, and should be expected to happen if we’re all honestly reporting what we observe! In the same way that one can end up with only Facebook friends who are more liberal than 50% of the population, so too can one end up knowing many talented people who could be much more effective with funding, since people’s social circles are often surprisingly homogeneous.
In one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don’t think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.
Part of what I’m tracking when I say that the EA community isn’t supportive of EA Funds is that I’ve spoken to several people in person who have said as much—I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticism of EA was tiring and unrewarding, and that they often didn’t have the energy to do so (though one offered to proofread anything I wrote in that vein). So, a large part of my reason for feeling that there isn’t a great deal of community support for EA funds has to do with the ways in which I’d expect the data on how much support there actually is to be filtered. For example:
- the method in which Kerry presented his survey data made it look like there was more support than there was
- the fact that Kerry presented the data in this way suggests it’s relatively more likely that Kerry will do so again in the future if given the chance
- social desirability bias should also make it look like there’s more support than there is
- the fact that it’s socially encouraged to praise projects on the EA Forum, and that criticism is judged more harshly than praise, should make it look like there’s more support than there is. Contrast this norm with the one at LW, and notice how it affected how long it took us to get rid of Gleb.
- we have a social norm of wording criticism in a very mild manner, which might make it seem like critics are less serious than they are.
It also doesn’t help that most of the core objections people have brought up have been acknowledged but not addressed. But really, given all of those filters on data relating to how well-supported the EA Funds are, and the fact that the survey data doesn’t show anything useful either way, I’m not comfortable with accepting the claim that EA Funds has been particularly well-received.
Since there are so many separate discussions surrounding this blog post, I’ll copy my response from the original discussion:
I’m grateful for this post. Honesty seems undervalued in EA.
An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s easier to think about (and quantify!), say, the utility the movement might get from having more peripheral-to-EA donors, than it is to think about the utility the movement would get from not pushing away would-be EAs who care about honesty.
I’ve [rarely] been confident enough to publicly say anything when I’ve seen EAs and ostensibly-EA-related organizations acting in a way that I suspect is dishonest enough to cause significant net harm. I think that I’d be happy if you linked to this post from LW and the EA forum, since I’d like for it to be more socially acceptable to kindly nudge EAs to be more honest.
A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had been spent examining people’s concerns re: conflicts of interest and centralization of power, i.e. concerns which were commonly expressed but not resolved.
I’m concerned with the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with, and you mostly didn’t gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger, but you shouldn’t update in favor of it being well received based on the more recent data. This may sound like a nitpick, but it’s actually a crucially important consideration if you’ve framed things as if you’ll continue on with the project only if you update in the direction of having more public support than before.
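(A toy illustration of the updating point, with entirely made-up numbers: if the responses you observed are roughly as likely under “well received” as under “not well received”, the posterior simply equals the prior, so the new data can’t be cited as an update in favor.)

```python
# Toy Bayesian update with made-up numbers: data that is equally likely under
# both hypotheses leaves the posterior equal to the prior.
prior = 0.7                 # hypothetical prior P(EA Funds is well received)
p_data_given_true = 0.5     # P(observed responses | well received)
p_data_given_false = 0.5    # P(observed responses | not well received)

posterior = (p_data_given_true * prior) / (
    p_data_given_true * prior + p_data_given_false * (1 - prior)
)
print(round(posterior, 3))  # 0.7 -- unchanged, so no update towards "well received"
```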
I also dislike that you emphasize that some people “expressed confusion at your endorsement of EA Funds”. Some people may have felt that way, but your choice of wording both downplays the seriousness of some people’s disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they’re less competent than others who aren’t confused). This is a part of what some of us mean when we talk about a tax on criticism in EA.
Noted! I can understand that it’s easy to feel like you’re overstepping your bounds when trying to speak for others. Personally, I’d have been happy for you all to take a more central leadership role, and would have wanted you all to feel comfortable if you had decided to do so.
My view is that we still don’t have reliable mechanisms to deal with the sorts of problems mentioned (i.e. the Intentional Insights fiasco), so it’s valuable when people call out problems as they have the ability to. It would be better if the EA community had ways of calling out such problems by means other than requiring individuals to take on heroic responsibility, though!
This having been said, I think it’s worth explicitly thanking the people who helped expose Intentional Insights’ deceitful practices—Jeff Kaufman, for his original post on the topic, and Jeff Kaufman, Gregory Lewis, Oliver Habryka, Carl Shulman, Claire Zabel, and others who have not been mentioned or who contributed anonymously, for writing this detailed document.
You’re clearly pointing at a real problem, and the only case in which I can read this as melodramatic is the case in which the problem is already very serious. So, thank you for writing.
When the word “care” is used carelessly, or, more generally, when the emotional content of messages is not carefully tended to, this nudges EA towards being the sort of place where e.g. the word “care” is used carelessly. This has all sorts of hard to track negative effects; the sort of people who are irked by things like misuse of the word “care” are disproportionately likely to be the sort of people who are careful about this sort of thing themselves. It’s easy to see how a harmful “positive” feedback loop might be created in such a scenario if not paying attention to the connotations of words can drive our friends away.
What I’d like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities—doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months’ investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.
I’d be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I’ve thought a lot about how we can support each other. I’m not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though.
Speculation ahoy:
1) I wonder if, say, Bay area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off-things that typically work best for us are different from the things that work for most people in important ways.
2) Also, something about the way in which status works in the social climate of the EA/LW Bay Area community is both unusual and more toxic than the way in which status works in more average social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked).
3) I wonder how much good work could be done on anyone’s mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I’ve never tried something like this before, but I’d eventually like to.
Well, writing that comment was a journey. I doubt I’ll stand by all of what I’ve written here tomorrow morning, but I do think that I’m correct on some points, and that I’m pointing in a few valuable directions.
This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that’s indistinguishable from how a dedicated EA might act—but it’s not a part of my identity anymore.
I’ve also met plenty of great EAs, and it’s a shame that the poor interactions I’ve had overshadow the many good ones.
Part of what disturbs me about Sarah’s post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromise on honesty and act non-cooperatively more in person than online. I’m sure that others have had better experiences, so if this isn’t as prevalent in your experience, I’m glad! It’s just that I could have used stronger examples if I had written the post, instead of Sarah.
I’m not comfortable sharing examples that might make people identifiable. I’m too scared of social backlash to even think about whether outing specific people and organizations would even be a utilitarian thing for me to do right now. But being laughed at for being an “Effective Kantian” because you’re the only one in your friend group who wasn’t willing to do something illegal? That isn’t fun. Listening to hardcore EAs approvingly talk about how other EAs have manipulated non-EAs for their own gain, because doing so might conceivably lead them to donate more if they had more resources at their disposal? That isn’t inspiring.