I appreciate that the post has been improved a couple times since the criticisms below were written.
A few of you were diligent enough to beat me to saying much of this, but:
Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.
This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he’s aware that many people have in fact raised concerns about things other than communication and EA Funds’ website. To his credit, someone added a paragraph acknowledging that these concerns had been raised elsewhere, in the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress that’s been made since the discussion which happened on this thread regarding the problems of 1) how to address conflicts of interest given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren’t OPP staff into Fund Managers) narrows the amount of new information about what effective opportunities exist that the EA Funds’ Fund Managers encounter.
I’ve spoken with a couple EAs in person who have mentioned that making the claim that “EA Funds are likely to be at least as good as OPP’s last dollar” is harmful. In this post, it’s certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it’s less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the “at least as good as OPP” slogan.
More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds “received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page).” But the first sentence of the Wikipedia page for NPS, which I’m sure the author read at least the first line of, given that he linked to it, states that NPS is “a management tool that can be used to gauge the loyalty of a firm’s customer relationships” (emphasis mine). However, EA Funds isn’t a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company’s customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you’ve made this assumption) betrays a lack of intent to honestly inform EAs.
This post has other problems, too; it uses the NPS scoring system to analyze donors’ and others’ responses to the question:
How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?
The NPS scoring system was never intended to evaluate responses to this question, so perhaps it’s insignificant that an NPS of 0 on this question just misses the industry threshold for being “felt to be good”. Worse, the post mentions that this result
could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.
It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I’d agree that it’s a good idea not to “take NPS too seriously”, though in this case, I don’t think the benefit of using NPS in the first place outweighed the cost of incorrectly suggesting that there was a respectable amount of quantitative support for the conclusions drawn in this post.
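For readers unfamiliar with the metric, here is a minimal sketch of how NPS is conventionally computed (the survey responses in the example are invented purely for illustration); it makes it easy to see that a score of 0 just means promoters and detractors are equally common:

```python
# Minimal sketch of the standard NPS calculation (0-10 scale).
# The example responses below are invented for illustration only.

def nps(responses):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# Hypothetical answers to the expected-impact question above. An NPS of 0
# means promoters and detractors exactly balance; the 7-8 "passives" in
# the middle are ignored by the metric entirely.
example = [9, 10, 7, 8, 3, 5, 9, 6, 8, 10, 4, 7]
print(nps(example))  # 4 promoters, 4 detractors, 4 passives -> 0.0
```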
I’m disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I’ve pointed out, which all point in the direction of making EA Funds look better than it is, things don’t look good. Things don’t look good regarding how well this project has been received, but that’s not the larger problem here. The larger problem is that things don’t look good because this post decreases how much I am willing to trust communications made on behalf of EA Funds in particular, and communications made by CEA staff more generally.
Writing this made me cry, a little. It’s late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can’t trust anyone I haven’t personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.
Some days, I like to quietly smile to myself and wonder if we might be able to take that back.
Things don’t look good regarding how well this project has been received
I know you say that this isn’t the main point you’re making, but I think it’s the hidden assumption behind some of your other points and it was a surprise to read this. Will’s post introducing the EA funds is the 4th most upvoted post of all time on this forum. Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community is as-a-whole positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA’s target of $1m.
Given all that, what would ‘well-received’ look like in your view?
If you think the community is generally making a mistake in being supportive of the EA funds, that’s fine and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.
Yeah, in this community it’s easy for your data to be filtered. People commonly comment with criticism, rarely with just “Yeah, this is right!”, and so your experience can be filled with negative responses even when the response is largely positive.
In one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don’t think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.
Part of what I’m tracking when I say that the EA community isn’t supportive of EA Funds is that I’ve spoken to several people in person who have said as much—I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticism of EA was tiring and unrewarding, and that they often didn’t have the energy to do so (though one offered to proofread anything I wrote in that vein). So, a large part of my reason for feeling that there isn’t a great deal of community support for EA funds has to do with the ways in which I’d expect the data on how much support there actually is to be filtered. For example:
the way in which Kerry presented his survey data made it look like there was more support than there was
the fact that Kerry presented the data in this way suggests it’s relatively more likely that Kerry will do so again in the future if given the chance
social desirability bias should also make it look like there’s more support than there is
the fact that it’s socially encouraged to praise projects on the EA Forum and that criticism is judged more harshly than praise should make it look like there’s more support than there is. Contrast this norm with the one at LW, and notice how it affected how long it took us to get rid of Gleb.
we have a social norm of wording criticism in a very mild manner, which might make it seem like critics are less serious than they are.
It also doesn’t help that most of the core objections people have brought up have been acknowledged but not addressed. But really, given all of those filters on data relating to how well-supported the EA Funds are, and the fact that the survey data doesn’t show anything useful either way, I’m not comfortable with accepting the claim that EA Funds has been particularly well-received.
So I probably disagree with some of your bullet points, but unless I’m missing something I don’t think they can be the crux of our disagreement here, so for the sake of argument let’s suppose I fully agree that there are a variety of strong social norms in place here that make praise more salient, visible and common than criticism.
...I still don’t see how to get from here to (for example) ‘The community is probably net-neutral to net-negative on the EA funds, but Will’s post introducing them is the 4th most upvoted post of all time’. The relative (rather than absolute) nature of that claim is important; even if I think posts and projects on the EA forum generally get more praise, more upvotes, and less criticism than they ‘should’, why has that boosted the EA funds in particular over the dozens of other projects that have been announced on here over the past however-many years? To pick the most obviously-comparable example that quickly comes to mind, Kerry’s post introducing EA Ventures has just 16 upvotes*.
It just seems like the simplest explanation of your observed data is ‘the community at large likes the funds, and my personal geographical locus of friends is weird’.
And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you’re very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I’ve spoken to in person about the EA funds thinks they’re a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation. I think we should both discount it ~entirely once we have anything else to go on. Relative upvotes are extremely far from perfect as a metric, but I think they are much better than in-person anecdata for this reason alone.
FWIW I’m very open to suggestions on how we could settle this question more definitively. I expect CEA pushing ahead with the funds if the community as a whole really is net-negative on them would indeed be a mistake. I don’t have any great ideas at the moment though.
*http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/
It just seems like the simplest explanation of your observed data is ‘the community at large likes the funds, and my personal geographical locus of friends is weird’.
And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you’re very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I’ve spoken to in person about the EA funds thinks they’re a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation.
I’d say this is correct. The EA Forum itself has such a selection effect, though it’s weaker than the ones either of our friend groups have. One idea would be to do a survey, as Peter suggests, though this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less, or feel less strongly about it, equally with the opinions of others. A relevant factor here is that it sometimes takes people a fair bit of reading or reflection to develop a sense for why integrity is particularly valuable from a consequentialist’s perspective, and then to link this up to why EA Funds continuing would show people that projects which are reported on and marketed with relatively lower-integrity methods can succeed despite (or even because of?) this.
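To illustrate that worry concretely (all numbers and the weighting scheme here are hypothetical, not anything that has actually been proposed or measured), a raw survey mean treats every respondent identically, while even a crude engagement weighting can tell a quite different story:

```python
# Hypothetical illustration of the survey-weighting worry above.
# "scores" are support ratings (0-10); "hours" is self-reported time
# spent thinking about EA Funds. Both columns are invented.
scores = [8, 9, 7, 8, 9, 3, 2, 4]
hours  = [1, 1, 2, 1, 1, 20, 15, 30]

unweighted = sum(scores) / len(scores)
weighted = sum(s * h for s, h in zip(scores, hours)) / sum(hours)

print(f"unweighted mean: {unweighted:.2f}")  # 6.25 -> looks supportive
print(f"weighted mean:   {weighted:.2f}")    # ~3.63 -> much less so
```

Whether such weighting is appropriate is exactly the open question; the sketch only shows how sensitive the headline number is to that choice.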
I’d also agree that, at the time of Will’s post, it would have been incorrect to say:
The community is probably net-neutral to net-negative on the EA funds, but Will’s post introducing them is the 4th most upvoted post of all time
But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.
My view is further that the community’s response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I’m positing that the public response to EA Funds would be more negative if we hadn’t filtered certain people out of EA by having an integrity problem in the first place.
(Sorry for the slower response, your last paragraph gave me pause and I wanted to think about it. I still don’t feel like I have a satisfactory handle on it, but also feel I should reply at this point.)
this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less or feel less strongly about it equally with the opinions of others.
This makes total sense to me, and I do currently perceive something of an inverse correlation between how hard people have thought about the funds and how positively they feel about them. I agree this is a cause for concern. The way I would describe that situation from your perspective is not ‘the funds have not been well-received’, but rather ‘the funds have been well-received but only because too many (most?) people are analysing the idea in a superficial way’. Maybe that is what you were aiming for originally and I just didn’t read it that way.
But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.
True. That post was only a couple of months before this one though; not a lot of time for new data/arguments to emerge or opinions to change. The only major new data point I can think of since then is the funds raising ~$1m, which I think is mostly orthogonal to what we are discussing. I’m curious whether you personally perceive a change (drop) in popularity in your circles?
My view is further that the community’s response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I’m positing that the public response to EA Funds would be more negative if we hadn’t filtered certain people out of EA by having an integrity problem in the first place.
This story sounds plausibly true. It’s a difficult one to falsify though (I could flip all the language and get something that also sounds plausibly true), so turning it over in my head for the past few days I’m still not sure how much weight to put on it.
Perhaps a simple (random) survey? Or, if that’s not possible, a poll of some sort?
It also doesn’t help that most of the core objections people have brought up have been acknowledged but not addressed.
My sense (and correct me if I’m wrong) is that the biggest concerns seem to be related to the fact that there is only one fund for each cause area and the fact that Open Phil/GiveWell people are running each of the funds.
I share this concern, and I agree that EA Funds has not yet been changed to reflect it. This is mostly because EA Funds simply hasn’t been around for very long and we’re currently working on improving the core product before we expand it.
What I’ve tried to do instead is precommit to 50% or less of the funds being managed by Open Phil/GiveWell and give a general timeline for when we expect to start making good on that commitment. I know that doesn’t solve the problem, but hopefully you agree that it’s a step in the right direction.
That said, I’m sure there are other concerns that we haven’t sufficiently addressed so far. If you know of some off the top of your head, feel free to post them as a reply to this comment. I’d be happy to either expand on my thoughts or address the issue immediately.
Will’s post introducing the EA funds is the 4th most upvoted post of all time on this forum.
Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will’s post, because I’m glad he posted it (and likewise Tara’s) - thanks!
That seems like a good use of the upvote function, and I’m glad you try to do things that way. But my nit-picking brain generates a couple of immediate thoughts:
I don’t think it’s a coincidence that a development you were concerned about was also one where you forgot* to apply your general rule. In practice I think upvotes track ‘I agree with this’ extremely strongly, even though lots of people (myself included) agree that ideally they shouldn’t.
In the hypothetical where there’s lots of community concern about the funds but people are happy they have a venue to discuss it, I expect the top-rated comments to be those expressing those concerns. This possibility is what I was trying to address in my next sentence:
Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea.
*Not sure if ‘forgot’ is quite the right word here, just mirroring your description of my comment as ‘reminding’ you.
This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he’s aware that many people have in fact raised concerns about things other than communication and EA Funds’ website.
Thanks for taking the time to provide such detailed feedback.
I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I’d read on Facebook and was not thinking about responses to the initial launch post.
I agree that it’s not fair to say that the criticisms have been predominantly about website copy. I’ve changed the relevant section in the post to include links to some of the concerns we received in the launch post.
I’d like to be as exhaustive as possible, so please provide links to any areas I missed so that I can include them (note that I didn’t include all of the comments you linked to if I thought our launch post already addressed the issue).
I’m disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I’ve pointed out, which all point in the direction of making EA Funds look better than it is, things don’t look good.
From my point of view, the context for the first section was to explain why we updated in favor of EA Funds persisting past the three-month trial before the trial was over. This was important to communicate because several people expressed confusion about our endorsement of EA Funds while the project was still technically in beta. This is why the first section highlights mostly positive information about EA Funds whereas later sections highlight challenges, mistakes etc.
I think the update that your comment is suggesting is that I should have made the first section longer and should have provided a more detailed discussion of the considerations for and against concluding that EA Funds has been well-received so far. Is that what you think or do you think I should make a different update?
A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail was spent examining people’s concerns re: conflicts of interest, and centralization of power, i.e. concerns which were commonly expressed but not resolved.
I’m concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn’t gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger but you shouldn’t update in favor of it being well received based on more recent data. This may sound like a nitpick, but it is actually a crucially important consideration if you’ve framed things as if you’ll continue on with the project only if you update in the direction of having more public support than before.
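To make the underlying update rule explicit (a sketch of standard Bayesian reasoning, not something either the post or this comment lays out; H and E are labels introduced here for illustration): in odds form, the size of the update is governed by the likelihood ratio of the new evidence, so data that was roughly as expected under “well received” as under “not well received” should move you very little:

```latex
% H = "EA Funds is well received"; E = the observed donation data.
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
% If E is about as likely under H as under \neg H, the likelihood ratio
% is close to 1, so the posterior odds stay close to the prior odds:
% the new data barely changes your view either way.
```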
I also dislike that you emphasize that some people “expressed confusion at your endorsement of EA Funds”. Some people may have felt that way, but your choice of wording both downplays the seriousness of some people’s disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they’re less competent than others who aren’t confused). This is a part of what some of us mean when we talk about a tax on criticism in EA.
I also dislike that you emphasize that some people “expressed confusion at your endorsement of EA Funds”. Some people may have felt that way, but your choice of wording both downplays the seriousness of some people’s disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they’re less competent than others who aren’t confused).
I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradiction of what I’d been told earlier, privately and publicly—that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months. If I’m confused, I’m confused about how this wasn’t just a lie. My initial response was “HOW IS THIS OK???” (verbatim quote). I’m willing to be persuaded, of course. But, barring an actual resolution of the issue, simply describing this as confusion is a pretty substantial understatement.
ETA: I’m happy with the update to the OP and don’t think I have any unresolved complaint on this particular wording issue.
I’m concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn’t gather more support later on relative to what one would expect...
In the OP Kerry wrote:
The donation amounts we’ve received so far are greater than we expected, especially given that donations typically decrease early in the year after ramping up towards the end of the year.
CEA’s original expectation of donations could just have been wrong, of course. But I don’t see a failure of logic here.
Re. your last paragraph, Kerry can confirm or deny but I think he’s referring to the fact that a bunch of people were surprised to see (e.g.? Not sure if there were other cases.) GWWC start recommending the EA funds and closing down the GWWC trust recently when CEA hadn’t actually officially given the funds a ‘green light’ yet. So not referring to the same set of criticisms you are talking about. I think ‘confusion at GWWC’s endorsement of EA funds’ is a reasonable description of how I felt when I received this e-mail, at the very least*; I like the funds but prominently recommending something that is in beta and might be discontinued at any minute seemed odd.
*I got the e-mail from GWWC announcing this on 11th April. I got CEA’s March 2017 update saying they’d decided to continue with the funds later on the same day, but I think that goes to a much narrower list and in the interim I was confused and was going to ask someone about it. Checking now it looks like CEA actually announced this on their blog on 10th April (see below link), but again presumably lots of GWWC members don’t read that.
https://www.centreforeffectivealtruism.org/blog/cea-update-march-2017/
Kerry can confirm or deny but I think he’s referring to the fact that a bunch of people were surprised to see (e.g.? Not sure if there were other cases.) GWWC start recommending the EA funds and closing down the GWWC trust recently when CEA hadn’t actually officially given the funds a ‘green light’ yet.
Correct. We had updated in favor of EA Funds internally but hadn’t communicated that fact in public. When we started linking to EA Funds on the GWWC website, people were justifiably confused.
I’m concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn’t gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger but you shouldn’t update in favor of it being well received based on more recent data.
The money moved is the strongest new data point.
It seemed quite plausible to me that we could have the community be largely supportive of the idea of EA Funds without actually using the product. This is more or less what happened with EA Ventures—lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.
Do you feel that the post as currently written still overhypes the community’s perception of the project? If so, what changes would you suggest to bring it more in line with the observable evidence?
This is more or less what happened with EA Ventures—lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.
It seems like the character of the EA movement especially needs to be improved somehow (though, as always, there are probably marginal improvements to be made to the implementation too), because arguably, if EA could reliably spawn many projects, its impact would be increased many-fold.
But the first sentence of the Wikipedia page for NPS, which I’m sure the author read at least the first line of, given that he linked to it, states that NPS is “a management tool that can be used to gauge the loyalty of a firm’s customer relationships” (emphasis mine). However, EA Funds isn’t a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company’s customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you’ve made this assumption) betrays a lack of intent to honestly inform EAs.
I think your concern is that since NPS was developed with for-profit companies in mind, we shouldn’t assume that a +50 NPS is good for a nonprofit.
If so, that’s fair and I agree.
When people benchmark NPS scores, they usually do it by comparing NPS scores in similar industries. Unfortunately, I don’t know of any data on NPS scores for nonprofits like ours (e.g. consumer-facing and providing a donation service). I think the information about what NPS score is generally considered good is helpful for understanding why we updated in favor of EA Funds persisting past the three-month trial.
Is it your view that a) I shouldn’t have included NPS data at all, b) I shouldn’t have included information about what scores are good, or c) I should have caveated the paragraph more carefully?
I’ve spoken with a couple EAs in person who have mentioned that making the claim that “EA Funds are likely to be at least as good as OPP’s last dollar” is harmful… would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually
I’m not sure I follow the concern here.
Are you arguing that a) the “OPP’s last dollar” content is not attempting to provide an argument, or b) it’s wrong to give an argument if the argument causes guilt as a side effect, or are you arguing for something else?
I’d be willing to defend that it’s acceptable to make arguments for a position even if those arguments have the unintended consequence of causing guilt.
Writing this made me cry, a little. It’s late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can’t trust anyone I haven’t personally vetted to communicate honestly with me.
There are a range of reasons that this is not really an appropriate way to communicate. It’s socially inappropriate, it could be interpreted as emotional blackmail, and it could encourage trolling.
It’s a shame you’ve been upset. Still, one can call others’ writing upsetting, immoral, mean-spirited, etc etc etc—there is a lot of leeway to make other reasonable conversational moves.
Ryan, I substantially disagree and actually think all of your suggested alternatives are worse. The original is reporting on a response to the writing, not staking out a claim to an objective assessment of it.
I think that reporting honest responses is one of the best tools we have for dealing with emotional inferential gaps—particularly if it’s made explicit that this is a function of the reader and writing, and not the writing alone.
I’ve discussed this with Owen a bit further. How emotions relate to norms of discourse is a tricky topic but I personally think many people would agree on the following pointers going forward (not addressed to Fluttershy in particular):
Dos:
flag your emotions when they are relevant to the discussion. e.g. “I became sick of redrafting this post so please excuse if it comes across as grumpy”, or “These research problems seem hard and I’m unmotivated to try to work more on them”.
discuss emotional issues relevant to many EAs
Don’ts:
use emotion as a rhetorical boost for your arguments (appeal to emotion)
mix arguments together with calls for social support
mix arguments with personal emotional information that would make an EA (or regular) audience uncomfortable.
Of course, if you want to engage emotionally with specific people, you can use private messages.
given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds
This is wholly speculative. I’ve seen no evidence that consequentialists “feel bad” in any emotionally meaningful sense for having made donations to the wrong cause.
This is the same sort of effect people get from looking at this sort of advertising, but more subtle
Looking at that advertising slightly dulled my emotional state. Then I went on about my day. And you are worried about something that would even be more subtle? Why can’t we control our feelings and not fall to pieces at the thought that we might have been responsible for injustice? The world sucks and when one person screws up, someone else is suffering and dying at the other end. Being cognizant of this is far more important than protecting feelings.
if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the “at least as good as OPP” slogan.
I think you ought to place a bit more faith in the ability of effective altruists to make rational decisions.