I learned from stories 1 and 2 - thanks for the information!
Story 3 feels like it suffers from a lack of familiarity with EA and argues against a straw version. E.g. you write (emphasis added):
> As the community grew it spread into new areas – Animal Charity Evaluators was founded in 2012 looking at animal welfare – the community also connected to the rationalist community that was worried about AI and to academics at FHI thinking about the long term future. Throughout all of this expected value calculations remained the gold star for making decisions on how to do good. The idea was to shut up and multiply. Even as effective altruism decision makers spread into areas of greater and greater uncertainty they (as far as I can tell) have mostly continued to use the same decision making tools (expected value calculations), without questioning if these were the best tools.
By 2011 GiveWell had already published “Why we can’t take expected value estimates literally (even when they’re unbiased)”, arguing against, well, taking expected value calculations literally, critiquing GWWC’s work on that basis, and discussing how their solution avoided Pascal’s Mugging. There was a healthy discussion in the comments, and the cross-post on LessWrong got 100 upvotes and 250 comments.
I just voted for the GFI, AMF, and GD videos because of your comment!
Even if that’s not what edoard meant, I would be interested in hearing the answer to ‘what are things you would say if you didn’t need to be risk averse?’!
I hope ImpactMatters does well!
Meta: A big thank you to Buck for doing this and putting so much effort into it! This was very interesting and will hopefully encourage more public dissemination of knowledge and opinions.
I agree with Issa about the costs of not giving reasons. My guess is that over the long run, giving reasons why you believe what you believe will be a better strategy to avoid convincing people of false things. Saying you believed X and now believe ~X seems like it’s likely to convince people of ~X even more strongly.
What other crazy ideas do you have about EA outreach?
I think there may be a misunderstanding – the title of this post is “Feedback Collected by CEA”, not “for” CEA.
This is fair, but I want to give some examples of why I thought this document was about feedback on CEA, in the hope of helping communication around this in the future. Even after your clarification, the document still gives me the strong impression that the feedback is about CEA, rather than about the community in general. Below are some quotes that make it sound that way to me, with emphasis added:
> Summary of Core Feedback Collected by CEA in Spring/Summer 2019
The title doesn’t mention what the feedback is about. I think most people would assume that it refers to feedback about CEA, rather than the community overall. That’s what I assumed.
> CEA collects feedback from community members in a variety of ways (see “CEA’s Feedback Process” below). In the spring and summer of 2019, we reached out to about a dozen people who work in senior positions in EA-aligned organizations to solicit their feedback. We were particularly interested to get their take on execution, communication, and branding issues in EA. Despite this focus, the interviews were open-ended and tended to cover the areas each person felt was important.
>
> This document is a summary of their feedback. The feedback is presented “as is,” without any endorsement by CEA.
It’s not clearly stated what the feedback is about (“CEA collects feedback” and “solicit their feedback”, with no elaboration). The closest it gets to specifying the scope is the mention that CEA was particularly interested in feedback on execution, communication, and branding issues in EA. This is still fairly vague, and “branding” to me implies that the feedback is about CEA. It does say “...issues in EA”, but I didn’t attach much importance to that.
> This post is the first in a series of upcoming posts where we aim to share summaries of the feedback we have received.
In general, I assume that feedback to an organization is about the organization itself.
> CEA has, historically, been much better at collecting feedback than at publishing the results of what we collect.
While it is again unclear what “feedback” refers to, in general I would expect this to mean feedback about CEA.
> As some examples of other sources of feedback CEA has collected this year:
>
> - We have received about 2,000 questions, comments and suggestions via Intercom (a chat widget on many of CEA’s websites) so far this year
> - We hosted a group leaders retreat (27 attendees), a community builders retreat (33 attendees), and had calls with organizers from 20 EA groups asking about what’s currently going on in their groups and how CEA can be helpful
> - Calls with 18 of our most prolific EA Forum users, to ask how the Forum can be made better.
> - A “medium-term events” survey, where we asked everyone who had attended an Individual Outreach retreat how the retreat impacted them 6-12 months later. (53 responses)
> - EA Global has an advisory board of ~25 people who are asked for opinions about content, conference size, format, etc., and we receive 200-400 responses to the EA Global survey from attendees each time.
All of these are examples of feedback about CEA or its events and activities. There are no examples of feedback about the community.
I think the confusion comes from the lack of clear elaboration in the title and/or beginning of the document of what the scope of the feedback was. Clarifying this in the future should eliminate this problem.
Note: The comment you and Ben replied to seems to have disappeared
I’m really excited about subscribing and bookmarking! Pingbacks also seem useful.
> EA headlining money and health as a cause priority while dropping education. + spending no money on education is straight out saying a lot about the priorities of EA.
>
> EA gives zero value to education, and that is fundamentally wrong.
I don’t think the last sentence follows from the ones before it. EA is fundamentally about doing the most good possible, not about doing good in every area that is valuable. EA will (hopefully) always be about focusing on the relatively few areas where we can do the most good. Not funding almost everything in the world doesn’t mean that EA thinks that almost everything in the world has zero value. It is true that education for the sake of education is not a priority for EAs, but it doesn’t mean that EAs think that education isn’t important. In fact EA is very disproportionately composed of highly educated people—presumably at least some of these people value education highly.
I’ve been impressed by the work being produced by Rethink Priorities over the past several months. I appreciate the thought and nuance that went into this. Great job again!
I want to echo this. I would love to see CEA talk more about what they see as their mistakes and achievements, but this felt like a confusing mixture of feedback about some aspects of CEA (mostly EA Global, EA Forum, and the Community Health team) and some general feedback about the EA community that CEA only has partial control over. While CEA occupies an important position in EA, there are many factors beyond CEA that contribute to whether EA community members are smart and thoughtful or whether they’re not welcoming enough.
Update: The pictures load for me now
None of the images display for me either. This is what it looks like for me:
Let’s see how this works graphically. First, we start with tractability as a function of dollars (crowdedness), as in Figure 1. With diminishing marginal returns, “% solved/$” is decreasing in resources.
Next, we multiply tractability by importance to obtain MU/$ as a function of resources, in Figure 2. Assuming that Importance = “utility gained/% solved” is a constant, all this does is change the units on the y-axis, since we’re multiplying a function by a constant.
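Making the units explicit, the multiplication in that step is just:

$$\frac{\text{MU}}{\$} = \underbrace{\frac{\text{utility gained}}{\%\text{ solved}}}_{\text{importance (constant)}} \times \underbrace{\frac{\%\text{ solved}}{\$}}_{\text{tractability (decreasing)}}$$

so the MU/$ curve keeps the shape of the tractability curve from Figure 1, rescaled by the constant importance.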
Now we can clearly see the amount of good done for an additional dollar, for every level of resources invested. To decide whether we should invest more in a cause, we calculate the current level of resources invested, then evaluate the MU/$ function at that level of resources. We do this for all causes, and allocate resources to the highest MU/$ causes, ultimately equalizing MU/$ across all causes as diminishing returns take effect. (Note the similarity to the utility maximization problem from intermediate microeconomics, where you choose consumption of goods to maximize utility, given their prices and subject to a budget constraint.)
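To make the allocation rule concrete, here is a minimal sketch in Python. The 1/(resources + c) form for tractability and all the numbers are made up purely for illustration; any decreasing function with diminishing returns would behave the same way.

```python
# Illustrative sketch of the "fund the highest MU/$ cause" rule described
# above. The functional form and numbers are hypothetical.

def mu_per_dollar(importance, resources, c=1.0):
    """Marginal utility per dollar: importance * tractability."""
    tractability = 1.0 / (resources + c)  # % solved per $, decreasing in resources
    return importance * tractability

# Hypothetical causes: [importance = utility per % solved, current $ invested]
causes = {"A": [3.0, 0.0], "B": [1.0, 0.0]}

budget, step = 100.0, 0.01
while budget > 0:
    # Give the next marginal dollar to whichever cause has the highest MU/$
    best = max(causes, key=lambda k: mu_per_dollar(*causes[k]))
    causes[best][1] += step
    budget -= step

for name, (imp, res) in causes.items():
    print(f"{name}: ${res:.0f} invested, MU/$ = {mu_per_dollar(imp, res):.4f}")
# With diminishing returns, the loop drives MU/$ toward equality across causes.
```

With these illustrative numbers, the loop ends up putting roughly $75 into A and $25 into B, at which point their MU/$ values are nearly equal, which is exactly the equalization described above.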
It might be good to have a small number of runner-up posts without cash prizes. That would certainly help motivate me to post more.
Can you expand on how this influenced you?
While I think that was a valuable post, the definition of ideology in it is so broad that even things like science and the study of climate change would be ideologies (as kbog points out in the comments). I’m not sure what system or way of thinking wouldn’t qualify as an ideology based on the definition used.
Datapoint for Hauke: I’m also very interested in this topic and in Hauke’s thoughts on it, but I found that the formatting made it difficult for me to read the post fully.
A general comment about this thread rather than a reply to Khorton in particular: The original post didn’t suggest that this should be a brainstorming thread, and I didn’t interpret it like that. I interpreted it as a question looking for answers that the posters believe, rather than only hypothesis generation/brainstorming.