I’m not a community builder, but I’d like to share some observations as someone who has been involved with EA since around 2016 and has only gotten heavily involved with the “EA community” over the past year. I’m not sure how helpful they’ll be, but I hope they’re useful to you and others.
I strongly agree with Karthik’s comment that focusing on highly-engaged EAs as the desired result of community building is counterproductive to learning. I think part of this comes down to the relative inexperience of both members and group leaders, particularly in university groups. There seems to be a lot of focus on convincing people to get involved in EA rather than facilitating their engagement with the ideas, and this seems to produce a selection effect where only members who wholly buy into EA as a complete guide to doing good stick with the community, creating an intellectual echo chamber of sorts where people don’t feel very motivated to meaningfully engage with non-EA perspectives.
One reflection of this unwillingness to engage that I’ve come across recently is EAs online asking how best to defend point X or Y, or how best to respond to a certain criticism. The framing of these questions as “how do I convince this person that X is right/wrong”, “which arguments work best on people who believe Y”, or “how do I respond to criticism Z” makes it apparent to me that they are not interested in understanding the other perspective so much as “defeating” it, and that they are trying to defend ideas or points they are not convinced of themselves (as demonstrated by the fact that they can’t respond to the criticisms on their own but still feel the need to defend the point), presumably because it’s an EA talking point.
Another issue I’ve seen in similar online spaces is a sneering, morally superior attitude towards “fuzzies” and non-utilitarian approaches to doing good. This is hostile to non-EAs, which makes them less likely to be willing to engage, and it also demonstrates an unwillingness to engage on the EA side. I’m not sure how prevalent this kind of thing is or how it can be counteracted, but it may be worth thinking about.
While not as severe, I think it may be worth looking into discussion norms in this context as well. EAs as a community tend to value highly polished arguments that are backed with evidence, framed in their preferred modes of analysis (Bayesian analysis, utilitarian calculus, expected value, etc.), and presented in a very “neutral”, “unemotional” tone. There have been posts on this forum over the past few weeks both pointing this out and exemplifying it in the responses. While I do agree with the criticisms of these discussion norms, I think it’s fairly easy to see that they present an obstacle to learning regardless of how one feels about them. If our intention is to learn from others, EAs need to be able to meaningfully engage with perspectives that are not presented in this preferred style, and to engage with content over style and presentation, particularly where criticisms or fundamental differences of opinion are concerned.
I’ve spoken to multiple community builders, for both university groups and local groups, who expressed frustration or disappointment at not being able to get members to “engage” with EA because members weren’t making career changes into direct work on EA causes. I think this is not only a bad approach to community building for the reasons stated above, but that it also creates a dynamic where people who could be doing good work and learning elsewhere are implicitly told that this kind of work is not valuable, which both alienates people who are not able to find direct work and further implies that non-EA work is valueless. This is probably something that can be addressed both in community building best practices and by tweaking any existing incentive structures for community building to place less emphasis on highly-engaged EAs as the desired end result.
We’ve been getting flak for being over-reliant on quantitative analysis for some time. However, critics of EA insider insularity are also taking aim at times when EA has invested money in interventions, like Wyndham Abbey, based on the qualitative judgments of insider EAs. I think there’s also concern that our quantitative analysis may simply be done poorly, or even be just a quantitative veneer over what is essentially a qualitative judgment.
I think it’s time for us to move past the “qualitative vs quantitative” debate and try to identify what an appropriate context and high-quality work look like for both reasoning styles.
One change I’d like to see is a set of legibility standards for spending above a certain size. If we’re going to spend $15 million on a conference center based on intuitions about the benefit, we should still publish the rationale, the maintenance costs, and an analysis of how much time will be saved on logistics, in a prominent, accessible location so that people can see what we’re up to. That doesn’t mean we need some sort of public comment process or democratic decision making on all this stuff; we don’t need to bog ourselves down with regulation. But a little more effort to maintain legibility around qualitative decisions might go a long way.
When you buy a conference center, you get an asset worth around what you paid for it. Please, can people stop saying that “we spent $15 million on a conference center”? If we wanted to sell it today, my best guess is we could probably do so for $13-14 million, so the total cost here is around $1-2 million, which is really not much compared to all the other spending in the ecosystem.
There is a huge difference between buying an asset you can utilize and spending money on services, rent, etc. If you compare them directly you will make crazily wrong decisions. The primary things to pay attention to are depreciation, interest, and counterfactual returns, all of which suggest numbers an order of magnitude lower (and indeed move this out of the space where anyone should really worry much about it).
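To make that concrete, here is a rough back-of-envelope sketch of the kind of comparison I have in mind; every number in it is an illustrative assumption, not an actual figure for the venue:

```python
# Back-of-envelope sketch: annualized cost of holding a purchased venue
# vs. treating the headline price as money "spent".
# ALL numbers below are illustrative assumptions, not actual figures.

purchase_price = 15_000_000   # assumed headline purchase price ($)
resale_value = 13_500_000     # assumed amount recoverable if sold today ($)
depreciation_rate = 0.01      # assumed annual depreciation of the building
counterfactual_return = 0.05  # assumed annual return the money could earn elsewhere
annual_maintenance = 150_000  # assumed yearly upkeep ($)

# One-off loss if the purchase were unwound today
sunk_cost = purchase_price - resale_value

# Ongoing yearly cost of holding the asset instead of investing the money
annual_holding_cost = (purchase_price * depreciation_rate
                       + purchase_price * counterfactual_return
                       + annual_maintenance)

print(f"Sunk cost today:     ${sunk_cost:,.0f}")            # $1,500,000
print(f"Annual holding cost: ${annual_holding_cost:,.0f}")  # $1,050,000
```

Under these made-up assumptions the relevant figures are on the order of $1-1.5 million, not $15 million, which is the point of the distinction above.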
I’m aware that the conference center can be sold. The point is that there wasn’t an accessible, legible explanation available. To accept that it was a wise purchase, you either have to do all the thinking for yourself, or defer to the person who made the decision to buy it.
That’s a paradigm EA tried to get away from in the past, and what made it popular, I think, was the emphasis on legibility. That’s partly why 80,000 Hours is popular: while in theory anyone could come to the same conclusions about careers by doing their own research, or just blindly accept a recommendation to pursue a career in X, it’s very helpful to have a legible, clearly argued explanation.
The EA brand doesn’t have to be about quantification, but I think it is about legibility, and we see the consequences when we don’t achieve that: people can’t make sense of our decisions, they perceive them as insular, intuitive decision making, they get mad, they exaggerate downsides and ignore mitigating factors, and they pan us. That’s because we made an implicit promise: that with EA, you would get good, clear reasons you can understand for why we wanted to spend your donations on X and not Y. We were going to give you access to our thought process and let you participate in it. And clearly, a lot of people don’t feel that EA is consistently following through on that.
EA may be suffering from expert syndrome. It’s actually not obvious to casual observers that buying an old, plush-looking country house might be a sensible choice for hosting conferences rather than a status symbol, or that we can always sell it and get most of our money back. If we don’t overcome this and explain our spending in a way where an interested outsider can read it and say “yes, this makes sense, and I trust that this summary reflects smart thinking about the details I’m not inspecting,” then I think we’ll continue to generate heated confusion in our ever-growing cohort of casual onlookers.
If we want to be a large movement, then managing this communication gap seems key.
Assuming that it costs around £6,000 to save a life, that $1-2 million works out to roughly 170-330 lives saved. EAs claim to hold charities to a very high standard in evaluating how they spend money; this shouldn’t stop at the ‘discretionary spending’ of the evaluators.
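Spelling out the arithmetic (treating the $1-2 million as pounds for simplicity, since the exchange rate doesn’t change the order of magnitude):

```python
# Rough lives-saved equivalent of the net cost, under an assumed
# cost-per-life figure; currency conversion is ignored for simplicity.
cost_per_life = 6_000  # assumed £ per life saved

for net_cost in (1_000_000, 2_000_000):
    lives = net_cost / cost_per_life
    print(f"£{net_cost:,} / £{cost_per_life:,} ≈ {lives:.0f} lives")
# ≈ 167 and ≈ 333 lives, i.e. roughly 170-330
```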
I’m not sure which part of my comment this is in response to. I initially thought it was posted under my response to Berke’s comment below and am responding with that in mind, so I’m not 100% sure I’m reading your response correctly; apologies if this is off the mark.
We’ve been getting flak for being over-reliant on quantitative analysis for some time. However, critics of EA insider insularity are also taking aim at times when EA has invested money in interventions, like Wyndham Abbey, based on the qualitative judgments of insider EAs.
I think the issue around qualitative vs. quantitative judgement in this context falls mainly along two axes:
When it comes to cause prioritization, the causality behind some factors and interventions is harder to measure definitively in clear, quantitative terms. For example, it’s relatively easy to figure out how many lives something like a vaccine or bed net distribution can save with RCTs, but it’s much harder to figure out what the actual effect of, say, 3 extra years of education is for the average person. You can get some estimates, but it’s not easy to delineate what the actual cause of the observed results is (is it the diploma, the space for intellectual exploration, the peer engagement, the structured environment, the actual content of the education, the opportunities for maturing in a relatively low-stakes environment… ). This is because there are a lot of confounding and intertwined factors, and it’s not easy to isolate the cause. I had a professor who loved to point to single-parent households as an example of the difficulty of establishing causality: is the absence of one parent the problem, or is it the reasons that the parent is absent? These kinds of questions are better answered with qualitative research, but they don’t quantify easily and you can’t run something like an RCT on them, which makes them less measurable in a clear-cut way. I’m personally a huge fan of qualitative research for impact assessment, but such studies have smaller sample sizes and don’t tend to “generalize” the way RCTs etc. do (how well other types of study generalize is a whole other question, but it seems to be taken more or less as a given here, and I don’t think the way it’s treated is problematic on a practical scale).
That being said, there is a big difference between a qualitative research study and the “qualitative judgments of insider EAs”. I think the qualitative reasoning presented in comments in the thread about the Abbey (personal experiences with conferences, etc.) is valuable, but it doesn’t rise to the level of rigor of an actual qualitative research study; those comments are anecdotes.
I think it’s time for us to move past the “qualitative vs quantitative” debate and try to identify what an appropriate context and high-quality work look like for both reasoning styles.
I absolutely agree with this and am a strong proponent of methodological flexibility and mixed-methods approaches, but while doing so I think it’s important to keep in mind the difference between qualitative reasoning based on personal experience and qualitative reasoning based on research studies and data. “Quantitative reasoning” tends to implicitly include (presumably) rigorously collected data, while “qualitative reasoning” as used in your comment (which I think does reflect colloquial usage, unfortunately) does not.