I’m not sure which part of my comment this comment is in response to. I initially thought it was posted under my response to Berke’s comment below and am responding with that in mind, so I’m not 100% sure I’m reading your response correctly; apologies if this is off the mark.
We’ve been getting flak for being over-reliant on quantitative analysis for some time. However, critics of EA insider insularity are also taking aim at times when EA has invested money in interventions, like Wytham Abbey, based on qualitative judgments of insider EAs.
I think the issue around qualitative vs. quantitative judgement in this context is mainly on two axes:
When it comes to cause prioritization, the causality behind some factors and interventions is harder to pin down in clear, quantitative terms. For example, it’s relatively easy to figure out how many lives something like a vaccine or bed net distribution can save with RCTs, but it’s much harder to figure out what the actual effect of, say, 3 extra years of education is for the average person. You can get some estimates, but it’s not easy to clearly delineate the actual cause of the observed results (is it the diploma, the space for intellectual exploration, the peer engagement, the structured environment, the actual content of the education, the opportunities for maturing in a relatively low-stakes environment…?). There are a lot of confounding and intertwined factors, and it’s not easy to isolate the cause. I had a professor who loved to point to single-parent households as an example of the difficulty of establishing causality: is the absence of one parent the problem, or is it the reasons the parent is absent? These kinds of questions are better answered with qualitative research, but they don’t quantify easily and you can’t run something like an RCT on them, which makes them less measurable in a clear-cut way. I’m personally a huge fan of qualitative research for impact assessment, but such studies have smaller sample sizes and don’t tend to “generalize” the way RCTs etc. do (how well other types of study generalize is a whole other question, but it seems to be taken more or less as given here, and I don’t think the way it’s treated is problematic on a practical scale).
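To make the confounding point concrete, here’s a minimal, purely hypothetical simulation (the variable names, effect sizes, and data are all made up for illustration, and it assumes Python with numpy): an unobserved factor like household resources drives both who gets the extra schooling and the measured outcome, so a naive comparison of means attributes part of the confounder’s effect to education itself. That gap is what randomization is designed to close, and what’s hard to close without it.

```python
# Illustrative sketch only: an unobserved confounder ("resources") affects both
# who gets 3 extra years of education and the outcome, so a naive comparison
# of group means overstates the effect of education itself.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

resources = rng.normal(size=n)                             # unobserved confounder
extra_years = 3 * (resources + rng.normal(size=n) > 0.5)   # 0 or 3 extra years, skewed by resources
true_effect = 0.2                                          # assumed per-year effect of education
outcome = true_effect * extra_years + 1.0 * resources + rng.normal(size=n)

# Naive "quantitative" estimate: difference in mean outcomes divided by 3 years
naive = (outcome[extra_years == 3].mean() - outcome[extra_years == 0].mean()) / 3
print(f"true per-year effect: {true_effect}, naive estimate: {naive:.2f}")
# The naive estimate comes out well above 0.2 because it also picks up the
# resources -> outcome path; randomizing who gets the extra schooling (an RCT)
# would break that path.
```

This is of course the easy case where the confounder is a single known variable; the point of the education example is that in practice the intertwined factors are many and mostly unmeasured, which is where qualitative work earns its keep.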
That being said, there is a big difference between a qualitative research study and the “qualitative judgments of insider EAs.” I think the qualitative reasoning presented in the comments in the thread about the Abbey (personal experiences with conferences, etc.) is valuable, but it doesn’t rise to the level of rigor of an actual qualitative research study; it’s anecdote.
I think it’s time for us to go past the “qualitative vs quantitative” debate, and try to identify what an appropriate context and high-quality work looks like using both reasoning styles.
I absolutely agree with this and am a strong proponent of methodological flexibility and mixed-methods approaches, but while doing so I think it’s important to keep in mind the difference between qualitative reasoning based on personal experiences and qualitative reasoning based on research studies and data. “Quantitative reasoning” tends to implicitly include (presumably) rigorously collected data, while “qualitative reasoning” as used in your comment (which I think does reflect colloquial usage, unfortunately) does not.