I skimmed through the article; thanks for sharing!
Some quick thoughts:
“...community-members are fully aware that EA is not actually an open-ended question but a set of conclusions and specific cause areas”
The cited evidence here is one user claiming this is the case; I think they are wrong. For example, if there were a dental hygiene intervention that could help, let’s say, a hundred million individuals and government / other philanthropic aid were not addressing this, I would expect a CE-incubated charity to jump on it immediately.
There are other places where the author makes what I would consider sweeping generalizations or erroneous inferences. For instance:
“...given the high level of control leading organizations like the Centre for Effective Altruism (CEA) exercise over how EA is presented to outsiders” — The evidence cited here is mostly all the guides that CEA has made, but I don’t see how this translates to “high level of control.” EAs and EA organizations don’t have to adhere to what CEA suggests.
“The general consensus seems to be that re-emphasizing a norm of donating to global poverty and animal welfare charities provides reputational benefits...” — upvotes to a comment ≠ general consensus.
Table 1, especially the Cause neutrality section, seems to wedge a line where one doesn’t exist.
The author acknowledges in the Methodology section that they didn’t participate in EA events or groups and mainly used internet forums to guide their qualitative study. I think this is the critical drawback of this study. Some of the most exciting things happen in EA groups and conferences, and I think the conclusion presented would be vastly different if the qualitative study had included this data.
I don’t know what convinces the article’s author to imply that there is some highly coordinated approach to funnel people into the “real parts of EA.” If this were true (and here’s my tongue-in-cheek remark), I would suggest these core people not spend >50% of the money on global health, as there could be cheaper ways of maintaining this supposed illusion.
Overall, I like the background research done by the author, but I think the author’s takeaways are inaccurate and seem too forced. At least to me, the conclusion is reminiscent of the discourse around conspiracies such as the deep state or the “plandemic,” where there is always a secret group, a “they,” advancing their agenda while puppeteering tens of thousands of others.
Much more straightforward explanations exist, which aren’t entertained in this study.
EA is more centralized than most other movements, and it would be ideal to have several big donors with different priorities and worldviews. However, EA is also more functionally diverse and consists of some ten thousand folks (and growing), each of whom is a stakeholder in this endeavor and will collectively define the movement’s future.