I’ve been thinking of distilling some of the criticism of EA that I hear into similar, clearly attackable foundational claims.
One thing I would add is the very individualistic view of impact. We act as individuals to maximize (expected) individual impact. This means things like founding an org, choosing your career, and spending time deciding where your money goes. Collective action would instead mean empowering community-controlled institutions that make decisions through a democratic process of consensus-building. As it stands, our coordination mechanisms rely on trusting a few decision-makers who direct large amounts of funding. This is a consequence of the EA movement having been very small in the past.
Also, it seems we are obsessed with the measurable. That goes as far as defining “good” in a way that does not directly include complex relationships. Strict QALY maximizers would be okay with eugenics. I don’t even know how to approach a topic like ecosystem conservation from an EA perspective.
I think in general we should be aware that our foundational assumptions are only a simplified model of what we actually want. They can serve us fine for directly comparing interventions, but when they lead to surprising conclusions, we should take a step back and examine whether we have simply found a weak spot in the model.
One thing I would add is the very individualistic view of impact. We act as individuals to maximize (expected) individual impact.
Personally I wouldn’t agree with that. Effective altruists have been at pains to emphasise that we “do good together”; that was even the theme of a past EA Global, if I remember correctly.
80,000 Hours had a long article on this theme already in 2018: Doing good together: how to coordinate effectively, and avoid single-player thinking. There was also a 2016 piece called The value of coordination on similar themes.
Also, it seems we are obsessed with the measurable.
I take a different view on that, too. For instance, Katja Grace wrote a post already in 2014 arguing that we shouldn’t refrain from interventions that are high-impact but hard to measure. That article was included in the first version of the EA Handbook (2015).
In fact, many of the causes currently popular with effective altruists, like AI safety and biosecurity, seem hard to measure.
Thanks for the very useful links, Stefan! I think the usefulness of coordination is widely agreed upon, but we’re still not working together as well as possible. The 80,000 Hours article you linked even states:
Instead, especially in effective altruism, people engage in “single-player” thinking. They work out what would be the best course of action if others weren’t responding to what they do.
I’ll go and spend some time with these topics.
I expect most EAs would be self-critical enough to see these both as frequently occurring flaws in the movement, but I’d dispute the claim that they’re foundational. For the first criticism, some people track personal impact, and 80k talks a lot about your individual career impact, but people working for EA orgs are surely thinking of their collective impact as an org rather than anything individual. In the same way, ‘core EAs’ have the privilege of actually identifying with the movement enough that they can internalise the impact of the EA community as a whole.
As for measurability, I agree that it is a bias in the movement, albeit probably a necessary one. The ecosystem example is an interesting one: I’d argue that it’s not that difficult to approach ecosystem conservation from an EA perspective. We generally understand how ecosystems work and how they provide measurable, valuable services to humans. A cost-effectiveness calculation would start from the human value of ecosystem services (which environmental economists routinely estimate) and, if you want to give inherent value to species diversity, add the number of species within a given area, the number of individuals of those species, the rarity/external value of species, etc. Then add weights according to various criteria to get something like an ‘ecosystem value per square metre’, which you could compare across ecosystems. Calculate the cost of conserving various ecosystems around the world, and voilà, you have a cost-effectiveness analysis that feels at home on an EA platform. The reason this process doesn’t feel 100% EA is not that it’s difficult to measure, but that it can include value judgements that aren’t related to the welfare of conscious beings.
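To make the calculation sketched above concrete, here is a minimal illustration in Python. Everything in it is hypothetical: the function names, the weights, and all the numbers are invented for the sake of the sketch, and a real analysis would draw on environmental-economics estimates of service values rather than made-up figures.

```python
# Hypothetical sketch of the ecosystem cost-effectiveness calculation
# described above. All weights and numbers are invented for illustration.

def ecosystem_value_per_m2(service_value_usd, n_species, n_individuals,
                           rarity_score, w_services=1.0, w_diversity=0.5,
                           w_rarity=0.25):
    """Combine measurable service value with an (optional) inherent
    diversity term into a single comparable score per square metre."""
    diversity_term = n_species * n_individuals ** 0.5  # purely illustrative
    return (w_services * service_value_usd
            + w_diversity * diversity_term
            + w_rarity * rarity_score)

def cost_effectiveness(value_per_m2, conservation_cost_usd_per_m2):
    """Ecosystem value conserved per dollar spent."""
    return value_per_m2 / conservation_cost_usd_per_m2

# Two made-up sites to compare:
wetland = ecosystem_value_per_m2(3.0, n_species=40, n_individuals=900,
                                 rarity_score=8)
grassland = ecosystem_value_per_m2(1.5, n_species=25, n_individuals=400,
                                   rarity_score=3)

# Under these invented numbers the cheaper grassland delivers more value
# per dollar, even though the wetland scores higher in absolute terms.
print(cost_effectiveness(wetland, conservation_cost_usd_per_m2=0.8))
print(cost_effectiveness(grassland, conservation_cost_usd_per_m2=0.2))
```

The point is only that once you commit to weights, the comparison becomes mechanical; the contestable part is the value judgements baked into the weights, which is exactly the conclusion of the comment above.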