The criticism of the concept of “effective altruism,” and the second main criticism to the extent that it’s related to it, also feels odd to me. Altruism in the sense of only producing good is not realistically possible. By writing your Wired article, say, it is overwhelmingly likely that you set off a chain of events that will cause a different sperm to fertilize a different egg, which will set off all sorts of other chains of events meaning that in hundreds of years we will have different people, different weather patterns, different all sorts of stuff. So writing this will cause untold deaths, untold suffering, that wouldn’t have occurred otherwise. So too for your friend Aaron in the Wired article helping the people on the island, and so too for anything else you or anyone might do.
So either altruism is something other than doing only good, or altruism is impossible and the most we can hope for is some kind of approximation. It wouldn’t follow that maximizing expected value (EV) is the best way to be (or approximate being) altruistic, but the mere fact that the actions EAs take are like all other actions in that they have some negative consequences is not in itself much of a criticism.