These criticisms are neither new nor particularly compelling to me.
The argument about justice has no real force unless Gabriel is willing to apply it to an actual intervention. While it's possible to concoct thought experiments in which Gabriel's notion of a "thick" effective altruist makes a decision that Gabriel doesn't like, IMO this rarely comes up in practice, and the paper offers no real-world examples. So, not very exciting: there's already a whole cottage industry of thought experiments in which utilitarians do silly things.
Also, the claim that EA systematically neglects the worst-off is ridiculous on its face: many EAs explicitly use a heuristic of trying to help the worst-off, and most other attempts at improving the world seem to fare far worse than EA on this count.
WRT the charges of "materialism", "individualism", and "instrumentalism": again, people have been making these for a while and I still don't find them compelling. First, it seems that Gabriel has little idea what Open Phil (or other non-global-poverty-focused EAs) are up to, as the description of "EA methodology" really only applies to a fairly narrow segment of organizations.
Second, I think it's pretty clear that GiveWell is aware of the limits of RCT evidence and tries to think through consequences of its interventions that the RCTs might not pick up. If Gabriel wants to argue that this "blindness" has caused GiveWell to make actual bad decisions, then I'd be interested in hearing that argument. But claiming that "something is wrong with EA because I think that GiveWell would make this obviously bad call in a contrived thought experiment" is a long way from such an argument. Again, claims that "more needs to be done about this" without specific criticisms of actual decisions (as opposed to criticisms of what Gabriel imagines GiveWell would do) don't carry much force with me.