Hello! Thank you for such a thoughtful comment. You’re obviously right on the first point that Singer/Ord/MacAskill have tried to appeal to non-utilitarians, and I think that’s great. I just wish, I suppose, that this were more deeply culturally embedded, if that’s a helpful way to put it. (But the fact this is already happening is why I really don’t want to be too critical!)
And I completely agree that you can’t do effective altruism without philosophy or making value-judgements. (Peter made a similar point to yours in a comment on my blog.) But what I’m trying to get at is something slightly different: at a very basic level, most moral theories can get on board with what the EA community wants to do, and while there might be disagreements down the line between utilitarians and adherents of other theories, there’s no reason they can’t share these common goals, and no reason non-utilitarians’ contributions to EA shouldn’t be net-positive by a utilitarian standard. To me that’s quite important, because I think a great benefit of effective altruism as a whole is how well it focusses the mind on making a positive marginal impact, and I would really like to see many more people adopt that kind of mindset, even if they ultimately make subjective choices much further down the line of impact-making that a pure utilitarian disagrees with. (And indeed such subjective and contentious moral choices already happen within EA, because utilitarianism doesn’t tell you straightforwardly how to, for example, weight animal welfare. So I really don’t think this more culturally value-plural form of EA would run into philosophical trouble any more than EAs already do.)
On Gates and Singer’s philosophical similarities, I agree! But I think Gates wears his philosophy much more lightly than most effective altruists do, and has escaped some ire because of it; that’s what I was trying to get at, although I realise it was probably unhelpfully unclear.
Thanks for your response. I don’t think we disagree on as much as I thought, then! I suppose I’m less confident than you that those disagreements down the line aren’t going to lead to the same sort of backlash that we currently see.
If we see EA as a community of individuals who are attempting to do good better (by their own lights), then while I certainly agree that the contributions of non-utilitarians are net-positive from a utilitarian perspective, we utilitarian EAs (including leaders of the movement, who some might say have an obligation to be more neutral for PR purposes) may still think it’s best to try to persuade others that our preferred causes should be prioritised, even at the cost of bad PR and of turning away some non-utilitarians. And given that philosophy may cause people to decisively change their views on prioritisation, spreading certain philosophical views may also be important.
I guess I am somewhat cheekily attempting to shift the burden of responsibility back onto non-utilitarians. As you say, even people like Torres are on board with the core ideas of EA, so in my view they should be engaging in philosophical and cause prioritisation debates from within the movement (as EAs do all the time, as you note) instead of trying to sabotage the entire project. But I do appreciate that this has become more difficult to do. I think it’s true that the ‘official messaging’ has subtly moved away from the idea that there are different ‘wings’ of EA (global health, animal welfare, existential risk) and toward an idea that not everyone will be able to get on board with (though I still think they should be able to, like many existing non-utilitarian EAs).
Trust seems to be important here. EAs can have philosophical and cause prioritisation disagreements while trusting that people who disagree with them are committed to doing good and are probably doing some amount of good (longtermists can think global health people are doing some good, and vice versa). Similarly, two utilitarians can, as you say, disagree empirically about the relative intensity of pleasure and suffering in different species without suspecting that the other isn’t making a good-faith attempt to understand how to maximise utility. On the other hand, critics like Torres and possibly some of the others you mentioned may think that EA is actively doing harm (and/or that prominent EAs are actively evil). One way it could be doing harm is by diverting resources away from the causes they think are important (and instead of arguing for those causes from within the movement, they may, on consequentialist grounds, think it’s better to try to damage the movement).
All of this is to say that I think these ‘disagreements down the line’ are mostly to blame for the current state of affairs and can’t really be avoided, while conceding that ‘official EA messaging’ has also played its part (though, as a take-no-prisoners utilitarian, I’m not really sure whether that’s net-negative or not!).