This is a good paper and well done to the authors.
I think section 3 is very weak. I am not flagging this as a flaw in the argument, just as the area where I see the most room for improvement in the paper and/or the most need for follow-up research. The authors do say that more research is needed, which is good.
Some examples of what I mean when I say the argument is weak:
The paper says it is “reasonable to believe that AMF does very well on prioritarian, egalitarian, and sufficientarian criteria”. “reasonable to believe” is not a strong claim. No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about and evaluate charities on these metrics. This could be done but is not happening.
The paper says Iason Gabriel “fail[s] to show that effective altruist recommendations actually do rely on utilitarianism”, but the paper also fails to show that effective altruist recommendations actually do not rely on utilitarianism.
Etc.
Why I think more research is useful here:
Because when the strongest case you can make for EA to people who hold equality as a core moral intuition begins with “it is reasonable to believe . . . ”, it is very hard to make EA useful to such people. For example, when I meet people new to EA who care a lot about equality, making the case that ‘if you care about minimising suffering, this “AMF” thing comes out on top, and it is reasonable to assume that if you care about equality it could also be at the top, because it is effective and helps the poorest’ carries a lot less weight than saying: ‘hey, we funded a bunch of people who, like you, care foremost about equality, to map out their values and rank charities, and this came out on top.’
Note: cross-posting a summarised comment on this paper from a discussion on Facebook: https://www.facebook.com/groups/798404410293244/permalink/1021820764618273/?comment_id=1022125664587783
“No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about and evaluate charities on these metrics.”
This appears to be demonstrably false, and in very strong terms, given how strong a claim you’ve made and how I only need to find one counterexample to prove it wrong. We have many non-utilitarian egalitarian luminaries making a concerted effort to come up with exactly the metrics that would tell us, based on egalitarian/prioritarian principles, which charities/interventions we should prioritize:
Adam Swift: Political theorist and sociologist specializing in liberal egalitarian ethics, family values, communitarianism, school choice, and social justice.
Ole Norheim: Harvard physician and professor of medical ethics working on distributive theories of justice and fair priority setting in low- and high-income countries. He heads the Priority Setting in Global Health (2012-2017) research project, which aims to do exactly what you claimed nobody is working on.
Alex Voorhoeve: Egalitarian theorist, member of the Priority Setting in Global Health project, featured on the BBC, and, unsurprisingly, a co-author with Norheim.
Nir Eyal: Professor of Global Health and Social Medicine at Harvard, specializing in population-level bioethics. He is currently working on a book that defends an egalitarian consequentialist framework (i.e. instrumental egalitarianism) for evaluating questions in bioethics and political theory.
All of these folks are mentioned in the paper.
I don’t want to call these individuals Effective Altruists without having personally seen or heard them self-identify as such, but they have all publicly pledged 10% of their lifetime income to effective charities via Giving What We Can.
So if the old adage “actions speak louder than words” still rings true, then these non-utilitarians are far “more EA” than any number of utilitarians who publicly proclaim that they are part of effective altruism but then do nothing.
And none of this should be surprising. The 2015 EA Survey shows that only 56% of respondents identify as utilitarian. The linked survey results argue that this sample accurately estimates the actual EA population, which would mean that ~44% of all EAs are non-utilitarian. That’s a lot. So even if utilitarians are the largest single group, of course the rest of us non-utilitarian EAs aren’t just lounging around.
Update: Nir Eyal very much appears to self-identify as an effective altruist despite being a non-utilitarian. See his interview with Harvard EA, specifically about non-utilitarian effective altruism, and this article on effective altruism from 2015. Wikipedia even mentions him as a “leader in Effective Altruism”.
Hi,
We reference a number of lines of evidence suggesting that donating to AMF does well on sufficientarian, prioritarian, and egalitarian criteria; see footnotes 23 and 24. Thus, we provide evidence for our conclusion that ‘it is reasonable to believe that AMF does well on these criteria’. This is, of course, epistemically weaker than claims such as ‘it is certain that AMF ought to be recommended by prioritarians, egalitarians and sufficientarians’. You seem to suggest that concluding with a weak epistemic claim is inherently problematic, but that can’t be right. Surely, if the evidence provided only justifies a weak epistemic claim, making a weak epistemic claim is entirely appropriate.
You seem to criticise us because the movement has not yet provided a comprehensive algorithm mapping values onto actions. But arguing that the movement is failing is very different to arguing that the paper fails on its own terms. It is not as though we frame the paper as: “here is a comprehensive account of where you ought to give if you are an egalitarian or a prioritarian”. As you say, more research is needed, but we already say this in the paper.
Showing that ‘Gabriel fails to show that EA recommendations rely on utilitarianism’ is a different task to showing that ‘EA recommendations do not rely on utilitarianism’. Showing that an argument for a proposition P fails is different to showing that not-P.