In general I would be very wary of taking definitions written for an academic philosophical audience and relying on them in other situations. Often the use of technical language by philosophers does not carry over well to other contexts.
The definitions and explanations used here: https://www.effectivealtruism.org and here: https://whatiseffectivealtruism.com/ are, in my mind, better and more useful than the quote above for almost any situation I have been in to date.
ADDITIONAL EVIDENCE FOR THE ABOVE
For example, I have a very vague memory of talking to Will about this and concluding that he had a slightly odd and quite broad definition of “welfarist”, where “welfare” in this context just meant ‘good for others’ without any implications of fulfilling happiness / utility / preference / etc. This comes out in the linked paper, in the line “if we want to claim that one course of action is, as far as we know, the most effective way of increasing the welfare of all, we simply cannot avoid making philosophical assumptions. How should we value improving quality of life compared to saving lives? How should we value alleviating non-human animal suffering compared to alleviating human suffering? How should we value mitigating risks ….” etc.
The thing I find confusing about what Will says is:
“effective altruism is the project of using evidence and reason to figure out how to benefit others”
I draw attention to ‘benefit others’. Two of EA’s main causes are farm animal welfare and reducing risks of human extinction. The former is about causing happy animals to exist rather than miserable ones, and the latter is about ensuring future humans exist (and trying to improve their welfare). But it doesn’t really make sense to say that you can benefit someone by causing them to exist. It’s certainly bizarre to say it’s better for someone to exist than not to exist, because if the person doesn’t exist there’s no object to attach any predicates to. There’s been a recent move by some philosophers, such as McMahan and Parfit, to say it can be good (without being better) for someone to exist, but that just seems like philosophical sleight of hand.
A great many EA philosophers, including, I think, Singer, MacAskill, Greaves, and Ord, either are totalists or are very sympathetic to totalism. Totalism is the view that the best outcome is the one with the largest sum of lifetime well-being of all people (past, present, and future); it is known as an impersonal view in population ethics. Outcomes are not deemed good, on impersonal views, because they are good for anyone, or because they benefit anyone; they are good because there is more of the thing which is valuable, namely welfare.
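To spell out what that ranking looks like, here is a minimal sketch in symbols; the notation ($P(o)$, $w_i$, $V$) is mine, introduced only for illustration, and not taken from Will's paper or anyone's comment here:
$$V(o) = \sum_{i \in P(o)} w_i$$
where $P(o)$ is the set of everyone who ever lives in outcome $o$ and $w_i$ is person $i$'s lifetime well-being. The totalist ranks outcome $o$ above $o'$ just in case $V(o) > V(o')$. Nothing in that comparison requires that any particular person be better off in $o$ than in $o'$, which is why the view is described as impersonal.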
So there’s something fishy about saying EA is trying to benefit others when many EA activities, as mentioned, don’t benefit anyone, and many EAs think we shouldn’t, strictly, be trying to benefit people so much as realising more impersonal value. It would make more sense to replace ‘benefit others as much as possible’ with ‘do as much good as possible’.
Does it harm someone to bring them into existence with a life of intense suffering?
No. It might be impersonally bad though.
On your view, is it good for someone to prevent them from dying? Doesn’t the same argument apply—if the person doesn’t exist (is dead) there’s no object to attach any predicates to.
No, I also don’t think it makes sense to say death is good or bad for people. Hence it’s not true to say you benefit someone by keeping them alive. Given most people do want to say there’s something good about keeping people alive, it makes sense to adopt an impersonal locution.
I’m not making an argument about what the correct account of ethics is here; I’m just making a point about the correct use of language. Will’s definition can’t be capturing what he means and is thus misleading, so ‘do the most good’ is better than ‘benefit others’.
In line with the above, one could stick with the EA definition and, when asked to gloss it, say that different people understand benefitting others in different ways: some in such a way that creating new people etc. counts as a benefit, others not. One downside of that is that it excludes the logically possible option of [your account of benefitting others; morality isn’t all about benefitting others, sometimes it’s about impersonal good].
On your account, as you say, bringing people into a life of suffering doesn’t harm them and preventing someone from dying doesn’t benefit them. So, you could also have said “lots of EA activities are devoted to preventing people from dying and preventing lives of suffering, but neither activity benefits anyone, so the definition is wrong”. This is a harder sell, and it seems like you’re just criticising the definition of EA on the basis of a weird account of the meaning of ‘benefitting others’.
I would guess that the vast majority of people think that preventing a future life of suffering and saving lives both benefit somebody. If so, the vast majority of people would be committed to something which denies your criticism of the definition of EA.
The account might be uncommon in ordinary language, but most philosophers accept that creating lives doesn’t benefit the created person. I’m at least being consistent, and I don’t think that consistency is objectionable. Calling the view weird is unhelpful.
But suppose people typically think it’s odd to claim you’re benefiting someone by creating them. Then the stated definition of what EA is about will be at least somewhat misleading to them when you explain EA in greater detail. Consistent with other things I’ve written on this forum, I think EA should take avoiding being misleading very seriously.
I’m not claiming this is a massive point, it just stuck out to me.
Agreed, weirdness accusation retracted.
I suppose there are two ways of securing neutrality: letting people pick their own meaning of ‘doing good’, and letting people pick their own meaning of ‘benefiting others’.
All points make sense. I find, however, that when introducing the idea, people seem slightly confused by the idea of “doing as much good as possible” (I tend to use nearly identical phrasing). I think the idea seems too abstract to them, and I feel compelled to give some kind of more concrete example to help explain. Although I haven’t really tried it out as an alternative, the idea of EA aiming to “benefit others” seems like it might be slightly clearer / more imaginable?
If you agree, this then raises the question of whether we should distinguish a definition of EA for “academic” and “outreach” / explanatory purposes. I’d argue that we should probably avoid separating out a definition for different contexts, so we might need to keep thinking about how to word a definition which is clear, but also allows for nuance?
I’d agree with being hesitant to distinguish definitions of EA for “academic” and “outreach” purposes. It seems like that’s asking for someone to use the wrong definition in the wrong context.
Really? “doing as much good as possible” is confusing people? I tend to use that language, and I haven’t noticed people getting confused (maybe I haven’t been observant enough!)
Aren’t you going further from the definition though?
Any short definition of EA, by itself, I find to be abstract. Most people I encounter assume it’s about doing as many small good things as possible, or, worse, that it’s a political philosophy (red/blue thinking). It’s only when I give examples of myself or ask what their cause interests could be that they slowly break away from the abstract dictionary definitions.
Maybe “confusing” was the wrong word. But I tend to get the sense that people just have no idea what the concept means in practice when I say that, because it’s so vague / abstract. I’m guessing that people are thinking along the lines of “what does he mean by ‘doing good’? Surely he means something else / something more specific?” But I might just be misreading people slightly too.
It’s not confusing, but it’s vague.
I’ve often observed your lack of observance :)
Literally everything that doesn’t benefit existing beings fails to “benefit others” under your view. E.g. banning Agent Orange is not something that “benefits others”. But banning Agent Orange, and lots of other things that benefit future generations, are regarded as benefiting others. This doesn’t depend on the totalist view; it’s largely uncontroversial in philosophy, and it’s commonly assumed in the colloquial sense of benefiting others.
Philosophical sleight of hand would be to deny that we are benefiting others, something that colloquial and common sense views would affirm, just because of a technical philosophical point.
I suggest leaving it up to the other person to decide whether they are benefitted. For example: I have a happy, positive life, so I claim that my parents benefitted me when they caused my existence. So there does exist someone (me, now, in this situation) who claims to be benefitted by the choice of someone else (my parents, 38 years ago), even if in the counterfactual I do not exist. So my parents made a choice for a situation where there is a bit more benefit added to the total benefit. If you disagree, in the sense that you don’t think you were benefitted by your parents when they chose your existence (even when you are as happy as I am), then that means your parents did not create an extra bit of benefit and you were not benefitted. More on this here: https://stijnbruers.wordpress.com/2018/02/24/variable-critical-level-utilitarianism-as-the-solution-to-population-ethics/
Good point. The choice of moral stance (i.e. totalist, person-affecting, “moral uncertaintist”, etc.) is the biggest factor behind any preference ordering over allocations of resources and courses of action. Thus, it is possible that further rigorous study of ethics, if it achieves less uncertainty between the competing views or greater agreement among scholars, could bring very high returns in terms of impact.
I agree it may seem to point toward some “person-affecting views” which many EAs consider to be wrong.
Possibly the aim was to convey that the motivation is altruistic?
The disadvantage of ‘do as much good as possible’ may be that it would associate EA with utilitarianism even more than it already is.
I think of EA as a movement trying to answer the question “how do we change the world for the better most effectively with limited resources?” in a rational way, and act on the answer. This seems to me a tiny bit more open than ‘do as much good as possible’, as it requires just some sort of comparison of world-states, while ‘as much good as possible’ seems to depend on a more complex structure.