What common belief in EA do you most strongly disagree with?

That personal dietary choices are important on consequentialist effectiveness grounds.
I actually think there are lots of legitimate and powerful reasons for EAs to consider veg*nism, such as:
A ~deontological belief that it’s wrong to eat animals
A desire for lifestyle choices that help you connect with what you care about
Signalling caring
A desire for shared culture with people you share values with
… but it feels to me almost intellectually dishonest to have it be part of an answer to someone saying “OK, I’m bought into the idea that I should really go after what’s important, what do I do now?”
(I’m not vegetarian, although I do try to only consume animals I think have had reasonable welfare levels, for reasons in the vicinity of the first two listed above. I still have some visceral unease about the idea of becoming vegetarian that is like “but this might be mistaken for being taken in by intellectually dishonest arguments”.)
I almost feel cheeky responding to this as you’ve essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!
I’d say that something doesn’t have to be the most effective thing to do for it to be worth doing, even if you’re an EA. If something is a good thing, and provided it doesn’t really have an opportunity cost, then it seems to me that a consequentialist EA should do it regardless of how good it is.
To illustrate my point, one can say it’s a good thing to donate to a seeing eye dog charity. In a way it is, but an EA would say it isn’t, because there is an opportunity cost: you could instead donate to, say, the Against Malaria Foundation, which is more effective. So donating to a seeing eye dog charity isn’t really a good thing to do.
Choosing to follow a ve*an diet doesn’t have an opportunity cost (usually). You have to eat, and you’re just choosing to eat something different. It doesn’t stop you doing something else. Therefore even if it realises a small benefit it seems worth it (and for the record I don’t think the benefit is small).
Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals. From a utilitarian view I’d imagine this is unlikely to be true. I happen to think avoiding the suffering of even one animal is significant, similarly to the fact that we think it would be highly significant to save just one human life. And following a vegan diet for a while will benefit way more than just one animal anyway.
I almost feel cheeky responding to this as you’ve essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!
That’s fine! :)
In turn, an apology: my controversial view has baited you into response, and I’m now going to take your response as kind-of-volunteering for me to be critical. So I’m going to try and exhibit how it seems mistaken to me, and I’m going (in part) to use mockery as a rhetorical means to achieve this. I think this would usually be a violation of discourse norms, but here: the meta-level point is to try and exhibit more clearly what this controversial view I hold is and why; the thing I object to is a style of argument more than a conclusion; I think it’s helpful for the exhibition to be able to draw attention to features of a specific instance, and you’re providing what-seems-like-implicit-permission for me to do that. Sorry!
I’d say that something doesn’t have to be the most effective thing to do for it to be worth doing, even if you’re an EA.
To be clear: I strongly agree with this, and this was a big part of what I was trying say above.
So donating to a seeing eye dog charity isn’t really a good thing to do.
This is non-central, but FWIW I disagree with this. Donating to the guide dog charity usually is a good thing to do (relative to important social norms where people have property rights over their money), it’s just that it turns out there are fairly accessible actions which are quite a lot better.
Choosing to follow a ve*an diet doesn’t have an opportunity cost (usually). You have to eat, and you’re just choosing to eat something different.
This, I’m afraid, is the type of statement that really bugs me. It’s trying to collapse a complex issue onto simple dimensions, draw a simple conclusion there, and project it back to the original complex world. But in doing so it’s thrown common-sense out of the window!
If I believed that choosing to follow a ve*an diet usually didn’t have an opportunity cost, I would expect to see:
People usually willing to go ve*an for a year for some small material gain
In theory, if there were no opportunity cost, even something trivial like $10 should be enough, but I think many non-ve*ans would be unwilling to do this even for $1000
[As an aside, I think taxes on meat would probably be a good policy that might well be accessible]
Almost everyone who goes ve*an for ethical reasons keeping it up
In fact some significant proportion of people stop
Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals.
I certainly don’t claim this in any utilitarian comparison of welfare. But now the argument seems almost precisely analogous to:
“You could help the poorest people in the world a tremendous amount for the cost of a cup of coffee. Since your welfare shouldn’t outweigh theirs, you should forgo that cup of coffee, and every other small luxury in your life, to give more to them.”
I think EA correctly rejects this argument, and that it’s correct to reject its analogue as well. (I think the argument is stronger for ve*anism than giving to the poor instead of buying coffee; but I also think that there are better giving opportunities than giving directly to the poor, and that when you work it through the coffee argument ends up being stronger than the corresponding one for ve*anism.)
---
Again, I’m not claiming that EAs shouldn’t be ve*an. I think it’s a morally virtuous thing to do!
But I don’t think EAs have a monopoly on virtue. I think the EA schtick is more like “we’ll think things through really carefully and tell you what the most efficient ways to do good are”. And so I think that if it’s presented as “you want to be an EA now? great! how about ve*anism?” then the implicature is that this is a bigger deal than, say, moving from giving away 7% of your income to giving away 8%, and that this is badly misleading.
Notes:
There may be some people for whom the opportunity cost is trivial
I think there are probably quite a few people for whom the opportunity cost is actually negative—i.e. it’s overall easier for them to be ve*an than not
I would feel very good about encouragement to check whether people fall into one of these buckets, as in cases where they do then dietary change may be a particularly efficient way to do good
I’d also feel very good about moral exhortation to be ve*an that was explicit that it wasn’t grounded in EA thinking, like:
“Many EAs try to be morally serious in all aspects of their lives, beyond just trying to optimise for the most good achievable. This leads us to ve*anism. You might want to consider it.”
I’m not 100% sure but we may be defining opportunity cost differently. I’m drawing a distinction between opportunity cost and personal cost. Opportunity cost relates to the fact that doing something may inhibit you from doing something else that is more effective. Even if going vegan didn’t have any opportunity cost (which is what I’m arguing in most cases), people may still not want to do it due to high perceived personal cost (e.g. thinking vegan food isn’t tasty). I’m not claiming there is no personal cost and that is indeed why people don’t go / stay vegan—although I do think personal costs are unfortunately overblown.
Without addressing all of your points in detail, I think a useful thought experiment might be to imagine a world where we are eating humans, not animals. E.g. say there are mentally challenged humans with intelligence and capacity to suffer comparable to non-human animals, and we farm them in poor conditions and eat them, causing their suffering. I’d imagine most people would judge this as morally unacceptable and go vegan on consequentialist grounds (although perhaps not, and it would actually be on deontological grounds?). If you would go vegan in the thought experiment but not in the real world then you’re probably speciesist to some degree, which I ultimately don’t think can be defended.
I think the EA schtick is more like “we’ll think things through really carefully and tell you what the most efficient ways to do good are”. And so I think that if it’s presented as “you want to be an EA now? great! how about ve*anism?”
EA is sometimes described as doing the most good (most common definition) or I suppose is sometimes described as finding the most effective ways to do good. These can be construed as two different things. I would say under the first definition that being vegan naturally becomes part of the conversation for the reasons I have mentioned (little to no opportunity cost).
Also, we may be fundamentally disagreeing on the scale of the benefits on consequentialist grounds of going vegan as well—I think they are quite considerable. Indeed “signalling caring” as you put it can then convince others to consider veganism in which case you can get a snowball of positive effects. But that’s a whole other discussion.
P.S. I agree we can probably improve the way veganism is messaged in EA and it’s possible I am part of the problem!
I’ve thought a bit about this for personal reasons, and I found Scott Alexander’s take on it to be enlightening.
I see a tension between the following two arguments that I find plausible:
Some people run into health issues due to a vegan diet despite correct supplementation. In most cases it’s probably because of incorrect or absent supplementation, but probably not in all. This could mean that a highly productive EA doing highly important work may cease to be as productive with a small probability. Since they’ve probably been doing extremely valuable work, this decrease in output may be worse than the suffering they would’ve inflicted if they had [eaten some beef and had some milk](https://impartial-priorities.org/direct-suffering-caused-by-various-animal-foods.html). So they should at least eat a bit of beef and drink a bit of milk to reduce that risk. (These foods may increase other risks – but let’s assume for the moment that the person can make that tradeoff correctly for themselves.)
There is currently in our society a strong moral norm against stealing. We want to live in a society that has a strong norm against stealing. So whenever we steal – be it to donate the money to a place where it has much greater marginal utility than with its owner – we erode, in expectation, the norm against stealing a bit. People have to invest more into locks, safes, guards, and fences. People can’t just offer couchsurfing anymore. This increase in anomie (roughly, lack of trust and cohesion) may be small in expectation but has a vast expected societal effect. Hence we should be very careful about eroding valuable societal norms, and, conversely, we should also take care to foster new valuable societal norms or at least not stand in the way of them emerging.
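The first of these two arguments is, underneath, an expected-value comparison. Here is a minimal sketch of that comparison, where every number is a hypothetical placeholder invented purely for illustration, not an estimate anyone in this thread has made:

```python
# Toy expected-value version of the first argument above.
# ALL numbers are made-up placeholders, not estimates from this discussion.

p_health_issue = 0.02        # hypothetical: probability of a supplementation-resistant health problem
productivity_loss = 100_000  # hypothetical: donation-equivalent value of the lost output
harm_from_animal_foods = 500 # hypothetical: donation-equivalent suffering caused by the beef and milk

# Expected cost of staying fully vegan, under these placeholder assumptions
expected_loss_if_fully_vegan = p_health_issue * productivity_loss

# The argument goes through only if the expected productivity loss
# exceeds the harm done by eating a bit of beef and drinking a bit of milk.
argument_goes_through = expected_loss_if_fully_vegan > harm_from_animal_foods
```

Whether the inequality holds depends entirely on those placeholder estimates, which is part of why the two arguments can pull in different directions for different people.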
I see a bit of a Laffer curve here (like an upside-down U) where upholding societal rules that are completely unheard of has little effect, and violating societal rules that are extremely well established has little effect again (except that you go to prison). The middle section is much more interesting, and this is where I generally advise to tread softly. (But I’m also against stealing.)
The way I resolve this tension for myself is to assess whether, in my immediate environment – among the people who are most likely to be directly influenced by me – a norm is potentially about to emerge. If that is the case, and I approve of the norm, I try to always uphold that norm to at least an above-average level.
Well, and then there are a few more random caveats:
As the norm not to harm other animals for food becomes stronger, it’ll be less socially awkward for people (outside vegan circles) to eat vegan food. Social effects were (last time I checked) still the second most common reason for vegan recidivism.
As the norm not to harm other animals for food becomes stronger, more effort will be put into providing properly fortified food to make supplementation automatic.
Eroding a budding social norm because it comes at a cost to one’s own goals seems like the sort of freeriding that I think the EA community needs to be very careful about. In some cases the conflict is only due to lacking idealization of preferences or only between instrumental rather than terminal goals or the others would defect against us in any case, but we don’t know any of this to be the case here. The first comes down to unanswered questions of population ethics, the second to the exact tradeoffs between animal suffering and health risks for a particular person, and the third to how likely animal rights activists are to badmouth AI safety, priorities research, etc. – probably rarely.
Being vegan among EAs, young, educated people, and other disproportionately antispeciesist groups may be more important than being vegan in a community of hunters.
A possible, unusual conclusion to draw from this is to be a “private carnivore”: you only eat vegan food in public, and when people ask you whether you’re vegan, you tell them that you think eating meat is morally bad, a bad norm, and shameful, and so you only do it in private and as rarely as possible. No lies or pretense.
There’s also the option of moral offsetting, which I find very appealing (despite these criticisms – I think I somewhat disagree with my five-year-old comment there now), but it doesn’t seem to quite address the core issue here.
Another argument you mentioned to me at an EAGx was something along the lines that it’ll be harder to attract top talent to field X (say, AI safety) if they not only have to subscribe to X being super important but also have to be vegan. Friends of mine solve that by keeping those things separate. Yes, the catering may be vegan, but otherwise nothing indicates that there’s any need for them to be vegan themselves. (That conversation can happen, if at all, in a personal context separate from any ties to field X.)
Thanks for this interesting discussion; for others who read this and were interested, I thought I’d link some previous EA discussions on this topic in case it’s helpful :)
One brief addition: I think the kind of conscientious omnivorism you describe (‘I do try to only consume animals I think have had reasonable welfare levels’) might have similar opportunity costs to veg*nism, and there’s some not-very-conclusive psychological literature suggesting that, since it is a finer-grained rule than ‘eat no animals’, it might even be harder to follow.
Obviously, this depends very much on what we mean by opportunity cost, and it also depends on how one goes about only trying to eat happy animals. I’m not sure what the best answer to either of those questions is.
Would any of you who downvoted the comment above be willing to state why?

I didn’t downvote your comment, but was close to doing so. (I generally downvote few comments, maybe in some sense “too few”.)
The reason why I considered downvoting: you claim that an argument implies a view widely seen as morally repugnant, and in addition (i.e. that claim alone is not sufficient):
You are not as clear as I think you could have been that you don’t actually ascribe the morally repugnant view to Owen, as opposed to mentioning it as a reductio ad absurdum precisely because you don’t think anyone accepts the morally repugnant conclusion.
You use more charged language than is necessary to make your point. E.g. instead of saying “repugnant” you could have said something like “which presumably no one is willing to accept”. Similarly, it’s not relevant whether the perpetrator in your claim is a pedophile. (But it’s good to avoid even the faintest suggestion that someone in this debate is being claimed to be a pedophile.)
I’m not able to follow your reasoning, and suspect you may have misunderstood the comment you’re responding to. Most significantly, the above comment doesn’t argue that anything is morally okay, simpliciter – it just argues that a certain kind of moral objection, namely an appeal to bad consequences, doesn’t work for certain actions. It even explicitly lists other moral reasons against these actions. (Granted, it does suggest that these reasons aren’t so strong that the action is clearly impermissible in all circumstances.) But even setting this aside, I’m not sure why you think the above comment has the implication you think it has.
I don’t know for sure why anyone downvoted, but moderately strongly suspect they had similar reasons.
Here’s a version of your point which is still far from optimal on the above criteria (e.g. I’d probably have avoided the child abuse example altogether) but which I suspect wouldn’t have been downvoted:
I think your argument proves too much. It implies, for instance, that it’s not clearly impermissible to harm humans in similar ways in which non-human animals are being harmed because of humans slaughtering them for food. [Say 1-2 sentences about why you think this.] As a particularly drastic example, consider that virtually everyone agrees that sexual abuse of children is not permissible under any circumstances. Your argument seems to imply that there would only be a much weaker moral prohibition against child abuse. Clearly we cannot accept this conclusion. So there must be something wrong with your argument.
I strong-upvoted this because it’s super clear and detailed and the kind of thing I want to see more of on the Forum. But just to avoid confusion: I haven’t actually read the original comment, so I don’t know if this analysis is right.
I didn’t downvote, but I also didn’t even understand whether you were agreeing with me or disagreeing with me (and strongly suspected that “would have to” was an error in either case).