Hi Harrison, thanks for the detailed feedback. I take your point and I will try to edit the article to make it less “shocking”, since there seems to be consensus that I went a bit too far. There are a couple of considerations though that I think might be relevant for me to raise. I’m not trying to excuse myself, but I do think they provide more context and might help people understand why I wrote the article in this style.
The only reason I brought up rape in the article at all is that the vegans who opposed my argument for meat offsets explicitly used the “rape argument”: if meat offsets are permissible, then rape offsets are permissible; rape offsets can’t possibly be permissible because rape is too shocking and disgusting; therefore, meat offsets are not permissible. I could have used another example, but the whole point of the rape analogy is that rape is shocking and evokes moral disgust in most of us. If I had used another example, I felt it would be dishonest, and I was afraid it would look like I was evading their argument. Don’t you think this is a reasonable concern? How could I avoid the “rape” example without looking like I’m evading their argument?
Sometimes, when discussing moral philosophy, I find it important to evoke some degree of shock or disgust. Again, I write for the general public, and there are many people in the lay audience who casually espouse relativistic views; but if I press them hard enough with sufficiently gory examples, they agree that certain things are “objectively bad”. I have now learned to avoid gender-based violence in my examples, but I do think “murder” is not strong enough. Would “torture” or “child decapitation” be OK? Or still too much?
I don’t have an official diagnosis, but I’ve been called autistic many times in my life, and after reading about the topic I concluded that people might be on to something. I’m a typical nerdy IT guy who struggled with social skills for most of my youth, and I’ve never been particularly good at reading how people feel, predicting how they’re going to react to things, etc. With time, however, I’ve learned to mask my weirdness by following certain simple algorithms; I now have a very active social life, and I would say I get into conflict less often than the average person. I mention this because I’ve noticed that people often assume I use shocking language because I am callous and insensitive and don’t care about how people feel. The truth is that I do care about how they feel; I just fail to predict how they will feel. Sure, at the end of the day the harm caused might be the same, but I do hope people will see this as a mitigating circumstance, because a person whose heart is in the right place is more likely to improve their behavior in response to feedback.
Another factor I think is tricky is cultural differences. So far, my experience of EA is that its cultural norms are very much set by the US/UK, because that’s where most people are, and it’s only natural that those norms come to dominate. In progressive circles in the US/UK, it seems to have become mainstream to believe that people should be protected from any potential discomfort, that words can be violence, and so on; Jonathan Haidt calls this the coddling of the American mind. I don’t want to argue here that people should be more resilient; I haven’t read enough about this, so I prefer to refrain from judgement and remain agnostic. My point is that in other cultures this phenomenon is not so mainstream. In Romania, for example, people are comparatively callous, and there is some tolerance for this in the culture, even in progressive circles. My girlfriend, for example, read the article and didn’t say anything about it being too graphic or callous. Sure, American culture influences both Brazil (where I’m originally from) and Romania (where I’ve been living for eight years), but I think it’s a bit unfair to expect people from everywhere to flawlessly respect the sensibilities of the Anglosphere without ever making a mistake.
Besides, there is the even more complicated issue of subcultures. I’m into extreme metal and gory horror movies, and people in those communities have a different relationship to violence: we tend to talk about it callously, and the less triggerable you are, the more metal you are. I’ve also been active in the secular humanist movement, where many people identify as “free-speech fundamentalists” and there is more tolerance for “offensive content” than in more mainstream progressive movements. Because of the strong rationalist component of EA, I had always assumed it overlapped a lot with secular humanism, but lately I’m realizing that this overlap is smaller than I assumed.
Again, I’m not saying these things to excuse myself; I appreciate the feedback, and I will adjust my behavior in response to it. At the end of the day, I will always have to adopt one imperfect set of cultural norms or another, so if I want to get more involved in EA, I might as well adopt EA norms. I just felt the need to explain where I’m coming from so that people don’t leave with the impression that I’m a callous person who doesn’t care how others feel. I made a mistake: I failed to predict that my article would be seen as too callous by EAs, and hopefully with this new data point I can recalibrate my algorithms and minimize the chance of making the same mistake in the future. I can’t promise I will never make a mistake again, but I still hope my reputation won’t be forever damaged by one honest mistake.
PS: What is the infamous Robin Hanson post? I’m curious :)