I’m definitely not the best person to give feedback on this, but I’ll just briefly share a few thoughts:
I’ve heard that EA grant makers often have relatively little time to review grant applications. This may or may not be true for EAIF, but supposing it is, even yellow flags like that offset article might cause a reviewer to quickly become pessimistic about providing grants (for some of the following reasons).
I would have recommended not using the example of rape; murder offsets probably would have been a better alternative. I only skimmed the post, but it really didn’t help that towards the beginning you make a point that seems intentionally phrased to be controversial: “[Sometimes rape is permissible… you probably agree deep down. That is, if it is to prevent more rape.]” This ordering (saying “you probably agree” before clarifying “if it were in some twisted trolley problem scenario”) and phrasing (e.g., “deep down…”) are needlessly controversy-inviting. To be honest, to me these kinds of details genuinely do reflect some lack of perspective/room-reading or rhetorical finesse, regardless of whether you ultimately oppose the idea of rape offsets. (It also very much gives me flashbacks to the infamous Robin Hanson post, which really hurt his reputation and reflected a similar lack of perspective…) This might not be such a problem if I were personally evaluating your character, but:
Grant makers are probably justified in being cautious about downside risks, including optics risks. “EA grant makers fund writer of blog that callously discusses ‘rape offsets’” might be a very unfair social media characterization, but fairness doesn’t really matter, and I can’t be confident it won’t get pulled into some broader narrative attacking—however fairly or unfairly—EA overall. (Speaking as someone who’s never analyzed grant applications) I suspect you would have to have a really good case for potential upside to make it worth spending a few extra hours analyzing those optics risks, and in the end there may (or may not?) be plenty of other people to fund instead.
As for your overall blog, I haven’t read it, but I wouldn’t be surprised if it is otherwise good, and I’m glad to see a blog discussing moral issues. But rape is a topic that needs to be treated with a lot of care and caution, and probably should be avoided when it is just being used to make a point separate from rape.
There is a very severe potential downside if many funders think in this manner: it will discourage people from writing about potentially important ideas. I’m strongly in favor of putting more effort and funding into PR (disclaimer that I’ve worked in media relations in the past), but if we refuse to fund people with diverse, potentially provocative takes, that’s not a worthwhile trade-off, imo. I want EA to be capable of supporting an intellectual environment where we can ask about and discuss hard questions publicly without worrying about being excluded as a result. If that means bad-faith journalists have slightly more material to work with, then so be it.
Hi Harrison, thanks for the detailed feedback. I take your point and I will try to edit the article to make it less “shocking”, since there seems to be consensus that I went a bit too far. There are a couple of considerations though that I think might be relevant for me to raise. I’m not trying to excuse myself, but I do think they provide more context and might help people understand why I wrote the article in this style.
The only reason why I brought up rape in the article at all was that the vegans who opposed my argument for meat offsets explicitly used the “rape argument”: If meat offsets are permissible then rape offsets are permissible. Rape offsets can’t possibly be permissible because rape is too shocking and disgusting. Therefore, meat offsets are not permissible. I could have used another example rather than rape, but the whole point of the rape analogy is that rape is shocking and evokes moral disgust in most of us. If I had used another example, I felt it would have been dishonest; I was afraid it would look like I was evading their argument. Don’t you think this is a reasonable concern? How could I avoid the “rape” example without looking like I’m evading their argument?
Sometimes, when discussing moral philosophy, I find it important to evoke some degree of shock or disgust. Again, I write for the general public, and there are many people in the lay audience who casually espouse relativistic views, but if I press them hard enough with sufficiently gory examples, they agree that certain things are “objectively bad”. But I guess I have now learned to avoid gender-based violence in my examples. I do think “murder” is not strong enough, though. Would “torture” or “child decapitation” be OK? Or still too much?
I don’t have an official diagnosis, but I’ve been called autistic many times in my life, and after reading about the topic I concluded that people might be on to something. I’m a typical nerdy IT guy who struggled with social skills for most of my youth, and I’ve never been particularly good at reading how people feel, predicting how they’re going to react to certain things, etc. With time, however, I’ve learned how to mask my weirdness by following certain simple algorithms, and I now have a very active social life and would say I get into conflict less often than the average person. I’m just saying this because I’ve noticed that people often assume I use shocking language because I am callous and insensitive and don’t care about how people feel, but the truth is that I do care about how they feel; I just fail to predict how they will feel. Sure, at the end of the day the harm caused might be the same, but I do hope people will see this as a mitigating circumstance, because a person whose heart is in the right place is more likely to improve their behavior in the future in response to feedback.
Another factor that I think is tricky is cultural differences. So far my experience of EA is that the cultural norms are very much set by the US/UK, because that is where most people are, and it’s only natural that those norms come to dominate. In progressive circles in the US/UK it seems to have become mainstream to believe that people should be protected from any potential discomfort, that words can be violence, etc. Jonathan Haidt calls this the coddling of the American mind. I don’t want to argue here that people should be more resilient; I haven’t read enough about this, so I prefer to refrain from judgement and remain agnostic. But my point is that in other cultures this phenomenon is not so mainstream. In Romania, for example, people are comparatively callous, and there is some tolerance for this in the culture, even in progressive circles. My girlfriend, for example, read the article and didn’t say anything about it being too graphic or callous. Sure, American culture influences both Brazil (where I’m originally from) and Romania (where I’ve been living for 8 years), but I think it’s a bit unfair to expect people from everywhere to flawlessly respect the sensibilities of the Anglosphere without ever making any mistake.
Besides, there is the even more complicated issue of subcultures. I’m into extreme metal and gory horror movies, and people in these communities have a different relationship to violence. We tend to talk about it callously, and the less triggerable you are, the more metal you are. I’ve also been active in the secular humanist movement, where many people identify as “free-speech fundamentalists” and there is more tolerance for “offensive content” than in other, more mainstream progressive movements. Because of the strong rationalist component of EA, I’ve always assumed it overlapped a lot with secular humanism, but lately I’m realizing that this overlap is smaller than I assumed.
Again, I’m not saying these things to excuse myself; I appreciate the feedback and will adjust my behavior in response to it. At the end of the day I will always have to adopt one imperfect set of cultural norms or another, so if I want to get more involved in EA I might as well adopt EA norms. I just felt the need to explain where I’m coming from so that people don’t leave with the impression that I’m a callous person who doesn’t care about how others feel. I made a mistake: I failed to predict that my article would be seen as too callous by EAs, and hopefully with this new data point I can recalibrate my algorithms and minimize the chances that I will make the same mistake in the future. I cannot promise I will never make a mistake again, but I still hope my reputation won’t be forever damaged by one honest mistake.
PS: What is the infamous Robin Hanson post? I’m curious :)