I can see 1-3 being problems to some extent (and I don’t think Kelly would disagree)… but “overrepresentation of vegetarians and vegans”?? You might as well complain about an overrepresentation of people who donate to charity.
Lila
Why I left EA
The Bittersweetness of Replaceability
The big problem with how we do outreach
During freshman year of college (’09-’10), I decided to donate some money to charity on a whim. After some reflection on how much to donate, I decided that the morally correct option was to live on as little money as possible and donate the remainder. (Extremism is very attractive to college freshmen.) I lived as ascetically as I could and gave away the possessions and money that I thought I didn’t need. I looked like a homeless person, with my feet sticking through the ends of my falling-apart sneakers, and I sewed patches over the holes in my clothes rather than buy new stuff. (In retrospect, these things weren’t worth the time and reputation costs. Basic clothes and shoes are cheap.) I drank 7 cups of tea a day to avoid hunger and also would scrounge whatever half-eaten food I could find around my dorm. (Some of the “candy” I ate was actually psychedelic drugs, I think.)
By coincidence, a few months after I began this endeavor, I met Jason (Gaverick) Matheny. My mom was working as an assistant for him, though neither of my parents is an EA. He came over for dinner one night, and we talked about our shared interest in altruism. (The term “effective altruism” didn’t exist at the time.)
Jason has been a valuable mentor for me over the years. I had the altruism part down, but he’s helped me think a lot more about effectiveness. He eventually introduced me to 80K, and from there I connected with the rest of the EA community.
Thanks Kelly. I agree that this is a problem in EA in ways that people don’t realize. In retrospect, I feel stupid for not realizing how casual discussion of IQ and eugenics would be hurtful. Same thing with applying that classic EA skepticism to people’s lived experiences.
Culture isn’t the main reason I left EA, but it’s #3. And I think it contributes to the top two reasons I felt alienated: the mockery of moral views that deviate from strict utilitarianism, and what I believed were naive, overconfident tactics.
- 21 Nov 2017 22:13 UTC; −1 points; comment on “An Exploration of Sexual Violence Reduction for Effective Altruism Potential”
Scott Alexander writes about the motte and bailey doctrine: http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/
Basically, people will retreat to obvious platitudes (the motte) when defending their position, when in fact they’re actually trying to promote more controversial ideas (the bailey). The motte for EA is “doing the most good” and the bailey is, well, everything else we promote. Ideally the place to launch criticism is the bailey. Unfortunately, a lot of the criticism has been directed to the motte, which leads to bizarre statements like “well maybe suffering isn’t bad, we don’t want everyone to be happy all the time” or “it’s impossible to know which things are better than others”. This may be part of the reason much of the criticism has fallen flat so far.
I agree with this for the most part, but let’s not exclude people from EA who, like me, are low-IQ and high-libido.
- 14 Nov 2017 9:55 UTC; 4 points; comment on “An Exploration of Sexual Violence Reduction for Effective Altruism Potential”
You’re free to offer your own thoughts on the matter, but you seemed to be trying to engage me in a personal debate, which I have no interest in doing. This isn’t a clickbait title, I’m not concern trolling, I really have left the EA community. I don’t know of any other people who have changed their mind about EA like this, so I thought my story might be of some interest to people. And hey, maybe a few of y’all were wondering where I went.
That’s a good point, though my main reason for being wary of EV is related to rejecting utilitarianism. I don’t think that quantitative, systematic ways of thinking are necessarily well-suited to thinking about morality, any more than they’d be suited to thinking about aesthetics. Even in biology (my field), a priori first-principles approaches can be misleading. Biology is too squishy and context-dependent. And moral psychology is probably even squishier.
EV is one tool in our moral toolkit. I find it most insightful when comparing fairly similar actions, such as public health interventions. It’s sometimes useful when thinking about careers. But I used to feel compelled to pursue careers that I hated and probably wouldn’t be good at, just on the off chance it would work. Now I see morality as being more closely tied to what I find meaning in (again, anti-realism). And I don’t find meaning in saving a trillion EV lives or whatever.
Moral anti-realists don’t have to bite bullets
I’m really really skeptical of the claim that SENS can give every person on earth 30 additional years of healthy life for a billion dollars. Billions are spent annually on cancer research, and we still haven’t cured cancer.
There are two types of benefits of SENS’s research. First is the more mundane disease reduction stuff, which is a valuable way to promote quality of life, as GiveWell points out. However, there’s no need to focus on SENS specifically in finding cures for diseases.
Second is the life-extension stuff. Another way of understanding it is that life extension increases the number of people on earth at a given time, all else being equal. But the more obvious way to increase the number of people on earth is to promote births. Of course, there are transition costs to death: people really don’t like dying. On the other hand, there may be diminishing returns to life, and people might prefer to improve their chances of being born, whatever that means. I am floating in the ether and am offered a tradeoff: I can increase my probability of existing, but this will decrease the length of my existence if I receive an existence. I’m not sure what probability-length tradeoff I’d choose as optimal.
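Under a pure expected-value view, the ether tradeoff reduces to comparing the product of probability and lifespan. A minimal sketch (the numbers are made up for illustration):

```python
# Hypothetical probability-length tradeoffs for existence, compared
# under a pure expected-value view (the view being questioned here).
def expected_years(p_exist, years_if_exist):
    """Expected years of life, given a chance of existing at all."""
    return p_exist * years_if_exist

# Option A: higher chance of being born, shorter life.
a = expected_years(0.02, 80)
# Option B: lower chance of being born, much longer life.
b = expected_years(0.01, 160)

# Pure EV is exactly indifferent between A and B (both ~1.6 years);
# the question is whether that indifference tracks what anyone
# floating in the ether would actually choose.
print(a, b)
```

The point of the sketch is that EV collapses the two options into one number, which is precisely the move the comment is unsure about.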
- 8 Jan 2015 19:50 UTC; 7 points; comment on “Tentative Thoughts on the SENS Foundation”
Announcement: crowdsourcing argumentation at IARPA
The p-value critique doesn’t apply to many scientific fields. As far as I can tell, it mostly applies to social science and maybe epidemiological research. In basic biological research, a paper wouldn’t be published in a good journal on the basis of a single p-value. In fact, many papers don’t have any p-values. When p-values are presented, they’re often so low (10^-15) that they’re unnecessary confirmations of a clearly visible effect. (Silly, in my opinion.) Most papers rely on many experiments, which ideally provide multiple lines of evidence. It’s also common to propose a mechanism that’s plausible given the existing literature. In some cases, you can see the fingerprints of skeptical reviewers. For example, when I see “to exclude the possibility that”, I assume that this experiment was added later at the demand of a reviewer. Published biology is often wrong, but for subtler reasons.
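The point about astronomically small p-values can be made concrete: once an effect is huge relative to its standard error, the p-value is a formality. A minimal sketch using the normal approximation (the z values are made up for illustration):

```python
import math

# Two-sided p-value for a z statistic, via the normal approximation.
def two_sided_p(z):
    return math.erfc(abs(z) / math.sqrt(2))

# A modest effect: z = 2 barely clears the conventional 0.05 threshold,
# so here the p-value is actually doing some work.
print(two_sided_p(2.0))  # ~0.046

# A "clearly visible" effect: z = 8 gives p on the order of 1e-15,
# an unnecessary confirmation of something obvious from the raw data.
print(two_sided_p(8.0))  # ~1.2e-15
```

This is the sense in which reporting p ≈ 10^-15 is silly: the number changes no one’s mind about whether the effect is real.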
So I think that if you identify with or against some group (e.g. ‘anti-SJWs’), then anything someone says that pattern-matches to something that group would say triggers a reflexive negative reaction. This manifests in various ways: you’re inclined to read far more into the person’s statements than what they’re actually saying, or you set an overly demanding bar for them to “prove” that what they’re saying is correct. And I think all of that is pretty bad for discourse.
This used to be me… It wasn’t so much my beliefs that changed (I’m not a leftist/feminist/etc). It was more a change in attitude, related to why I rejected ultra-strict interpretations of utilitarianism. Not becoming more agreeable or less opinionated… just not feeling like I was on a life-or-death mission. Anyway, happy to discuss these things privately, including with people who are still on the anti-SJW mission.
The OP itself is confusing, but I agree that EA is very focused on a narrow interpretation of utilitarianism. I used to think that EA should change this, but then I realized that I was fighting a losing battle. There’s nothing inherently valuable about the name “effective altruism”. It’s whatever people define it to be. When I stopped thinking of myself as part of this community, it was a great weight off my shoulders.
The thing that rubs me the wrong way is that it feels like a motte and bailey. “Effective altruism” is vague and appears self-evidently good, but in reality EAs are pushing for a very specific agenda and have very specific values. It would be better if they were more up-front about this.
Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree. Some strict definitions of utilitarianism would require one to equally value animal and human suffering, discounting for some metric of consciousness (though I actually roughly agree with Brian Tomasik that calling something conscious is a value judgment, not an empirical claim). But many EAs aren’t strict utilitarians.
EAs can have strong opinions about how the message of EA should be presented. For example, I think EA should discourage valuing the life of an American 1000x that of a foreigner, or valuing animal suffering at 0. But nitpicking over subjective values seems counterproductive.
One thing I liked about this post is that it was written in English, instead of math symbols. I find it extremely hard to read a series of equations without someone explaining them verbally. Overall I thought the clarity was fairly good.
False precision much? This seems like an inappropriately specific number: it makes it sound like you have concrete evidence, when in reality you’re just multiplying the number of men in EA by 6%. I hope that this number won’t start getting spread around.
A more tractable approach to reducing the trauma from sexual violence might be to change perceptions of sexuality. Many people believe that it’s important for women to be sexually “pure”, which is one reason that female victims experience trauma.
Feminists, to their credit, reject such notions, but if anything they interpret sexual violence even more symbolically—as an attempt to have power over women and “violate” them, whatever that means. According to feminist theory, rape is never about sexual gratification. However, there isn’t much evidence for this interpretation. Interviews with convicted sex offenders reveal a mix of motivations. In addition, there does seem to be a relationship between sexual attractiveness and probability of rape. For example, one study looked at female robbery victims, using age as a proxy for attractiveness. (For obvious reasons, we can’t actually study the attractiveness of victims.) Middle-aged and older women were far less likely to be raped by their assailant.
Setting aside the empirical question of whether rape is actually about destroying the victim’s autonomy, it seems unhelpful to interpret negative events in one’s life symbolically, personalize them, or cast them as part of a larger conspiracy. Cognitive behavioral therapy and other techniques may help victims overcome irrational negative beliefs.