Thanks for this post. There’s a lot I agree with here. I’m in especially vigorous agreement with your points regarding hero worship and seeing newcomers as a source of fresh ideas/arguments instead of condescending to them.
There are also some points I disagree with. And in the spirit of not considering any arguments above criticism, and disagreement being critical for finding the best answers, I hope you won’t mind if I lay my disagreements out. To save time, I’ll focus on the differences between your view and mine. So if I don’t mention a point you made, you can default to assuming I agree with it.
First, I’m broadly skeptical of the social psychology research you cite. Whenever I read about a study that claims women are more analytical than men, or women are better leaders than men, I ask myself whether I would have heard about it if the experiment had found the opposite result.
I recommend this blog post on the lack of ideological diversity in social psychology. Social psychologists are overwhelmingly liberal, and many openly admit to discriminating against conservatives in hiring. Here is a good post by a Mexican social psychologist that discusses how this plays out. There’s also the issue of publication bias at the journal level. I know someone who served on the selection committee of a (minor & unimportant, so perhaps not representative) psychology journal. The committee had an explicit philosophy of only publishing papers they liked, and espousing “problematic” views was a strike against a paper. Anyway, I think to some degree the field functions as a liberal echo chamber on controversial issues.
There’s really an entire can of worms here—social psychology is currently experiencing a major reproducibility crisis—but I don’t want to get too deep into it, because to defend my position fully, I’d want to share evidence for positions that make people uncomfortable. Suffice it to say that there’s a third layer of publication bias at the level of your Facebook feed, and I could show you a different set of research-backed thinkpieces that point to different conclusions. (Suggestion: if you wouldn’t want someone on the EA Forum to make arguments for the position not-X, maybe avoid making arguments for the position X. Otherwise you put commenters in an impossible bind.)
But for me this point is really the elephant in the room:
some people in broader society now respond to correctable offenses with a mob mentality and too much readiness for ostracization, but just because some people have swung too far past the mark doesn’t mean we should default to a status quo that falls so short of it.
I would like to see a much deeper examination here. Insofar as I feel resistant to diversity efforts, this feels like most of what I’m trying to resist. If I were confident that pro-diversity people in EA wouldn’t spiral towards this, I’d be much more supportive. Relevant fable.
All else equal, increased diversity sounds great, but my issue is I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity. Take a statement like this one:
Some of the most talented and resolute people in this community are here because they are deeply emotionally compelled to help others as much as possible, and we’re currently missing out on many such people by being so cold and calculating. There are ways to be warm and calculating! I can think of a few people in the community who manage this well.
Being warm and calculating sounds great, but what if there’s actually a tradeoff here? Just taking myself as an example, I know that as I’ve become aware of how much suffering exists in the grand scheme of things, I’ve begun to worry less about random homeless people I see and stuff like that. Even if there’s some hack I can use to empathize with homeless people while retaining a global perspective, that hack would require effort on my part—effort I could put towards goals that seem more important.
this particular individual — who is probably a troll in general — was banned from the groups where he repeatedly and unrelentingly said such things, though it’s concerning there was any question about whether this was acceptable behavior.
Again, I think there’s a real tradeoff between “free speech” and sensitivity. I view the moderation of online communities as an unsolved problem. I think we benefit from navigating moderation tradeoffs thoughtfully rather than reactively.
Reminding people off the forum to upvote this post, in order to deal with possible hostility, is also a minor red flag from my perspective. This resembles something Gleb Tsipursky once did.
None of this seems very bad in the grand scheme of things, especially not compared to what I’ve seen from other champions of diversity—I just thought it’d be useful to give concrete examples.
Anyway, here are some ideas of mine, if anyone cares:
Phrase guidelines as neutrally as possible, e.g. “don’t be a jerk” instead of “don’t be a sexist”. The nice thing about “don’t be a jerk” is that it admits the possibility that someone could violate the guideline by e.g. loudly calling out a minor instance of sexism in a way that generates a lot of drama and does more harm than good. Rules should exist to serve everyone, and they should be made difficult to weaponize. If most agree your rules are legitimate, that also makes them easier to enforce.
Team-building activities, icebreakers, group singalongs, synchronous movement, sports/group exercise, and so on. The ideal activity is easy for anyone to do and creates a shared EA tribal identity just strong enough to supersede the race/gender/etc. identities we have by default. Kinda like how students at the same university will all cheer for the same sports team.
Following the example of the animal-focused EAs: Work towards achieving critical mass of underrepresented groups. Especially if you can saturate particular venues (e.g. a specific EA meetup group). I know that as a white male, I sometimes get uncomfortable in situations where I am the only white person or the only man in a group, even though I know perfectly well that no one is discriminating against me. I think it’s a natural response to have when you’re in the minority, so in a certain sense there’s just a chicken-and-egg problem. Furthermore, injecting high-caliber underrepresented people into EA will help dismantle stereotypes and increase the number of one-on-one conversations people have, which I think are critical for change.
Take a proactive, rather than reactive, approach to helping EA men in their interactions with women. Again, I think having more women is playing a big role for animal-focused EAs. More women means the average man has more female friends, better understands how women think, and empathizes with the situations women encounter more readily. In this podcast, Christine Peterson discusses the value of finding a life partner for productivity and mental health. In the same way that CFAR makes EAs more productive through lifehacking, I could imagine someone working covertly to make EAs more productive through solving their dating problems.
Invite the best thinkers who have heterodox views on diversity to attend “diversity in EA” events, in order to get a diverse perspective on diversity and stay aware of tradeoffs. Understand their views in enough depth to market diversity initiatives to the movement at large without getting written off.
When hiring a Diversity & Inclusion Officer, find someone who’s good at managing tradeoffs rather than the person who’s most passionate about the role.
Again, I appreciate the effort you put into this post, and I support you working towards these goals in a thoughtful way. Also, I welcome PMs from you or anyone else reading this comment—I spent several hours on it, but I’m sure there is stuff I could have put better and I’d love to get feedback.
All else equal, increased diversity sounds great, but my issue is I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity.
It’s not unheard of, but it seems more common than it is because only the movements and initiatives which go too far merit headlines and attention. The average government agency, F500 company, or similar organization piles on all kinds of diversity policies without turning into the Nightmare on Social Justice Street.
The pattern I see is that “organizations” (such as government agencies or Fortune 500 companies) usually turn out OK, whereas “movements” or “communities” (e.g. the atheism movement, or the open source community) often turn out poorly.
Hm, that’s a good point. I can’t come up with a solid counterexample off the top of my head.
An explanation of what you mean by “turn out OK” would be helpful. For instance, do movements that err more towards social justice fare worse than those that err away from it (or than those that sit at the status quo)?
Whether that’s the case for the atheism movement or the open source community is a heavy question that merits more explanation.
Actually, I would think that any overshooting you see in these communities is a reaction to how status-quo (or worse) both of those communities are. Note for instance that when women are outside collaborators on a project (but not when they are insiders), their open-source contributions are more likely to be accepted than men’s when their gender is not known, yet less likely to be accepted than men’s when their gender is known.
The Atheism Plus split was pretty bad. They were a group that wanted all atheists to also be involved in social justice. Naturally many weren’t happy with this takeover of the movement and pushed back. The Atheism Plus side argued that this was due to misogyny, etc., ignoring the fact that some people just wanted to be atheists and do atheist stuff and not get involved in politics. The end result: Atheism Plus was widely rejected, many social-justice-leaning atheists left the movement, atheism was widely defamed, and the remaining atheists were not particularly open to social justice.
I don’t know very much about open source, but I’ve heard that there have been some pretty vicious/brutal political fights over codes of conduct, etc.
Came to say this as well. See, for example: https://www.reddit.com/r/atheism/comments/2ygiwh/so_why_did_atheism_plus_fail/
The atheists even started to disinvite their intellectual founders, e.g. Richard Dawkins. Will EA eventually go down the same path—will they end up disinviting e.g. Bostrom for not being a sufficiently zealous social justice advocate?
All I’m saying is that there is a precedent here. If SJW-flavored EA ends up going down this path, please don’t say you were not warned.
People nominally within EA have already called for us to disavow or not affiliate with Peter Singer so this seems less hypothetical than one might think.
‘Yvain’ gives a good description of a process along these lines within his comment here (which also contains lots of points which pre-emptively undermine claims within this post).
I entirely appreciate the concern of going too far. Let’s just be careful not to assume that risks only come with action—the opposite path is an awful one too, and with inaction we risk moving further down it.
Kelly, I don’t think the study you cite is good or compelling evidence of the conclusion you’re stating. See Scott’s comments on it for the reasons why. (edited because the original link didn’t work)
Thanks, clarified.
Even after clarification, your sentence is misleading. The true thing you could say is “Among outsiders to projects, women are more likely to have their contributions accepted than men. Both men and women are less likely to have their contributions accepted when their genders are revealed; the measured effect differed by about a percentage point between the genders, and may or may not be statistically significant. There are also major differences between the contribution patterns of men and women.”
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.
This is similar to an issue that’s going on in another thread, where people feel you’re cherrypicking results rather than sampling randomly in a way that will paint an accurate picture. Perhaps this dialogue can help to explain the concerns that others have expressed:
Person One: Here are 5 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.
Person Two: Actually, if you do a comprehensive survey of the literature, you’ll find 3 studies showing that coffee causes cancer, 17 showing no effect, and 3 showing that coffee prevents cancer. On balance there’s no stronger evidence that coffee causes cancer than that it prevents it, and in fact it probably has no effect.
Person One: Thanks for the correction! [Edits post to say: “Here are 3 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.”]
Person Two: I mean… that’s technically true, but I don’t feel the problem is solved.
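To make Person Two’s complaint concrete, here’s a minimal sketch in Python, using the invented study counts from the dialogue, of why a balanced split of significant results is exactly what a null effect predicts:

```python
# Sign test on the dialogue's invented numbers: of 23 studies, 6 reached a
# non-null conclusion -- 3 "coffee causes cancer", 3 "coffee prevents cancer".
# A real directional effect should pile significant results on one side;
# a 3-3 split gives no evidence of direction at all.
from scipy.stats import binomtest

harmful, protective = 3, 3
result = binomtest(harmful, n=harmful + protective, p=0.5)
print(f"p-value for a directional effect: {result.pvalue:.2f}")  # 1.00
# Quoting only the 3 "harmful" studies hides this symmetry, which is why
# Person One's edit doesn't fix the underlying problem.
```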
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.
To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument. I can understand how it might be frustrating to have people tell you to up your paper-scrutinizing game while you are busy trying to respond to an entire thread full of people expressing disagreement.
To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.
I dearly hope we never become one of those parts of the internet.
And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.
I dearly hope we never become one of those parts of the internet.
Me too. However, I’m not entirely clear what incentive gradient you are referring to.
But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There’s a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.
As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it’s very much something to be aware of.
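As a toy illustration of that drift (with all parameters invented for the example), a few lines of simulation show how a mild attrition penalty for being in the minority compounds over time:

```python
# Toy model of the drift described above: each round, minority-viewpoint
# members leave with a small probability, and newcomers join in proportion
# to the current mix (people prefer groups that share their view).
# All parameters are invented for illustration.
import random

random.seed(1)
majority, minority = 60, 40        # initial viewpoint counts
LEAVE_PROB, NEWCOMERS = 0.10, 5    # per-round attrition and arrivals

for _ in range(30):
    minority = sum(random.random() > LEAVE_PROB for _ in range(minority))
    majority_frac = majority / (majority + minority)
    joins_majority = sum(random.random() < majority_frac for _ in range(NEWCOMERS))
    majority += joins_majority
    minority += NEWCOMERS - joins_majority
print(f"After 30 rounds: {majority} majority vs {minority} minority")
```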
Full disclosure: I’m not much of a paper scrutinizer. And the way I’ve been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan’s blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn’t.
I’m not even sure it would be useful for me to do that—the best scrutinizer is someone who feels motivated to disprove a paper’s conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn’t.
I don’t know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who’s an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]
As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is the investment necessary to develop a well-justified opinion is high—I don’t think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.
The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you’ve audited in the past, or someone who’s made sound predictions in the past). You can totally decide not to engage with an issue because it’s not worth the time.
But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.
the best scrutinizer is someone who feels motivated to disprove a paper’s conclusion
The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.
For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.
Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 as Right, and 31 as Centre-Right (roughly 19 left-of-centre respondents for every right-of-centre one). My contention is that this is not about the conclusions being liberal. It’s about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone’s beliefs. Because it is terrible, and does not track the truth. And we don’t need writings like that, regardless of whose conclusions they happen to support.
The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.
How does “this should be obvious” compare to average social science reporting on the epistemic hygiene scale?
Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper’s conclusion predicts flaw discovery better. I don’t think the result of such an experiment is obvious.
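Here’s a minimal sketch of what the analysis for that experiment might look like. Everything below is hypothetical: the data is simulated under made-up effect sizes, purely to illustrate the comparison, not to predict the real result.

```python
# Hypothetical sketch of the proposed experiment's analysis: reviewers read
# papers with planted flaws; we record whether each reviewer found the flaw,
# plus self-reported curiosity and disagreement with the paper's conclusion.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
curiosity = rng.normal(size=n)       # standardized self-report scores
disagreement = rng.normal(size=n)
log_odds = -0.5 + 0.4 * curiosity + 0.6 * disagreement  # invented ground truth
found_flaw = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(float)

X = sm.add_constant(np.column_stack([curiosity, disagreement]))
fit = sm.Logit(found_flaw, X).fit(disp=0)
print(fit.params)  # larger coefficient = better predictor of flaw discovery
```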
Flaws aren’t the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.
[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline (see e.g. this and this).
I actually tend to observe the opposite effect in most intellectual spaces. Any liberal-supporting result will get a free pass and be repeated over and over again, while any conservative-leaning claim will be torn to shreds. Of course, you’ll see the opposite if you hang around the 50% of people who voted for Trump, but not many of them are in the EA community.
Do you know of any spaces that don’t have the problem one way or the other?
I would say that EA/Less Wrong are better in that any controversial claim you make is likely to be torn to shreds.
I am disinclined to be sympathetic when someone’s problem is that they posted so many bad arguments all at once that they’re finding it hard to respond to all the objections.
Regarding the terrible incentive gradients mentioned by Claire above, I think discussion is more irenic if people resist, insofar as possible, imputing bad epistemic practices to particular people, and even try to avoid identifying an individual with the view or practice one takes to be mistaken, even though they do in fact advocate it.
As a concrete example (far from alone, and selected not because it is ‘particularly bad’, but rather because it comes from a particularly virtuous discussant) the passage up-thread seems to include object level claims on the epistemic merits of a certain practice, but also implies an adverse judgement about the epistemic virtue of the person it is replying to:
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study. [my emphasis]
The ‘you-locutions’ do the work of imputing, and so invite subsequent discussion about the epistemic virtue of the person being replied to (e.g. “Give them a break, this mistake is understandable given some other factors”/ “No, this is a black mark against them as a thinker, and the other factors are not adequate excuse”).
Although working out the epistemic virtue of others can be a topic with important practical applications (but see discussion by Askell and others above about ‘buzz talk’), the midst of a generally acrimonious discussion on a contentious topic is not the best venue. I think a better approach is a rewording that avoids the additional implications:
I think there’s a pattern of using social science data which is better avoided. Suppose one initially takes a set of studies to support P. Others suggest studies X, Y and Z (members of this set) do not support P after all. If one agrees with this, it seems better to clearly report a correction along the lines of “I took these 5 studies to support P, but I now understand 3 of these 5 do not support P”, rather than offering additions to the set of studies that support P.
The former allows us to forecast how persuasive additional studies are (i.e. if all of the studies initially taken to support P do not in fact support P on further investigation, we may expect similar investigation to reveal the same about the new studies offered). Rhetorically, it may be more persuasive to sceptics of P, as it may allay worries that sympathy to P is tilting the scales in favour of reporting studies that prima facie support P.
The rewording can take longer to write (and a better writer than I could surely improve on mine), but even if so I expect the other benefits to outweigh the cost.
An explanation of what you mean by “turn out OK” would be helpful. For instance, do movements that err more towards social justice fare worse than those that err away from it (or than those that sit at the status quo)?
I’m referring to mob mentality, trigger-happy ostracization, and schisms. I don’t think erring towards/away from social justice is quite the right question, because in these failure cases, the distribution of support for social justice becomes a lot more bimodal.
Actually, I would think that any overshooting you see in these communities is a reaction to how status-quo (or worse) both of those communities are.
Sounds plausible. That’s a big reason why I support thoughtful work on diversity: as a way to remove the motivation for less thoughtful work.
I can’t address all of this but will say three quick things:
I’m broadly skeptical of the social psychology research you cite
I appreciate its weakness, but it’s at least some evidence against people’s intuitions, and in combination with the literature on how those intuitions are demonstrably false and discriminatory, it should update people away from those discriminatory beliefs.
[Edit: I appreciate that I should generally behave as though my community will behave well, and as such I should not have requested that people upvote even if I just asked them to “upvote if [they] find the post useful.” I want to be sure to flag in this response though the incredibly poor way in which people who disagree with claims and arguments in favor of diversity and inclusion are using their votes, in comments and on the whole post. It’s worth explicitly observing that identity-driven voting here is not equal among opposers and supporters, but seems clearly dominated by opposers.]
I appreciate your suggestions a lot, but caution you to be careful of your own assumptions. For instance, I never suggested that a Diversity & Inclusion Officer should be the person most passionate about the role instead of most smart about it.
To emphasize though, so it doesn’t get lost behind those critical thoughts: I thoroughly appreciate the suggestions you’ve contributed here.
[Edit: Apologies for some excessive editing. I readily acknowledge that in an already hostile environment, my initial reaction to criticism regarding an important issue that is causing a lot of harm was too defensive.]
Another idea I had: add questions to the EA Survey to understand how people feel about the issues you are describing. This accomplishes a few things:
It allows us to track progress more effectively than observing our demographic breakdown. Measuring how people feel about EA movement culture gives us a shorter feedback loop, since changes in demographics lag behind culture changes. Furthermore, by attempting to measure the climate issue directly, we can zero in on factors under our control.
It helps fight selection effects that occur in online discussion of these issues. People on both sides can be reluctant to share their thoughts & ideas in a thread like this one. Online discussions in general can be wildly unrepresentative. I was surprised to learn about polls which found that most Native Americans aren’t offended by the use of “Redskins” as a team name (criticism of this poll), and that a majority of black people are against affirmative action. And among the “anti-SJW” crowd, there’s a perception that some folks are going to see racism/sexism in everything, and they will never be satisfied. So taking a representative poll of EAs, and perhaps comparing the results to some baseline, can help us come to agreement on the degree to which we have issues.
I like this idea. It will be skewed towards people who aren’t turned off by the culture, since those who are will have less interest in the survey, and in some or many cases may not even be exposed to it. But getting more systematic info on people’s feelings here would be very useful.
I mentioned my concern that pro-diversity efforts in EA might “spiral” towards a mob mentality. I think one way in which this might happen is if the people working towards diversity in EA recruit people from underrepresented groups that they know through other pro-diversity groups, which, as you mention, frequently suffer from a mob mentality. If the pool of underrepresented people we draw from is not selected this way (e.g. if the majority of black people who are joining EA are against affirmative action, as is true for the majority of the black population in general), then I’m less worried.
I think some of your suggestions are not entirely consistent. For example, you mention that EA should not “throw around the term “AI” with no qualification or explanation”. From my perspective, if I was hearing about EA for the first time and someone felt the need to explain what “AI” was an acronym for, I would feel condescended to. I imagine this effect might be especially acute if I was a member of a minority group (“How dumb do these people think I am?”) Similarly, you suggest that we cut our use of jargon. In practice, I think useful jargon is going to continue getting used no matter what. So the way this suggestion may be interpreted in practice is: Don’t use jargon around people who are members of underrepresented groups. I think people from underrepresented groups will soon figure out they are being condescended to. I think a better idea is to remember that we were once ignorant about jargon ourselves, and make an effort to explain jargon to newbies. Hopefully they feel like members of the ingroup after they’ve mastered the lingo.
Relatedly, there is a question which I think sometimes gets tied up with the diversity question, but perhaps should not get tied up, which is the question of whether EA should aim more to be a committed, elite core vs a broad church. My impression is lots of people privately favor the committed, elite core approach. I think we can have both diversity and a committed, elite core: consider institutions such as Harvard which are both elite and diverse. Furthermore, I think being more public about our elitism might actually help with diversity, because we’d be making our standards clearer and more transparent, and we could rely less heavily on subjective first impressions. (CC Askell on “buzz talk”.) To put it another way: although “diversity” and “inclusion” are often treated as synonyms, it’s actually possible to be both “diverse” and “exclusive” (and this seems likely ideal).
A benefit of diversity you didn’t mention: Insofar as the EA movement has world peace and global cooperation as part of our goals, it’s useful to have people from as many different groups as possible. This is also useful if we want to be able to speak authoritatively on topics like how AI should be used for the benefit of humanity and whatnot.
Unjustified hunch here, but I think maybe another failure mode that can come up when a movement tries to increase diversity is that people who are underrepresented start to receive more attention. Even if this attention is positive (e.g. “How can we cater to people like you better?”), I think this can result in an increased level of self-consciousness. (See my previous point about how people who look different may feel self-conscious by default even if they’re not discriminated against.) Further unjustified conjecture: the sort of black person who supports affirmative action tends to enjoy the power they get from this, whereas the sort of black person who doesn’t support affirmative action doesn’t like it, thereby enhancing the “spiral” effect.
Another possible failure mode: Diversity advocates see something they don’t like (e.g. a person suggesting that women do not contribute to society and are leeches if they don’t offer men sex), and they want to root the problem out. In order to rally support, they let everyone know about the problem (like you did in this post). But by letting everyone know about the problem, they’ve also made it in to a bigger problem: now every woman who reads this post knows that someone, at one point in an EA-related discussion somewhere, made this outrageous claim—which results in those women feeling less welcome and more on edge. The toxic echo of this person’s post continues to reverberate as it is held up as part of a broader trend within EA, even though their post itself was long ago deleted. (This could contribute to the “spiral” effect I described, if the women who stick around after hearing about posts like these are disproportionately those that enjoy engaging in flame wars with people who make outrageous statements.)
I mentioned the EA Survey. One thing you could do is look at existing EA survey data and try to understand whether our issues with underrepresentation seem to be getting better or worse over the years. My impression is that the gender thing, at least, has gotten much better since EA was founded. In any case, if things are already on a good path, I’m more skeptical about major diversity initiatives—“if it ain’t broke, don’t fix it”.
Incidentally, I realized some of the points I’m making here are redundant with this essay which was already posted. (But I highly recommend reading it anyway, because it has some great points I hadn’t thought of.)
But by letting everyone know about the problem, they’ve also made it in to a bigger problem: now every woman who reads this post knows that someone, at one point in an EA-related discussion somewhere, made this outrageous claim—which results in those women feeling less welcome and more on edge. The toxic echo of this person’s post continues to reverberate as it is held up as part of a broader trend within EA, even though their post itself was long ago deleted.
This can get very dangerous as it opens a door for trolls to negatively impact the community and potentially damage its reputation. Maybe these kinds of discussions need to be gated in some way, or be had offline or something.
Risk does come with greater publicity of such behavior, but that’s part of the point of making it more public (in addition to the information value for people who want to avoid or address it). This is the first I’ve ever publicly said something about these issues in EA, after three years of many private conversations that seem to have resulted in limited or no impact. Greater publicity means greater accountability and motivation for action, both for the people who behave poorly and the people who let them do so without consequence.
Since I’m already working on inclusionary practices myself, there’s not much else to do but private or public discussion.
The private discussions I have had explicitly around the issue have varied a lot in their content and purpose and can be characterized as any of the following or a combination thereof: Listening to people’s experiences; sharing my own; discussing solutions; actively (beyond just listening) supporting people who were treated poorly; sharing information and concern about the issue with people in a better or still good position to do something about it; trying to discuss why this or more specific issues of exclusion are a problem with people who prefer the status quo; or endeavoring to show people why something they did was a problem and what they should do differently.
Dealing with a bewilderingly amateur situation myself and working to privately help the people responsible understand the problem and improve took a month out of my life (a month with a really important counterfactual), and that’s strictly the time spent on the issue that I don’t think I would have had to lose in e.g. the animal advocacy community, and not accounting for the emotional toll. I have good reason for (cautious) optimism that that was fruitful, but also a red flag restraining that optimism, and regardless only time will tell.
Basically I’ve spent a huge amount of time on those private and often solution-oriented conversations and have been hanging over the precipice of burnout with the community since day 1 several years ago. (The broader community at least, not the animal advocacy sub/intersected-community. And disclaimer that there are great individuals throughout the broader community who are my friends and/or whose presence in the community I am so happy for, etc.) And I’m definitely not alone in that.
I can do more to have private conversations with people in better positions than myself to make change here (such as people who are looked up to in the community by the people whose behavior could be more inclusionary, or donors to EA orgs), and I might if this post and the discussion here don’t inspire other people to take more action on this issue, which is my hope.
[Edit: I appreciate that I should generally behave as though my community will behave well, and as such I should not have requested that people upvote if they find the post helpful. I want to be sure to flag in this response though the incredibly poor way in which people who disagree with claims and arguments in favor of diversity and inclusion are using their votes, in comments and on the whole post.]
Thanks.
I’m also finding the voting in this thread frustrating.
I appreciate your suggestions a lot, but caution you to be careful of your own assumptions. For instance, I never suggested that a Diversity & Inclusion Officer should be the person most passionate about the role instead of most smart about it.
Sorry about that.
To emphasize though, so it doesn’t get lost behind those critical thoughts: I thoroughly appreciate the suggestions you’ve contributed here.
Glad to hear it :)
[Edit: Apologies for some excessive editing. I readily acknowledge that in an already hostile environment, my initial reaction to criticism regarding an important issue that is causing a lot of harm was too defensive.]
I’m an excessive editor too, I’m not sure it’s something you need to apologize for :)
If I recall correctly, this comment was at −2 when I first saw it, which frustrated me because I think people who publicly admit mistakes should get upvotes. Publicly admitting mistakes is really hard to do. I think we should take a moment to give people credit for this before demanding that they confess their sins even more thoroughly.
it’s at least some evidence against people’s intuitions
I don’t think it is, at all, any more than Daryl Bem’s research updates me towards thinking ESP is real. Like, who knows, the world is a crazy place, maybe the papers here are in the 36% of published psychology papers which hold up under replication. But I don’t think that it makes sense to update against your beliefs about this stuff based on the published science—if you think that the scientists would have published these papers regardless of their truth, as I do, you shouldn’t regard them as evidence.
I don’t think it is, at all, any more than Daryl Bem’s research updates me towards thinking ESP is real.
This strikes me as a misunderstanding of how Bayesian updates work. The reason you still don’t believe in ESP is because your prior for ESP is very low. But I think hearing about Bem’s research should still cause you to update your estimate in favor of ESP a tiny amount. In a world with ESP, Bem finds it easier to discover ESP effects.
if you think that the scientists would have published these papers regardless of their truth
I don’t think social psychologists are that dishonest. Even 36% replicability suggests some relationship between paper-publishing and truth.
Furthermore, I think the fact that social psychologists are so liberal should cause some update in the direction that studying humans causes you to realize liberal views about human nature are correct.
This strikes me as a misunderstanding of how Bayesian updates work. The reason you still don’t believe in ESP is because your prior for ESP is very low. But I think hearing about Bem’s research should still cause you to update your estimate in favor of ESP a tiny amount. In a world with ESP, Bem finds it easier to discover ESP effects.
I think you slightly misunderstand me. What I’m saying is that Bem’s work isn’t really a Bayesian update for me, because I think Bem is approximately as likely to publish papers in a world where (extremely weak) ESP works as in one where it doesn’t. The strength of my prior doesn’t feel relevant to me.
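(A toy Bayes computation may make the crux concrete; all numbers here are invented. In odds form, the update depends only on the likelihood ratio of publication under the two hypotheses, so if publication is about equally likely either way, the posterior barely moves no matter what the prior is.)

```python
# Toy Bayes update (all numbers invented): how much should a published
# ESP paper move our belief? Only the likelihood ratio matters.
def posterior(prior, p_publish_if_esp, p_publish_if_not):
    """P(ESP | paper published), via Bayes' rule in odds form."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_publish_if_esp / p_publish_if_not
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 1e-6  # very low prior on ESP
# If Bem publishes with ~equal probability in either world, LR ~ 1:
print(posterior(prior, 0.90, 0.89))  # ~1.01e-06, essentially no update
# If publication were meaningfully likelier given real ESP:
print(posterior(prior, 0.90, 0.60))  # ~1.50e-06, a small but real update
```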
I think you’re right that I slightly overstated my case.
Thanks for this post. There’s a lot I agree with here. I’m in especially vigorous agreement with your points regarding hero worship and seeing newcomers as a source of fresh ideas/arguments instead of condescending them.
There are also some points I disagree with. And in the spirit of not considering any arguments above criticism, and disagreement being critical for finding the best answers, I hope you won’t mind if I lay my disagreements out. To save time, I’ll focus on the differences between your view and mine. So if I don’t mention a point you made, you can default to assuming I agree with it.
First, I’m broadly skeptical of the social psychology research you cite. Whenever I read about a study that claims women are more analytical than men, or women are better leaders than men, I imagine whether I would hear about it if the experiment found the opposite result.
I recommend this blog post on the lack of ideological diversity in social psychology. Social psychologists are overwhelmingly liberal, and many openly admit to discriminating against conservatives in hiring. Here is a good post by a Mexican social psychologist that discusses how this plays out. There’s also the issue of publication bias at the journal level. I know someone who served on the selection committee of a (minor & unimportant, so perhaps not representative) psychology journal. The committee had an explicit philosophy of only publishing papers they liked, and espousing “problematic” views was a strike against a paper. Anyway, I think to some degree the field functions as a liberal echo chamber on controversial issues.
There’s really an entire can of worms here—social psychology is currently experiencing a major reproducibility crisis—but I don’t want to get too deep in to it, because to defend my position fully, I’d want to share evidence for positions that make people uncomfortable. Suffice to say that there’s a third layer of publication bias at the level of your Facebook feed, and I could show you a different set of research-backed thinkpieces that point to different conclusions. (Suggestion: if you wouldn’t want someone on the EA Forum to make arguments for the position not X, maybe avoid making arguments for the position X. Otherwise you put commenters in an impossible bind.)
But for me this point is really the elephant in the room:
I would like to see a much deeper examination here. Insofar as I feel resistant to diversity efforts, this feels like most of what I’m trying to resist. If I was confident that pro-diversity people in EA won’t spiral towards this, I’d be much more supportive. Relevant fable.
All else equal, increased diversity sounds great, but my issue is I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity. Take a statement like this one:
Being warm and calculating sounds great, but what if there’s actually a tradeoff here? Just taking myself as an example, I know that as I’ve become aware of how much suffering exists in the grand scheme of things, I’ve begun to worry less about random homeless people I see and stuff like that. Even if there’s some hack I can use to empathize with homeless people while retaining a global perspective, that hack would require effort on my part—effort I could put towards goals that seem more important.
Again, I think there’s a real tradeoff between “free speech” and sensitivity. I view the moderation of online communities as an unsolved problem. I think we benefit from navigating moderation tradeoffs thoughtfully rather than reactively.
Reminding people off the forum to upvote this post, in order to deal with possible hostility, is also a minor red flag from my perspective. This resembles something Gleb Tsipursky once did.
None of this seems very bad in the grand scheme of things, especially not compared to what I’ve seen from other champions of diversity—I just thought it’d be useful to give concrete examples.
Anyway, here are some ideas of mine, if anyone cares:
Phrase guidelines as neutrally as possible, e.g. “don’t be a jerk” instead of “don’t be a sexist”. The nice thing about “don’t be a jerk” is it at admits the possibility that someone could violate the guideline by e.g. loudly calling out a minor instance of sexism in a way that generates a lot of drama and does more harm than good. Rules should exist to serve everyone, and they should be made difficult to weaponize. If most agree your rules are legitimate, that also makes them easier to enforce.
Team-building activities, icebreakers, group singalongs, synchronous movement, sports/group exercise, and so on. The ideal activity is easy for anyone to do and creates a shared EA tribal identity just strong enough to supersede the race/gender/etc. identities we have by default. Kinda like how students at the same university will all cheer for the same sports team.
Following the example of the animal-focused EAs: Work towards achieving critical mass of underrepresented groups. Especially if you can saturate particular venues (e.g. a specific EA meetup group). I know that as a white male, I sometimes get uncomfortable in situations where I am the only white person or the only man in a group, even though I know perfectly well that no one is discriminating against me. I think it’s a natural response to have when you’re in the minority, so in a certain sense there’s just a chicken-and-egg problem. Furthermore, injecting high-caliber underrepresented people into EA will help dismantle stereotypes and increase the number of one-on-one conversations people have, which I think are critical for change.
Take a proactive, rather than reactive, approach to helping EA men with women. Again, I think having more women is playing a big role for animal-focused EAs. More women means the average man has more female friends, better understands how women think, and empathizes with the situations women encounter more readily. In this podcast, Christine Peterson discusses the value of finding a life partner for productivity and mental health. In the same way that CFAR makes EAs more productive through lifehacking, I could imagine someone working covertly to make EAs more productive through solving their dating problems.
Invite the best thinkers who have heterodox views on diversity to attend “diversity in EA” events, in order to get a diverse perspective on diversity and stay aware of tradeoffs. Understand their views in enough depth to market diversity initiatives to the movement at large without getting written off.
When hiring a Diversity & Inclusion Officer, find someone who’s good at managing tradeoffs rather than the person who’s most passionate about the role.
Again, I appreciate the effort you put in to this post, and I support you working towards these goals in a thoughtful way. Also, I welcome PMs from you or anyone else reading this comment—I spent several hours on it, but I’m sure there is stuff I could have put better and I’d love to get feedback.
It’s not unheard of, but it seems more common than it is because only the movements and initiatives which go too far merit headlines and attention. The average government agency, F500 company, or similar organization piles on all kinds of diversity policies without turning into the Nightmare on Social Justice Street.
The pattern I see is that “organizations” (such as government agencies or Fortune 500 companies) usually turn out OK, whereas “movements” or “communities” (e.g. the atheism movement, or the open source community) often turn out poorly.
Hm, that’s a good point. I can’t come up with a solid counterexample off the top of my head.
An explanation of what you mean by “turn out OK” would be helpful. For instance, do movements that err more towards social justice fare worse than those that err away from it (or than those that sit at the status quo)?
Whether that’s the case for the atheism movement or the open source community is a heavy question that merits more explanation.
Actually, I would think that any overshooting you see in these communities is a reaction to how status-quo (or worse) both of those communities are. Note for instance that when women are not collaborators on a project (but not when they are), their open-source contributions are more likely to be accepted than men’s when their gender is not known but despite that they’re less likely to be accepted than men’s when their gender is known.
The Atheism Plus split was pretty bad. They were a group that wanted all atheists to also be involved in social justice. Naturally many weren’t happy with this takeover of the movement and pushed back. The Atheism Plus side argues that this was due to misogyny, ect, ignoring the fact that some people just wanted to be atheists and do atheist stuff and not get involved in politics. The end result was Atheism Plus was widely rejected, many social justice leaning atheists left the movement, Atheism widely defamed, remaining atheists not particularly open to social justice.
I don’t know very much about open source, but I’ve heard that there’s been some pretty vicious/brutal political fights over codes of conduct, ect.
Came to say this as well.
See, for example:
https://www.reddit.com/r/atheism/comments/2ygiwh/so_why_did_atheism_plus_fail/
The atheists even started to disinvite their intellectual founders, e.g. Richard Dawkins. Will EA eventually go down the same path—will they end up disinviting e.g. Bostrom for not being a sufficiently zealous social justice advocate?
All I’m saying is that there is a precedent here. If SJW-flavored EA ends up going down this path, please don’t say you were not warned.
People nominally within EA have already called for us to disavow or not affiliate with Peter Singer so this seems less hypothetical than one might think.
‘Yvain’ gives a good description of a process along along these lines within his comment here (which also contains lots of points which pre-emptively undermine claims within this post).
I entirely appreciate the concern of going too far. Let’s just be careful not to assume that risks only come with action—the opposite path is an awful one too, and with inaction we risk moving further down it.
Kelly, I don’t think the study you cite is good or compelling evidence of the conclusion you’re stating. See Scott’s comments on it for the reasons why.
(edited because the original link didn’t work)
Thanks, clarified.
Even after clarification, your sentence is misleading. The true thing you could say is “Among outsiders to projects, women are more likely to have their contributions accepted than men. Both men and women are less likely to have their contributions accepted when their genders are revealed; the effect was measured to be a percentage point different between the genders and may or may not be statistically significant. There are also major differences between the contribution patterns of men and women.”
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.
This is a similar issue that’s going on in another thread where people feel you’re cherrypicking results rather than sampling randomly in a way that will paint an accurate picture. Perhaps this dialogue can help to explain the concerns that others have expressed:
Person One: Here are 5 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.
Person Two: Actually if you do a comprehensive survey of the literature, you’ll fine 3 studies showing that coffee causes cancer, 17 showing no effect, and 3 showing the coffee prevents cancer. On balance there’s no stronger evidence that coffee causes cancer than that it prevents it, and in fact it probably has no effect.
Person One: Thanks for the correction! [Edits post to say: “Here are 3 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.”]
Person Two: I mean… that’s technically true, but I don’t feel the problem is solved.
To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument. I can understand how it might be frustrating for people to tell you you need to up your paper scrutinizing game while you are busy trying to respond to an entire thread full of people expressing disagreement.
I dearly hope we never become one of those parts of the internet.
And think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.
Me too. However, I’m not entirely clear what incentive gradient you are referring to.
But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There’s a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.
As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it’s very much something to be aware of.
Full disclosure: I’m not much of a paper scrutinizer. And the way I’ve been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan’s blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn’t.
I’m not even sure it would be useful for me to do that—the best scrutinizer is someone who feels motivated to disprove a paper’s conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn’t.
I don’t know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who’s an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]
As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is the investment necessary to develop a well-justified opinion is high—I don’t think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.
The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to(like an article being written by someone you’ve audited in the past, or someone who’s made sound predictions in the past). You can totally decide not to engage with an issue because it’s not worth the time.
But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.
The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.
Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 identifying as Right, 31 as Centre Right. My contention is that this is not about the conclusions being liberal. It’s about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone’s beliefs. Because it is terrible, and does not track the truth. And we don’t need writings like that, regardless of whose conclusions they happen to support.
How does “this should be obvious” compare to average social science reporting on the epistemic hygiene scale?
Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper’s conclusion predicts flaw discovery better. I don’t think the result of such an experiment is obvious.
Flaws aren’t the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things
[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.
(see e.g. this and this).
I actually tend to observe the other effect in most intellectual spaces. Any liberal supporting result will get a free pass and be repeated over and over again, while any conservative leaning claim will be torn to shreds. Of course, you’ll see the opposite if you hang around the 50% of people who voted Trump, but not many of them are in the EA community.
Do you know of any spaces that don’t have the problem one way or the other?
I would say that EA/Less Wrong are better in that any controversial claim you make is likely to be torn to shreds.
I am disinclined to be sympathetic when someone’s problem is that they posted so many bad arguments all at once that they’re finding it hard to respond to all the objections.
Regarding the terrible incentive gradients mentioned by Claire above, I think discussion is more irenic if people resist, insofar as possible, to impute bad epistemic practices to certain people, and even to try and avoid identifying the individual with the view or practice you take to be mistaken, even though they in fact advocate it.
As a concrete example (far from alone, and selected not because it is ‘particularly bad’, but rather because it comes from a particularly virtuous discussant) the passage up-thread seems to include object-level claims on the epistemic merits of a certain practice, but also implies an adverse judgement about the epistemic virtue of the person it is replying to:
The ‘you-locutions’ do the work of imputing, and so invite subsequent discussion about the epistemic virtue of the person being replied to (e.g. “Give them a break, this mistake is understandable given some other factors”/ “No, this is a black mark against them as a thinker, and the other factors are not adequate excuse”).
Although working out the epistemic virtue of others can be a topic with important practical applications (but see discussion by Askell and others above about ‘buzz talk’), the midst of a generally acrimonious discussion on a contentious topic is not the best venue. I think a better approach is a rewording that avoids the additional implications:
The rewording can take longer to produce (though here I am rewording not my own words but those of a better writer), but even if so I expect the other benefits to outweigh the cost.
I’m referring to mob mentality, trigger-happy ostracization, and schisms. I don’t think erring towards/away from social justice is quite the right question, because in these failure cases, the distribution of support for social justice becomes a lot more bimodal.
Sounds plausible. That’s a big reason why I support thoughtful work on diversity: as a way to remove the motivation for less thoughtful work.
I can’t address all of this but will say three quick things:
I appreciate its weakness, but it’s at least some evidence against people’s intuitions, and, in addition to the literature on how those intuitions are demonstrably false and discriminatory, it should update people away from those discriminatory beliefs.
[Edit: I appreciate that I should generally behave as though my community will behave well, and as such I should not have requested that people upvote, even if I just asked them to “upvote if [they] find the post useful.” I do want to flag in this response, though, the incredibly poor way in which people who disagree with claims and arguments in favor of diversity and inclusion are using their votes, both on comments and on the whole post. It’s worth explicitly observing that identity-driven voting here is not equal among opposers and supporters, but seems clearly dominated by opposers.]
I appreciate your suggestions a lot, but caution you to be careful of your own assumptions. For instance, I never suggested that a Diversity & Inclusion Officer should be the person most passionate about the role instead of most smart about it.
To emphasize though, so it doesn’t get lost behind those critical thoughts: I thoroughly appreciate the suggestions you’ve contributed here.
[Edit: Apologies for some excessive editing. I readily acknowledge that in an already hostile environment, my initial reaction to criticism regarding an important issue that is causing a lot of harm was too defensive.]
Another idea I had: add questions to the EA Survey to understand how people feel about the issues you are describing. This accomplishes a few things:
It allows us to track progress more effectively than observing our demographic breakdown. Measuring how people feel about EA movement culture gives us a shorter feedback loop, since changes in demographics lag behind culture changes. Furthermore, by attempting to measure the climate issue directly, we can zero in on factors under our control.
It helps fight selection effects that occur in online discussion of these issues. People on both sides can be reluctant to share their thoughts & ideas in a thread like this one. Online discussions in general can be wildly unrepresentative. I was surprised to learn about polls which found that most Native Americans aren’t offended by the use of “Redskins” as a team name (criticism of this poll), and that a majority of black people are against affirmative action. And among the “anti-SJW” crowd, there’s a perception that some folks are going to see racism/sexism in everything, and they will never be satisfied. So taking a representative poll of EAs, and perhaps comparing the results to some baseline, can help us come to agreement on the degree to which we have issues.
I like this idea. It will be skewed towards people who aren’t turned off by the culture, as those who are will have less interest in the survey, and in some or many cases may not even be exposed to it, but getting more systematic info on people’s feelings here would be very useful.
Some more thoughts:
I mentioned my concern that pro-diversity efforts in EA might “spiral” towards a mob mentality. I think one way in which this might happen is if the people working towards diversity in EA recruit people from underrepresented groups that they know through other pro-diversity groups, which, as you mention, frequently suffer from a mob mentality. If the pool of underrepresented people we draw from is not selected this way (e.g. if the majority of black people who are joining EA are against affirmative action, as is true for the majority of the black population in general), then I’m less worried.
I think some of your suggestions are not entirely consistent. For example, you mention that EA should not “throw around the term “AI” with no qualification or explanation”. From my perspective, if I was hearing about EA for the first time and someone felt the need to explain what “AI” was an acronym for, I would feel condescended to. I imagine this effect might be especially acute if I was a member of a minority group (“How dumb do these people think I am?”) Similarly, you suggest that we cut our use of jargon. In practice, I think useful jargon is going to continue getting used no matter what. So the way this suggestion may be interpreted in practice is: Don’t use jargon around people who are members of underrepresented groups. I think people from underrepresented groups will soon figure out they are being condescended to. I think a better idea is to remember that we were once ignorant about jargon ourselves, and make an effort to explain jargon to newbies. Hopefully they feel like members of the ingroup after they’ve mastered the lingo.
Relatedly, there is a question which I think sometimes gets tied up with the diversity question, but perhaps should not get tied up, which is the question of whether EA should aim more to be a committed, elite core vs a broad church. My impression is lots of people privately favor the committed, elite core approach. I think we can have both diversity and a committed, elite core: consider institutions such as Harvard which are both elite and diverse. Furthermore, I think being more public about our elitism might actually help with diversity, because we’d be making our standards clearer and more transparent, and we could rely less heavily on subjective first impressions. (CC Askell on “buzz talk”.) To put it another way: although “diversity” and “inclusion” are often treated as synonyms, it’s actually possible to be both “diverse” and “exclusive” (and this seems likely ideal).
A benefit of diversity you didn’t mention: Insofar as the EA movement has world peace and global cooperation as part of our goals, it’s useful to have people from as many different groups as possible. This is also useful if we want to be able to speak authoritatively on topics like how AI should be used for the benefit of humanity and whatnot.
Unjustified hunch here, but I think maybe another failure mode that can come up when a movement tries to increase diversity is that people who are underrepresented start to receive more attention. Even if this attention is positive (e.g. “How can we cater to people like you better?”), I think this can result in an increased level of self-consciousness. (See my previous point about how people who look different may feel self-conscious by default even if they’re not discriminated against.) Further unjustified conjecture: the sort of black person who supports affirmative action tends to enjoy the power they get from this, whereas the sort of black person who doesn’t support affirmative action doesn’t like it, thereby enhancing the “spiral” effect.
Another possible failure mode: Diversity advocates see something they don’t like (e.g. a person suggesting that women do not contribute to society and are leeches if they don’t offer men sex), and they want to root the problem out. In order to rally support, they let everyone know about the problem (like you did in this post). But by letting everyone know about the problem, they’ve also made it into a bigger problem: now every woman who reads this post knows that someone, at one point in an EA-related discussion somewhere, made this outrageous claim—which results in those women feeling less welcome and more on edge. The toxic echo of this person’s post continues to reverberate as it is held up as part of a broader trend within EA, even though their post itself was long ago deleted. (This could contribute to the “spiral” effect I described, if the women who stick around after hearing about posts like these are disproportionately those that enjoy engaging in flame wars with people who make outrageous statements.)
I mentioned the EA Survey. One thing you could do is look at existing EA survey data and try to understand whether our issues with underrepresentation seem to be getting better or worse over the years. My impression is that the gender thing, at least, has gotten much better since EA was founded. In any case, if things are already on a good path, I’m more skeptical about major diversity initiatives—“if it ain’t broke, don’t fix it”.
Incidentally, I realized some of the points I’m making here are redundant with this essay which was already posted. (But I highly recommend reading it anyway, because it has some great points I hadn’t thought of.)
This can get very dangerous as it opens a door for trolls to negatively impact the community and potentially damage its reputation. Maybe these kinds of discussions need to be gated in some way, or be had offline or something.
Risk does come with greater publicity of such behavior, but that’s part of the point of making it more public (in addition to the information value for people who want to avoid or address it). This is the first I’ve ever publicly said something about these issues in EA, after three years of many private conversations that seem to have resulted in limited or no impact. Greater publicity means greater accountability and motivation for action, both for the people who behave poorly and the people who let them do so without consequence.
Out of curiosity, have you tried anything besides private conversations?
Since I’m already working on inclusionary practices myself, there’s not much else to do but private or public discussion.
The private discussions I have had explicitly around the issue have varied a lot in their content and purpose and can be characterized as any of the following or a combination thereof: Listening to people’s experiences; sharing my own; discussing solutions; actively (beyond just listening) supporting people who were treated poorly; sharing information and concern about the issue with people in a better or still good position to do something about it; trying to discuss why this or more specific issues of exclusion are a problem with people who prefer the status quo; or endeavoring to show people why something they did was a problem and what they should do differently.
Dealing with a bewilderingly amateur situation myself, and working privately to help the people responsible understand the problem and improve, took a month out of my life, with a really important counterfactual cost. And that’s strictly the time spent on the issue that I don’t think I would have had to lose in e.g. the animal advocacy community, not accounting for the emotional toll. I have good reason for (cautious) optimism that it was fruitful, but also a red flag restraining that optimism; regardless, only time will tell.
Basically I’ve spent a huge amount of time on those private and often solution-oriented conversations and have been hanging over the precipice of burnout with the community since day 1 several years ago. (The broader community at least, not the animal advocacy sub/intersected-community. And disclaimer that there are great individuals throughout the broader community who are my friends and/or whose presence in the community I am so happy for, etc.) And I’m definitely not alone in that.
I can do more to have private conversations with people in better positions than myself to make change here (such as people who are looked up to in the community by the people whose behavior could be more inclusionary, or donors to EA orgs), and I might, if this post and the discussion here don’t inspire other people to take more action on this issue, which is my hope.
Thanks.
I’m also finding the voting in this thread frustrating.
Sorry about that.
Glad to hear it :)
I’m an excessive editor too; I’m not sure it’s something you need to apologize for :)
xccf, I’d be interested to hear examples of comments which you think were excessively downvoted.
If I recall correctly, this comment was at −2 when I first saw it, which frustrated me because I think people who publicly admit mistakes should get upvotes. Publicly admitting mistakes is really hard to do. I think we should take a moment to give people credit for this before demanding that they confess their sins even more thoroughly.
I don’t think it is, at all, any more than Daryl Bem’s research updates me towards thinking ESP is real. Like, who knows, the world is a crazy place; maybe the papers here are in the 36% of published psychology papers that hold up under replication. But I don’t think it makes sense to update your beliefs about this stuff based on the published science—if you think the scientists would have published these papers regardless of their truth, as I do, you shouldn’t regard them as evidence.
I think you’re overstating your case.
This strikes me as a misunderstanding of how Bayesian updates work. The reason you still don’t believe in ESP is because your prior for ESP is very low. But I think hearing about Bem’s research should still cause you to update your estimate in favor of ESP a tiny amount. In a world with ESP, Bem finds it easier to discover ESP effects.
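To spell the update out in odds form (a standard formulation; the numbers below are purely illustrative):

\[
\underbrace{\frac{P(\mathrm{ESP}\mid \mathrm{paper})}{P(\neg\mathrm{ESP}\mid \mathrm{paper})}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(\mathrm{paper}\mid \mathrm{ESP})}{P(\mathrm{paper}\mid \neg\mathrm{ESP})}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{P(\mathrm{ESP})}{P(\neg\mathrm{ESP})}}_{\text{prior odds}}
\]

If Bem is even slightly more likely to produce a publishable paper in a world where ESP is real (say a likelihood ratio of 1.05, to pick an arbitrary number), the posterior odds move up by that factor, however tiny the prior. Only if he would publish regardless (a ratio of essentially 1) does the evidence wash out entirely. So the disagreement here is really over that ratio, not over the strength of anyone’s prior.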
I don’t think social psychologists are that dishonest. Even 36% replicability suggests some relationship between paper-publishing and truth.
Furthermore, I think the fact that social psychologists are so liberal should cause some update in the direction that studying humans causes you to realize liberal views about human nature are correct.
I think you slightly misunderstand me. What I’m saying is that Bem’s work isn’t really a Bayesian update for me, because I think Bem is approximately as likely to publish papers in the world where (extremely weak) ESP works as in the worlds where it doesn’t. The strength of my prior doesn’t feel relevant to me.
I think you’re right that I slightly overstated my case.
Christine Peterson’s life partner discussion is around 1:17:20 at the above link^^
It’s part of a broader discussion about supporting yourself while being altruistic over the long haul (starts around 1:15:00).