All else equal, increased diversity sounds great, but my issue is that I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity.
It’s not unheard of, but it seems more common than it is because only the movements and initiatives which go too far merit headlines and attention. The average government agency, F500 company, or similar organization piles on all kinds of diversity policies without turning into the Nightmare on Social Justice Street.
The pattern I see is that “organizations” (such as government agencies or Fortune 500 companies) usually turn out OK, whereas “movements” or “communities” (e.g. the atheism movement, or the open source community) often turn out poorly.
An explanation of what you mean by “turn out OK” would be helpful. For instance, do movements that err more towards social justice fare worse than those that err away from it (or than those that sit at the status quo)?
Whether that’s the case for the atheism movement or the open source community is a heavy question that merits more explanation.
Actually, I would think that any overshooting you see in these communities is a reaction to how status-quo (or worse) both of those communities are. Note, for instance, that among non-collaborators on a project (though not among collaborators), women’s open-source contributions are more likely than men’s to be accepted when their gender is not known, but less likely than men’s to be accepted when their gender is known.
The Atheism Plus split was pretty bad. They were a group that wanted all atheists to also be involved in social justice. Naturally many weren’t happy with this takeover of the movement and pushed back. The Atheism Plus side argued that this was due to misogyny, etc., ignoring the fact that some people just wanted to be atheists and do atheist stuff and not get involved in politics. The end result was that Atheism Plus was widely rejected, many social-justice-leaning atheists left the movement, atheism was widely defamed, and the remaining atheists were not particularly open to social justice.
I don’t know very much about open source, but I’ve heard that there have been some pretty vicious/brutal political fights over codes of conduct, etc.
The atheists even started to disinvite their intellectual founders, e.g. Richard Dawkins. Will EA eventually go down the same path—will they end up disinviting e.g. Bostrom for not being a sufficiently zealous social justice advocate?
All I’m saying is that there is a precedent here. If SJW-flavored EA ends up going down this path, please don’t say you were not warned.
People nominally within EA have already called for us to disavow or not affiliate with Peter Singer so this seems less hypothetical than one might think.
‘Yvain’ gives a good description of a process along these lines within his comment here (which also contains lots of points which pre-emptively undermine claims within this post).
I entirely appreciate the concern of going too far. Let’s just be careful not to assume that risks only come with action—the opposite path is an awful one too, and with inaction we risk moving further down it.
Kelly, I don’t think the study you cite is good or compelling evidence of the conclusion you’re stating. See Scott’s comments on it for the reasons why.
Even after clarification, your sentence is misleading. The true thing you could say is “Among outsiders to projects, women are more likely to have their contributions accepted than men. Both men and women are less likely to have their contributions accepted when their genders are revealed; the size of this effect was measured to differ between the genders by about one percentage point, which may or may not be statistically significant. There are also major differences between the contribution patterns of men and women.”
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.
This is similar to an issue going on in another thread, where people feel you’re cherry-picking results rather than sampling randomly in a way that would paint an accurate picture. Perhaps this dialogue can help explain the concerns that others have expressed:
Person One: Here are 5 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.
Person Two: Actually, if you do a comprehensive survey of the literature, you’ll find 3 studies showing that coffee causes cancer, 17 showing no effect, and 3 showing that coffee prevents cancer. On balance there’s no stronger evidence that coffee causes cancer than that it prevents it, and in fact it probably has no effect.
Person One: Thanks for the correction! [Edits post to say: “Here are 3 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.”]
Person Two: I mean… that’s technically true, but I don’t feel the problem is solved.
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.
To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument. I can understand how it might be frustrating to be told you need to up your paper-scrutinizing game while you are busy trying to respond to an entire thread full of people expressing disagreement.
To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.
I dearly hope we never become one of those parts of the internet.
And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a terribly unproductive place, and by pushing back against steps down that doomy path.
I dearly hope we never become one of those parts of the internet.
Me too. However, I’m not entirely clear what incentive gradient you are referring to.
But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There’s a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.
As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it’s very much something to be aware of.
Full disclosure: I’m not much of a paper scrutinizer. And the way I’ve been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan’s blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn’t.
I’m not even sure it would be useful for me to do that—the best scrutinizer is someone who feels motivated to disprove a paper’s conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn’t.
I don’t know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who’s an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]
As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is that the investment necessary to develop a well-justified opinion is high—I don’t think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.
The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you’ve audited in the past, or someone who’s made sound predictions in the past). You can totally decide not to engage with an issue because it’s not worth the time.
But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.
the best scrutinizer is someone who feels motivated to disprove a paper’s conclusion
The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.
For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.
Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 as Right, and 31 as Centre-Right. My contention is that this is not about the conclusions being liberal. It’s about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone’s beliefs. Because it is terrible, and does not track the truth. And we don’t need writings like that, regardless of whose conclusions they happen to support.
The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.
How does “this should be obvious” compare to average social science reporting on the epistemic hygiene scale?
Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper’s conclusion predicts flaw discovery better. I don’t think the result of such an experiment is obvious.
I actually tend to observe the other effect in most intellectual spaces. Any liberal-supporting result will get a free pass and be repeated over and over again, while any conservative-leaning claim will be torn to shreds. Of course, you’ll see the opposite if you hang around the 50% of people who voted for Trump, but not many of them are in the EA community.
I am disinclined to be sympathetic when someone’s problem is that they posted so many bad arguments all at once that they’re finding it hard to respond to all the objections.
Regarding the terrible incentive gradients mentioned by Claire above, I think discussion is more irenic if people resist, insofar as possible, imputing bad epistemic practices to particular people, and even try to avoid identifying the individual with the view or practice they take to be mistaken, even though that individual does in fact advocate it.
As a concrete example (far from alone, and selected not because it is ‘particularly bad’, but rather because it comes from a particularly virtuous discussant), the passage up-thread seems to include object-level claims on the epistemic merits of a certain practice, but also implies an adverse judgement about the epistemic virtue of the person it is replying to:
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study. [my emphasis]
The ‘you-locutions’ do the work of imputing, and so invite subsequent discussion about the epistemic virtue of the person being replied to (e.g. “Give them a break, this mistake is understandable given some other factors”/ “No, this is a black mark against them as a thinker, and the other factors are not adequate excuse”).
Although working out the epistemic virtue of others can be a topic with important practical applications (but see discussion by Askell and others above about ‘buzz talk’), the midst of a generally acrimonious discussion on a contentious topic is not the best venue. I think a better approach is a rewording that avoids the additional implications:
I think there’s a pattern of using social science data which is better avoided. Suppose one initially takes a set of studies to support P. Others suggest studies X, Y and Z (members of this set) do not support P after all. If one agrees with this, it seems better to clearly report a correction along the lines of “I took these 5 studies to support P, but I now understand 3 of these 5 do not support P”, rather than offering additions to the set of studies that support P.
The former allows us to forecast how persuasive additional studies are (i.e. if all of the studies initially taken to support P do not in fact support P on further investigation, we may expect similar investigation to reveal the same about the new studies offered). Rhetorically, it may be more persuasive to sceptics of P, as it may allay worries that sympathy to P is tilting the scales in favour of reporting studies that prima facie support P.
The rewording can take longer (though a better writer than I could probably put it more economically), but even if so I expect the other benefits to outweigh it.
An explanation of what you mean by “turn out OK” would be helpful. For instance, do movements that err more towards social justice fare worse than those that err away from it (or than those that sit at the status quo)?
I’m referring to mob mentality, trigger-happy ostracization, and schisms. I don’t think erring towards/away from social justice is quite the right question, because in these failure cases, the distribution of support for social justice becomes a lot more bimodal.
Actually, I would think that any overshooting you see in these communities is a reaction to how status-quo (or worse) both of those communities are.
Sounds plausible. That’s a big reason why I support thoughtful work on diversity: as a way to remove the motivation for less thoughtful work.
The pattern I see is that “organizations” (such as government agencies or Fortune 500 companies) usually turn out OK, whereas “movements” or “communities” (e.g. the atheism movement, or the open source community) often turn out poorly.
Hm, that’s a good point. I can’t come up with a solid counterexample off the top of my head.
Came to say this as well about the Atheism Plus split.
See, for example:
https://www.reddit.com/r/atheism/comments/2ygiwh/so_why_did_atheism_plus_fail/
Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper’s conclusion predicts flaw discovery better. I don’t think the result of such an experiment is obvious.
Flaws aren’t the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.
[random] I find the 2017 Survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.
I actually tend to observe the other effect in most intellectual spaces. Any liberal-supporting result will get a free pass and be repeated over and over again, while any conservative-leaning claim will be torn to shreds. Of course, you’ll see the opposite if you hang around the 50% of people who voted for Trump, but not many of them are in the EA community.
Do you know of any spaces that don’t have the problem one way or the other?
I would say that EA/Less Wrong are better in that any controversial claim you make is likely to be torn to shreds.