Whether gender diversity is something to strive for is beyond this discussion. We will simply assume that it is and go from there. You could for example check out these posts (1, 2, 3) for a discussion on (gender) diversity if you want to read about this or discuss it.
I am generally skeptical of people attempting to preempt criticism on a controversial topic by declaring it out of scope for the discussion. While I understand the desire to make progress on an issue, establishing that the issue is actually worth addressing is important! You attempt to pass this evidential burden off to the linked posts, but it’s worth noting that at least the first two were generally regarded as quite weak at the time they were written (I haven’t seen any significant discussion of the third, and comments are not enabled).
The first, ‘Why & How to Make Progress on Diversity & Inclusion in EA’ (41 upvotes, 231 comments), attempted to lay out the scientific evidence for why gender ratio balance was important, but did so in a really quite disappointing way. Evidence was quoted selectively, studies were mis-described, other results failed to replicate, and the major objections to the thesis (e.g. publication bias in the literature) were not considered. Overall the post was sufficiently weak that I think this comment was a fair summary: ‘I am disinclined to be sympathetic when someone’s problem is that they posted so many bad arguments all at once that they’re finding it hard to respond to all the objections.’
The second, ‘In diversity lies epistemic strength’ (10 upvotes, 25 comments), argues that we should try to make the community more demographically diverse because then we’d benefit from a wider range of perspectives. But this instrumental argument relies on a key premise, that “we cannot measure the diversity of perspectives of a person directly, [so] our best proxy for it is demographic diversity”, which seems clearly false: there are many better proxies, such as simply asking people what their perspectives are and comparing them to those of the people you already have.
I’m open to gender diversity promotion actually being a worthwhile project and cause area. But I think we should reject an argumentative strategy of relying on posts that received a lot of justified criticism in an attempt to avoid evidential burden.
In the case of this post I actually think the issue is even clearer. Some of your proposals, like adopting “the patriarchy” as a cause area, or rejecting impartiality in favour of an “ethics of care”, are major and controversial changes. EA has been, up until now, dedicated to evaluating causes purely based on their cost-effectiveness on impartial grounds, and not based on how they would influence the PR or outreach for the EA movement. We have conspicuously not adopted cause areas like abortion as a focus, even though doing so might help attract extremely under-represented groups (e.g. religious people, conservatives), because people do not think it is a highly cost-effective cause area, and I think this is the right decision. Suggesting we should adopt a cause that we would not otherwise have chosen—that PR/outreach benefits should be considered in the cause area evaluation process, comparable to scope/neglectedness/tractability—requires a lot of justification. And yet the issues that we have to discuss to provide that justification (e.g. how large are the benefits of gender balance, how large are the costs, and how cost-effective are the interventions?) fall squarely within the topics you have declared verboten.
I think it’s permissible/reasonable/preferable to have forum posts or discussion threads of the rough form “Conditional upon X being true, what are the best next steps?” I think it is understandable for such posters to not wish to debate whether X is true in the comments of the post itself, especially if it’s either an old debate or otherwise tiresome.
For example, we might want to have posts on:
what people should do in short AI timelines scenarios, without explicitly arguing for why AI timelines are short
conversely, what people should do in long AI timelines scenarios, without explicitly arguing for why AI timelines are long
the best ways to reduce factory farming, without explicitly arguing for why factory farming is net negative
how to save children’s lives, without explicitly engaging with the relevant thorny population ethics questions
what people should do to reduce nuclear risk, without explicitly arguing for why reducing nuclear risk is the best use of limited resources
research and recommendations on climate change, without explicitly engaging with whether climate change is net positive
I mostly agree with this, and don’t even think X in the bracketing “conditional upon X being true” has to be likely at all. However, I think this type of question can become problematic if the bracketing is interpreted in a way that inappropriately protects proposed ideas from criticism. I’m finding it difficult to put my finger precisely on when that happens, but here is a first stab at it:
“Conditional upon AI timelines being short, what are the best next steps?” does not inappropriately protect anything. There is a lively discussion of AI timelines in many other threads. Moreover, every post impliedly contains as its first sentence something like “If AI timelines are not short, what follows likely doesn’t make any sense.” There are also potential general criticisms like “we should be working on bednets instead” . . . but these are pretty obvious and touting the benefits of bednets really belongs in a thread about bednets instead.
What we have here—“Conditional upon more gender diversity in EA being a good thing, what are the best next steps?”—is perfectly fine as far as it goes. However, unlike the AI timelines hypothetical, shutting out criticisms that question the extent to which gender diversity would be beneficial risks inappropriately protecting proposed ideas from evaluation and criticism. I think that is roughly in the neighborhood of the point @Larks was trying to make in the last paragraph of the comment above.
The reason is that the ideas proposed in response to this prompt are likely to have both specific benefits and specific costs/risks/objections. Where specific costs/risks/objections are involved—as opposed to general ones like “this doesn’t make sense because AGI is 100+ years away” or “we’d be better off focusing on bednets”—the bracketing has the potential to be more problematic. People should be able to perform a cost/benefit analysis, and here that requires (to some extent) evaluating how beneficial having more gender diversity in EA would be. And there is not a range of other threads evaluating the benefits and costs of (e.g.) adding combating the patriarchy as an EA focus area, so banishing those evaluations from this thread poses a higher risk of suppressing them.
Thank you, this explanation makes a lot of sense to me.
Fwiw, I think your examples are all based on less controversial conditionals, which makes them less informative here. I also think the topics conditioned on in your examples have already received sufficient analysis, which makes me less worried about people making things worse*, since they will be aware of more of the relevant considerations, in contrast to the treatment in the background discussions that Larks discussed.
*(except the timelines example, which still feels slightly different, as everything about AI strategy seems fairly uncertain)
Hmm, good point that my examples are maybe too uncontroversial, which makes the comparison somewhat biased and unfair. Still, maybe I don’t really understand what counts as controversial, but at the very least, it’s easy to come up with examples of conditionals that many people (and many EAs) likely place <50% credence on but that are still useful to have on the forum:
evaluating organophosphate pesticides and other neurotoxicants (implicit conditional: global health is a plausibly cost-competitive priority with other top EA priorities)
Factors for shrimp welfare (implicit conditional: shrimp are moral patients)
The asymmetry and the far future (implicit conditional: Asymmetry views, among others)
ways forecasting can be useful for the long-term future (implicit conditional: the long-term future matters in decision-relevant ways)
The AI timelines example, again (because mathematically you can’t have >50% credence in both long and short AI timelines)
But perhaps “many people (and many EAs) likely place <50% credence on” is not a good operationalization of “controversial.” In that case maybe it’d be helpful to operationalize what we mean by that word.
I think the relevant consideration here isn’t whether a post is (implicitly or not) assuming controversial premises, it’s the degree to which it’s (implicitly or not) recommending controversial courses of action.
There’s a big difference between a longtermist analysis of the importance of nuclear nonproliferation and a longtermist analysis of airstrikes on foreign data centers, for instance.
Hi Larks, thank you for taking the time to articulate your concerns! I will respond to a few below:
Concern 1: passing off evidential burden
• I agree it would have been preferable if we had made a solid case in this post for why gender diversity is important.
-> To explain this choice: we did not feel we could do this topic justice in the limited time we had available, so we decided to prioritize sharing the information in this post instead. Another reason for focusing on the content of the post above is that we had a somewhat rare opportunity to get this many people’s input on the topic all at once—which I would say gave us some comparative advantage for writing about this rather than about why/whether gender diversity is important.
• As you specifically mention that you think “relying on posts that received a lot of justified criticism” is a bad idea, do you have suggestions for different posts that you found better?
Concern 2: “Some of your proposals, like adopting “the patriarchy” as a cause area, or rejecting impartiality in favour of an “ethics of care”, are major and controversial changes”
• Something I’d like to point out here: these are not our proposals. As we mention in the post, ‘The views we describe in this post don’t necessarily correspond with our (Veerle Bakker’s & Alexandra Bos’) own but rather we are describing others’ input.’ For more details on this process, I’d recommend taking a look at the Methodology & Limitations if you haven’t already.
-> Overall, I think the reasons you mention for not taking on the proposals under ‘Adjusting attributes of EA thought’ are very fair and I probably agree with you on them.
• A second point regarding your concern: I think you are conflating the underlying causes that participants suspect lie behind the gender gap with the solutions they propose.
However, saying ‘X might be the cause of problem Y’ is not the same as saying ‘we should do the opposite of X so that problem Y is solved’.
Therefore, I don’t feel that, for instance, your claim that a proposal in this post was to adopt “the patriarchy” as cause area fairly represents the written content. What we wrote is that “One of these topics is how EA does not focus specifically on gender inequality issues in its thinking (e.g. ‘the patriarchy’ is not a problem recommended to work on by the EA community).” This is a description of a concern some of the participants described, not a solution they proposed. The same goes for your interpretation that the proposal is “rejecting impartiality in favour of an ethics of care”.
Larks, I think you’re conflating 2 different things:
1) discussing whether something closer to gender parity (I think that’s a more precise word than ‘diversity’ in this context) is desirable at all,
versus
2) discussing whether some particular step to promote it is worth the cost.
It’s only the first (EDIT: I originally wrote ‘second’, sorry) that the post says it’s not focused on.
This is quite important, because finding ‘closer to gender parity is better, all things being equal’ “controversial” is quite different from finding some specific level of prioritization of gender parity, or some particular argument for why it is good, controversial. It’s hard to tell which of these you are saying is “controversial”, and it really affects how strongly anti-feminist what you are saying is.
Though in fairness, it is certainly hard to discuss whether any individual proposal is worth its (alleged) costs without getting into how much value to place on gender parity relative to other things.
If this feels like a controversial topic, maybe this discussion isn’t intended for you.