I mostly agree with this, and don’t even think X in the bracketing “conditional upon X being true” has to be likely at all. However, I think this type of question can become problematic if the bracketing is interpreted in a way that inappropriately protects proposed ideas from criticism. I’m finding it difficult to put my finger precisely on when that happens, but here is a first stab at it:
“Conditional upon AI timelines being short, what are the best next steps?” does not inappropriately protect anything. There is a lively discussion of AI timelines in many other threads. Moreover, every post impliedly contains as its first sentence something like “If AI timelines are not short, what follows likely doesn’t make any sense.” There are also potential general criticisms like “we should be working on bednets instead” . . . but these are pretty obvious and touting the benefits of bednets really belongs in a thread about bednets instead.
The question here ("Conditional upon more gender diversity in EA being a good thing, what are the best next steps?") is perfectly fine as far as it goes. However, unlike the AI timelines hypo, shutting out criticisms that question the extent to which gender diversity would be beneficial risks inappropriately protecting proposed ideas from evaluation and criticism. I think that is roughly in the neighborhood of the point @Larks was making in the last paragraph of the comment above.
The reason is that the ideas proposed in response to this prompt are likely to have both specific benefits and specific costs/risks/objections. Where specific costs/risks/objections are involved (as opposed to general ones like "this doesn't make sense because AGI is 100+ years away" or "we'd be better off focusing on bednets"), bracketing has the potential to be more problematic. People should be able to perform a cost/benefit analysis, and here that requires (to some extent) evaluating how beneficial more gender diversity in EA would be. And there isn't a range of other threads evaluating the benefits and costs of (e.g.) adding combating the patriarchy as an EA focus area, so banishing those evaluations from this thread poses a higher risk of suppressing them.
Thank you, this explanation makes a lot of sense to me.