I think it’s permissible/reasonable/preferable to have forum posts or discussion threads of the rough form “Conditional upon X being true, what are the best next steps?” I think it is understandable for such posters to not wish to debate whether X is true in the comments of the post itself, especially if it’s either an old debate or otherwise tiresome.
For example, we might want to have posts on:
what people should do in short AI timelines scenarios, without explicitly arguing for why AI timelines are short
conversely, what people should do in long AI timelines scenarios, without explicitly arguing for why AI timelines are long
the best ways to reduce factory farming, without explicitly arguing for why factory farming is net negative
how to save children’s lives, without explicitly engaging with the relevant thorny population ethics questions
what people should do to reduce nuclear risk, without explicitly arguing for why reducing nuclear risk is the best use of limited resources
research and recommendations on climate change, without explicitly engaging with whether climate change is net positive
I mostly agree with this, and don’t even think X in the bracketing “conditional upon X being true” has to be likely at all. However, I think this type of question can become problematic if the bracketing is interpreted in a way that inappropriately protects proposed ideas from criticism. I’m finding it difficult to put my finger precisely on when that happens, but here is a first stab at it:
“Conditional upon AI timelines being short, what are the best next steps?” does not inappropriately protect anything. There is a lively discussion of AI timelines in many other threads. Moreover, every post impliedly contains as its first sentence something like “If AI timelines are not short, what follows likely doesn’t make any sense.” There are also potential general criticisms like “we should be working on bednets instead” . . . but these are pretty obvious and touting the benefits of bednets really belongs in a thread about bednets instead.
What we have here (“Conditional upon more gender diversity in EA being a good thing, what are the best next steps?”) is perfectly fine as far as it goes. However, unlike the AI timelines hypothetical, shutting out criticisms that question the extent to which gender diversity would be beneficial risks inappropriately protecting proposed ideas from evaluation and criticism. I think that is roughly in the neighborhood of the point @Larks was trying to make in the last paragraph of the comment above.
The reason is that the ideas proposed in response to this prompt are likely to have both specific benefits and specific costs/risks/objections. Where specific costs/risks/objections are involved—as opposed to general ones like “this doesn’t make sense because AGI is 100+ years away” or “we’d be better off focusing on bednets”—bracketing has the potential to be more problematic. People should be able to perform a cost/benefit analysis, and here that requires (to some extent) evaluating how beneficial having more gender diversity in EA would be. And there isn’t a range of threads evaluating the benefits and costs of (e.g.) adding combating the patriarchy as an EA focus area, so banishing those evaluations from this thread poses a higher risk of suppressing them.
Thank you, this explanation makes a lot of sense to me.
Fwiw, I think your examples are all based on less controversial conditionals, though, which makes them less informative here. I also think the topics conditioned on in your examples have already received enough analysis that I’m less worried about people making things worse*, since they will be aware of more of the relevant considerations, in contrast to the treatment in the background discussions that Larks described.
*(except the timelines example, which still feels slightly different, since everything about AI strategy seems fairly uncertain)
Hmm, good point that my examples are maybe too uncontroversial, so the comparison is somewhat biased and not fair. Still, maybe I don’t really understand what counts as controversial, but at the very least it’s easy to come up with examples of conditionals that many people (and many EAs) likely place <50% credence on, but that are still useful to have on the forum:
evaluating organophosphate pesticides and other neurotoxicants (implicit conditional: global health is plausibly cost-competitive with other top EA priorities)
Factors for shrimp welfare (implicit conditional: shrimp are moral patients)
The Asymmetry and the far future (implicit conditional: Asymmetry views, among others)
ways forecasting can be useful for the long-term future (implicit conditional: the long-term future matters in decision-relevant ways)
The AI timelines example, again (because mathematically you can’t have >50% credence in both long and short AI timelines)
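(To spell out the arithmetic behind that last parenthetical, assuming “short” and “long” timelines are defined as mutually exclusive scenarios: credences over mutually exclusive scenarios must satisfy P(short) + P(long) ≤ 1, so if P(short) > 0.5 then P(long) < 0.5, and vice versa; at most one of the two conditionals can get majority credence.)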
But perhaps “many people (and many EAs) likely place <50% credence on” is not a good operationalization of “controversial.” In that case maybe it’d be helpful to operationalize what we mean by that word.
I think the relevant consideration here isn’t whether a post is (implicitly or not) assuming controversial premises, it’s the degree to which it’s (implicitly or not) recommending controversial courses of action.
There’s a big difference between a longtermist analysis of the importance of nuclear nonproliferation and a longtermist analysis of airstrikes on foreign data centers, for instance.