Hmm, good point that my examples are maybe too uncontroversial, which would make the comparison somewhat biased and unfair. Still, maybe I don't really understand what counts as controversial, but at the very least it's easy to come up with examples of conditionals that many people (and many EAs) likely place <50% credence on, yet which are still useful to have on the forum:
evaluating organophosphate pesticides and other neurotoxicants (implicit conditional: global health is a plausibly cost-competitive priority with other top EA priorities)
Factors for shrimp welfare (implicit conditional: shrimp are moral patients)
The asymmetry and the far future (implicit conditional: Asymmetry views, among others)
ways forecasting can be useful for the long-term future (implicit conditional: the long-term future matters in decision-relevant ways)
The AI timelines example, again (because mathematically you can’t have >50% credence in both long and short AI timelines)
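To spell out the arithmetic behind that last point: if we treat "short timelines" and "long timelines" as mutually exclusive hypotheses (a simplifying assumption; the labels and the implied cutoff are mine for illustration), the probability axioms force at least one credence below 50%:

$$
P(\text{short}) + P(\text{long}) \le 1
\quad\Rightarrow\quad
P(\text{long}) \le 1 - P(\text{short}) < 0.5 \;\text{ whenever }\; P(\text{short}) > 0.5.
$$

So whichever way someone leans, the other timelines hypothesis is automatically something they place <50% credence on, even though posts conditioning on it can still be useful.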
But perhaps "many people (and many EAs) likely place <50% credence on" is not a good operationalization of "controversial." In that case, it might be helpful to pin down what we actually mean by that word.
I think the relevant consideration here isn't whether a post is (implicitly or not) assuming controversial premises; it's the degree to which it's (implicitly or not) recommending controversial courses of action.
There’s a big difference between a longtermist analysis of the importance of nuclear nonproliferation and a longtermist analysis of airstrikes on foreign data centers, for instance.