No, I think that extends beyond what I’m saying. I am not proposing a categorical rule here.
However, the usual considerations of neglectedness and counterfactual analysis certainly apply. If someone outside of EA is likely to do the work at some future time, then the cost of an “error” is the utility loss caused by the delay between when we would have done it and when the non-EA actually does it. If developments outside EA convince us to change our minds, the utility loss is measured between now and the time we change our minds. I’ve seen at least one comment suggesting “HBD” is in the same ballpark as AI safety . . . but we likely only get one shot at the AGI revolution for the rest of human history. Even if one assumes p(doom) = 0, the effects of messing up AGI are much more likely to be permanent, or extremely costly to reverse or mitigate.
From a longtermist perspective, [1] I would assume that “we are delayed by 20-50 years in unlocking whatever benefit accepting scientific racism would bring” is a flash in the pan over a timespan of millions of years. In fact, those costs may be minimal, as I don’t think there would be a whole lot for EA to do even if it came to accept this conclusion. (I should emphasize that this is definitely not implying that scientific racism is true or that accepting it as true would unlock benefits.)
I do not identify as a longtermist, but I think it is even harder to come up with a theory of impact for scientific racism on neartermist grounds.