Most truths have ~0 effect on any action plausibly within EA’s purview. This could be because knowing that X is true and Y is not (as opposed to being uncertain, or even wrong, about X or Y) just doesn’t change any important decision. It can also be because the important action that a truth would influence or enable is outside of EA’s competency for some reason. E.g., if no one with enough money is willing to throw it at a campaign for Joe Smith, finding out that he is the presidential candidate who would usher in the Age of Aquarius isn’t actually valuable.
As relevant to the scientific racism discussion, I don’t see the existence or non-existence of the alleged genetic differences in IQ distributions by racial group as relevant to any action that EA might plausibly take. If some being told us the answers to these disputes tomorrow (in a way that no one could plausibly controvert), I don’t think the course of EA would be different in any meaningful way.
More broadly, I’d note that we can (ordinarily) find a truth later if we did not expend the resources (time, money, reputation, etc.) to find it today. The benefit of EA devoting resources to finding truth X will generally be that truth X was discovered sooner, and that we got to start using it to improve our decisions sooner. That’s not small potatoes, but it generally isn’t appropriate to weigh the entire value of the candidate truth for all time when deciding how many resources (if any) to throw at it. Moreover, it’s probably cheaper to produce scientific truth Z twenty years in the future than it is now. In contrast, global-health work is probably most cost-effective in the here and now, because in a wealthier world the low-hanging fruit will be plucked by other actors anyway.
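One rough way to formalize this, with notation I’m inventing here: let $v(t)$ be the rate at which knowing truth X improves our decisions at time $t$. If EA could establish X by $t_{\mathrm{EA}}$, but someone else would establish it by $t_{\mathrm{other}}$ anyway, the counterfactual benefit of EA doing the work is only the acceleration window:

$$\Delta V \approx \int_{t_{\mathrm{EA}}}^{t_{\mathrm{other}}} v(t)\,dt,$$

not the truth’s entire value $\int_{t_{\mathrm{EA}}}^{\infty} v(t)\,dt$.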
What I currently take from this is that you think if we start some work which seems unpopular or controversial, we should stop because we can discover it later?
If not, how much work should we do before we decide it’s not worth the reputation cost to discuss it carefully?
No, I think that extends beyond what I’m saying. I am not proposing a categorical rule here.
However, the usual considerations of neglectedness and counterfactual analysis certainly apply. If someone outside of EA is likely to do the work at some future time, then the cost of an “error” is the utility loss caused by the delay between when we would have done it and when it is done by the non-EA. If developments outside EA convince us to change our minds, the utility loss is measured between now and the time we change our minds. I’ve seen at least one comment suggesting “HBD” is in the same ballpark as AI safety . . . but we likely only get one shot at the AGI revolution for the rest of human history. Even if one assumes p(doom) = 0, the effects of messing up AGI are much more likely to be permanent or extremely costly to reverse or mitigate.
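In the same made-up notation as above: a recoverable error costs roughly $\int_{t_{\mathrm{error}}}^{t_{\mathrm{recovery}}} \ell(t)\,dt$ for some loss rate $\ell(t)$, while a permanent one costs $\int_{t_{\mathrm{error}}}^{T} \ell(t)\,dt$ over the whole remaining horizon $T$. With $T$ measured in millions of years, the second integral can dwarf the first even for a modest loss rate, which is why a one-shot, hard-to-reverse domain like AGI sits in a different category from a question we could simply revisit later.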
From a longtermist perspective, [1] I would assume that “we are delayed by 20-50 years in unlocking whatever benefit accepting scientific racism would bring” is a flash in the pan over a timespan of millions of years. In fact, those costs may be minimal, as I don’t think there would be a whole lot for EA to do even if it came to accept this conclusion. (I should emphasize that this is definitely not implying that scientific racism is true or that accepting it as true would unlock benefits.)
I do not identify as a longtermist, but I think it’s even harder to come up with a theory of impact for scientific racism on neartermist grounds.