I don’t understand your model here. Do you think it’s wrong because it’s bad to actively work to counteract a bias, because you don’t think the bias exists, or because it will predictably lead to worse outcomes?
Because [actively working to correct for a bias] is less good than [figuring out what the correct unbiased answer should be]
Especially when the bias is “do X a bit more”
(There are probably other situations where I would or wouldn’t use this strategy, but TL;DR: deciding how many people should work on something like AI safety seems like a “figure out the correct solution” situation, not an “adjust slightly for biases” one. Do you agree with that part?)
As a naive example to make my point more clear:
“People are biased against working on AI safety because it often seems weird to their families, so we should push more people to work on it”—I don’t actually believe this, but I am saying that we can find biases like these pointing in many different directions (and so this is probably not a good way to make decisions).
What do you think?
I think it’s reasonable to consider which biases are relevant, whether they matter, and what should be done to account for them. In fact, AI safety sounding weird is definitely something that people in EA have spent significant effort working to counteract.