As a naive example to make my point more clear:
“People are biased against working on AI Safety because it often seems weird to their families, so we should push more people to work on it”—I don’t actually believe this, but I am pointing out that we can find biases like these pushing in many different directions (so bias-hunting is probably not a good way to make decisions).
What do you think?
I think it’s reasonable to consider which biases are relevant, whether they actually matter, and what should be done to account for them. More specifically, AI safety sounding weird is something that people in EA have spent significant effort working to counteract.