One man’s bias is another’s intrinsic value, at least for “normative” biases like scope insensitivity, status-quo bias, and failure to aggregate. But at least I understand your meaning better. :) Most of LessWrong is not hedonistic utilitarian (most people there are more preference utilitarian or complexity-of-value consequentialist), so one might wonder why other people who think a lot about overcoming those normative biases aren’t hedonistic utilitarians.
Of course, one could give people the experience of having grown up in a culture that valued paperclips, of meeting the Great Paperclip in the Sky and hearing him tell them that paperclips are the meaning of life, and so on. These might “naturally” incline people to intrinsically value paperclips. But I agree there seem to be some differences between this case and the pleasure case.
I’m glad that comment was useful. :) I think it’s unfortunate that it’s so often assumed that “human-controlled AI” means something like CEV, when in fact CEV seems to me a remote possibility.
I don’t know that you should downshift your estimate of your ability to reason about the far future that much. :) Over time you’ll hear more and more perspectives, which can help challenge previous assumptions.
Thanks!
Simple: just because LessWrongers know that these biases exist doesn’t mean they’re immune to them.
It was already pretty low; this is just an example of why I think it should be low.