Bias and irrationality are huge problems today. Should I make an effort to do better? Yes. Should I trust myself? No – at least as little as possible. It’s better to assume I will fail sometimes and design around that. E.g. what policies would limit the negative impact of the times I am biased? What constraints or rules can I impose on myself so that my irrationalities have less impact?
So when I see an answer like “I think people [at EA] try pretty hard [… to be rational]”, I find it unsatisfactory. Trying is good, but I think planning for failures of rationality is needed. Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.
Following written debate methods is one way to reduce the impact of bias and irrationality. I might be very biased but not find any loophole in the debate rules that lets my bias win. Similarly, transparency policies help reduce the impact of bias – when I don’t have the option to hide what I’m doing, and I have to explain myself, then I won’t take some biased actions because I don’t see how to get away with them (or I may do them anyway, get caught, and be overruled so the problem is fixed).
We should develop as much rationality and integrity as we can. But I think we should also work to reduce the need for personal rationality and integrity by building some rationality and integrity into rules and policies. We should limit our reliance on personal rationality and integrity. Explicit rules and policies, and other constraints against arbitrary action, help with that.
> Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.
I think this is possible, but it will mostly come from arrogance and from ignoring big rationality failures after getting small wins.
> I might be very biased but not find any loophole in the debate rules that lets my bias win.
For example, you can wear your busier (and possibly more knowledgeable) interlocutors down with boredom.
> We should develop as much rationality and integrity as we can. But I think we should also work to reduce the need for personal rationality and integrity by building some rationality and integrity into rules and policies.
I agree that relying entirely on personal rationality/integrity is not sufficient. To make up for individual failings, I feel more optimistic about cultural and maybe technological shifts than about rules and policies. Top-down rules and policies especially feel a bit suss to me, given their lack of a track record.