I think people try pretty hard to come to accurate answers given the information available, and have inherited or come up with various tools for this (e.g. probabilistic forecasting). Whether that counts as “rationality” or not depends a lot on your definitions of what it means to be rational, and how low your bar is.
I don’t think we’re perfectly rational, and there’s an argument that we aren’t investing as many resources as would be optimal in rationality- or epistemics-enhancing interventions. But it’s pretty hard to answer a broad question like “How Is EA Rational?”, and I don’t think the crux is a specific form of argument mapping that we use or don’t use.
At face value, the answer is something like: we’re reasonably good at coming to accurate enough answers to hard-ish questions. Whether this is “good enough” depends on whether “accurate enough” is good enough, how hard the questions we ultimately want to solve are, and whether (and how much) we can do better given the resources available.
But I don’t think this is exactly what you’re asking. In sum, I don’t think “is X rational” has a binary answer.
Bias and irrationality are huge problems today. Should I make an effort to do better? Yes. Should I trust myself? No – at least as little as possible. It’s better to assume I will fail sometimes and design around that. E.g. what policies would limit the negative impact of the times I am biased? What constraints or rules can I impose on myself so that my irrationalities have less impact?
So when I see an answer like “I think people [at EA] try pretty hard [… to be rational]”, I find it unsatisfactory. Trying is good, but I think planning for failures of rationality is needed. Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.
Following written debate methods is one way to reduce the impact of bias and irrationality. I might be very biased but not find any loophole in the debate rules that lets my bias win. Similarly, transparency policies help reduce the impact of bias – when I don’t have the option to hide what I’m doing, and I have to explain myself, then I won’t take some biased actions because I don’t see how to get away with them (or I may do them anyway, get caught, and be overruled so the problem is fixed).
We should develop as much rationality and integrity as we can. But I think we should also work to reduce the need for personal rationality and integrity by building some rationality and integrity into rules and policies. We should limit our reliance on personal rationality and integrity. Explicit rules and policies, and other constraints against arbitrary action, help with that.
“Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.”
I think this is possible, but it will mostly come from arrogance and from ignoring big rationality failures after getting small wins.
“I might be very biased but not find any loophole in the debate rules that lets my bias win.”
For example, you can wear your busier (and possibly more knowledgeable) interlocutors down with boredom.
“We should develop as much rationality and integrity as we can. But I think we should also work to reduce the need for personal rationality and integrity by building some rationality and integrity into rules and policies.”
I agree that relying entirely on personal rationality/integrity is not sufficient. To make up for individual failings, I feel more optimistic about cultural and maybe technological shifts than rules and policies. Top-down rules and policies especially feel a bit suss to me, given the lack of a track record.