It’s true for me, and others, that we became much more interested in the “rationality” project when we came to understand it as improving our altruism. Learning even the most common biases can quickly reveal how abysmal the selection process for interventions can be, as evidenced by more learned peers correcting others in ways that seem obvious in hindsight. Many of us gained motivation to understand rationality as an instrumental tool necessary for doing as much good as possible. I think the influence of CFAR and LessWrong on effective altruism is remarkable, considering that virtually every metacharity and its supporters I can think of use tools from Bayesian epistemology learned on LessWrong to explain the reasoning behind choices that are at odds with the conclusions of LessWrong’s parent organization, the Machine Intelligence Research Institute.
Your suggestions near the end of this article are some of the first showing how effective altruism may enhance rationality. As effective altruism reaches out to the world at large, discussing awareness of bias may become integral to how it spreads its message(s).
Mere awareness of commonly existing biases is insufficient to reduce them, and can induce people to rationalize their choices once they know about biases, since they figure they’ll no longer fall prey to them. Further, one can be motivated to accuse others of a specific named bias while ceasing to check one’s own thought for errors. I believe on LessWrong this is referred to as “the valley of bad rationality”, the metaphor being that you must make it through a low level of rationality for quite some time before reaching the peak(s) of clear thinking, as if we’re journeying up a mountain. I think lots of us are beyond this. I believe I am. Since dedicated altruists are so passionate, we’re willing to debate quite fervently for what we believe in, to win an argument with a predetermined conclusion, in an adversarial way rather than through collaborative truth-seeking. This has been the biggest problem in effective altruism thus far.
However, many hundreds of people who were passionate about a preselected policy when entering effective altruism sought out the community because they perceived its great potential, and were willing to adopt more epistemic humility, or at least mimic it in public, and learn more about other causes. Of the friends I’ve observed who haven’t changed their minds from the cause they came into effective altruism with, most seem able to engage disagreement at a higher level, with a greater grasp of the facts and without strawmanning positions counter to their own as often. I think doing all this is in some ways an imperative personal responsibility of effective altruists, to keep the movement from collapsing into disparate factions, none of which could alone continue raising the profile of effectiveness in do-gooding. While there are, and perhaps always will be, debates with too much vitriol in effective altruism, I think as long as veterans of zero-sum debates continue imploring fresher community members to temper their overconfidence, under pain of driving apart such a fragile but potent alliance, effective altruism will sustain itself.
Has all this enhanced the ability of some effective altruists to temper their own biases in domains or causes unrelated to effective altruism? I think so. However, I think we overcame initial ignorance, actually got worse in our uncalibrated passion as many young intellects do, and then came back to the zero level as we realized that more information without stricter habits of thought biased us more. Coming out of the valley of bad rationality leaves us at the base of the mountain. I form beliefs relating to politics with more lightness than before, putting less confidence in them and being willing to ditch them faster when faced with opposing evidence. I feel like I now know how to better avoid the worst ideas, but not how to find good ones. I don’t have policy prescriptions, I don’t know who to vote for, and I don’t have anything like a model which would derive, from what I want to happen, what I think societies should actually do, other than avoiding practices history has shown to be anti-effective, e.g., totalitarianism, as mentioned above.