Indeed, effective altruism makes some people more likely to act like this by providing ready-made rationalisations which treat them as working towards overwhelmingly important ends, and indeed as vastly important figures whose productivity must be bolstered at all costs. I’ve seen prominent EAs use these justifications for actions that would shock people in more normal circles.
I, and others, have noticed myself making such rationalizations. A friend of mine who interned at an effective altruist organization in Oxford also reported several anecdotes of our allies there doing the same. I consider myself equally part of effective altruism and of ‘more normal circles’, and I was shocked at myself and at others. Then I read more LessWrong, and made an effort to learn more about moral psychology and ethics. I’ve concluded that if consequentialism as practiced by humans invariably leads to, e.g., repugnant conclusions and unintended consequences, then, while consequentialism might be true, humans aren’t equipped for it. So, in practice, we’re going to fail our ideals.
One thing LessWrong has impressed upon me is that humans, including rationalists and effective altruists, can and will rationalize anything, including their values. I realized it’s especially crucial for us to protect against such rationalizations because we may be more prone to them, being in an intellectual bubble of over-confidence and self-aggrandizement. Also, if we betray our own values, the hypocrisy we reveal to ourselves feels far more damning than hypocrisy observed in others. Aspiring to effective altruism carries a kernel of integrity and commitment that is spoiled for the movement as a whole if we make false ‘special exceptions’ for our own (otherwise bad) behavior.
In practice, this has made me not want to identify as a consequentialist[1]. At the very least, I’d want supporters of effective altruism to personally adhere to a form of act or rule utilitarianism, and I’d want a moratorium on making special exceptions for themselves (or ourselves, myself included) to “bolster our productivity at all costs”. I really believe that rationalizing otherwise reprehensible behavior on consequentialist grounds, combined with the over-confidence that can come with effective altruism, is a slippery slope whose costs become too high if we start rationalizing ever-worse behavior.
Outside of the circles around LessWrong and effective altruism, I don’t call myself a consequentialist as much, because I don’t want explicitly non-consequentialist peers to mistake me as being in total agreement with, e.g., utilitarianism. Inside these circles, I feel more comfortable expressing my actual sympathy for consequentialism. I’m not confident consequentialism should be the ultimate means of determining our moral actions. However, it seems to me a good set of heuristics, alongside other moral traditions, for determining which actions are “right” when moral intuition or dogma is otherwise inadequate.