I don’t agree with most of these points, though I appreciate you writing them up. Here are my thoughts on each of them, in turn:
Altruism implies a naive model of human cognition. I feel like this argument proves too much. If “altruism” is not a good concept because humans are inconsistent, why would “self-interest” be any less vulnerable to this criticism? It seems that you could even-handedly apply this criticism to any concept we might want to maximise, which ends up bringing everything back to neutral anyway.
Altruism as emergent from reward-seeking. This brings up a good point in my opinion, though perhaps not the same point you were making. Specifically, I think altruism is often poorly defined. On some level it’s obvious that people are altruistic because of self-interest. But it also seems to me that if your view of what you want the world to look like includes other people’s preferences, and you make non-trivial sacrifices (e.g., donating 10%) to meet those preferences, that should certainly count as altruism, even if you’re doing it because you want to.
Need for self / other distinction. I’m not actually following this one, so I won’t comment on it.
Information asymmetry. Perfectly true—if all humans were roughly equally well-off, the optimal thing to do would be to focus on yourself. However, this is not the case. I may understand more about my preferences than I understand about the preferences of someone in Bangladesh earning $2/day, but I can reasonably predict that a marginal $20 would help them more than it would help me. Thus, it seems totally reasonable that there are ways you can help others even with less information on their internal states.
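To spell out the arithmetic behind that prediction (a toy model: logarithmic utility of income, with $60,000/year assumed as a stand-in for the donor’s income, neither of which the comment above commits to): $2/day is roughly $730/year, and under log utility the marginal value of a dollar scales as 1/income, so

$$u(c) = \ln c \;\Rightarrow\; u'(c) = \tfrac{1}{c}, \qquad \frac{u'(730)}{u'(60{,}000)} = \frac{60{,}000}{730} \approx 82.$$

On this toy model, the same marginal $20 is worth roughly eighty times more to the recipient than to the donor.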
Game-theoretic perspective. This argument is just confusing to me. Your first sentence says that self-interested agents can co-operate for everyone’s benefit, and your second sentence says that altruistic groups may behave suboptimally. Well...so might self-interested agents! “Can” does not mean “will”. You’ve done some sleight of hand here: you say that self-interested agents can sometimes co-ordinate optimally, then you say that altruistic groups do not always co-ordinate optimally, and then you use that to imply that self-interested groups are better. You haven’t actually shown that self-interested groups are more effective in general, merely that it’s possible, in some cases (1 in 10? 1 in 100? 1 in 1000?), for a self-interested group to outperform an altruistic one.
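As a concrete illustration of “can” not meaning “will” (the textbook one-shot prisoner’s dilemma, used here as a standard example rather than anything from the original post): with the payoffs

$$\begin{array}{c|cc} & \text{Co-operate} & \text{Defect} \\ \hline \text{Co-operate} & (3,3) & (0,5) \\ \text{Defect} & (5,0) & (1,1) \end{array}$$

Defect strictly dominates for each self-interested player, so both defect and end up at (1, 1), even though mutual co-operation at (3, 3) would have left everyone better off. Self-interested agents can co-ordinate optimally in some games; in others they predictably fail to.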
Human nature. Humans aren’t hardwired to care about spreadsheets, or to build rockets, or to program computers. One of the greatest things about humans, in my mind, is our ability to transcend our nature. I view evolutionary psychology as a useful field in the same way that sitcoms are useful for romance advice—they give solid advice on what not to do, or what pitfalls to watch out for. I am naturally wired for self-interest...so I should watch out for that.
Also...I’m not sure if we can do this and still keep what makes the movement great. In the end, effective altruism is about trying to improve the world, and that requires thinking beyond oneself, even if that’s hard and we’re wired to do otherwise. I don’t think I’m likely to be convinced that donating 10% of my income to people I’ll never see is actually in my own self-interest, and yet I do it anyway. There are absolutely positives to being part of the movement from the point of view of self-interest, and those are good to smuggle along to get your monkey-brain on board. Nevertheless—if you’re focused on self-interest, that limits a lot of what you can do to improve the world compared to having that goal directly. So I think altruism is still very important.
Thanks for these comments, will think about them! I particularly liked “if your view of what you want the world to look like includes other people’s preferences, and you make non-trivial sacrifices (e.g., donating 10%) to meet those preferences, that should certainly count as altruism, even if you’re doing it because you want to”. This seems like a more useful and practical framing of the concept: based on behaviours rather than internal motivations.
Altruism is not necessarily knowable only through observing altruistic behavior in the wild or through introspecting on altruistic intentions. “Altruism” also serves to label a goal in planning: you want your actions to have consequences you would call “altruistic”, as opposed to, say, “harmful”.
Which actions actually achieve altruistic consequences is subject to debate and to intersubjective validation of those consequences in context. My altruistic actions are not revealed just by examining my inner motives or my history of personal behavior that I call “altruistic”; I need a causal model that links my actions to consequences that satisfy the description “altruistic”.
More simply, you can distinguish what you intend by your actions from what you actually cause, and ask in each case whether the consequences are altruistic. An emphasis on altruism is then an emphasis on outcomes, not on personal motives for behavior. Effective altruism is then an outcome-oriented set of behaviors, subject to feedback about their consequences in a process like the following (a toy code sketch appears after the list):
1. analyzing consequences
2. planning actions
3. executing actions
4. reviewing consequences
5. and back to step 1
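A toy sketch of that loop in code, to make the feedback structure concrete (every function name and number here is a hypothetical placeholder for illustration, not a real EA tool, metric, or dataset):

```python
# Minimal, self-contained sketch of the analyze/plan/execute/review loop.
# The outcome metric, target, and effect sizes are invented for illustration.

TARGET = 100.0  # hypothetical target value for an outcome metric

def analyze(outcome: float) -> float:
    """Step 1: estimate the gap between current and target outcomes."""
    return TARGET - outcome

def plan(gap: float) -> float:
    """Step 2: size an intervention to the gap (effort capped per cycle)."""
    return min(gap, 10.0)

def execute(outcome: float, effort: float) -> float:
    """Step 3: apply the intervention; its effect is imperfect by assumption."""
    return outcome + 0.8 * effort

def review(outcome: float) -> float:
    """Step 4: measure the resulting outcome, which feeds back into step 1."""
    return outcome

outcome = 40.0
for cycle in range(1, 6):
    gap = analyze(outcome)                       # step 1
    effort = plan(gap)                           # step 2
    outcome = review(execute(outcome, effort))   # steps 3 and 4, back to 1
    print(f"cycle {cycle}: outcome metric = {outcome:.1f}")
```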
That’s just a process model, similar to ones used in business process analyses of various sorts. EA folks care a lot about metrics, QALYs (quality-adjusted life years) or WELLBYs (wellbeing-adjusted life years) or whatever; you can look into them. The structure of EA institutions mirrors those of other nonprofits, with similar restraints on personal action, I would think. As a cog in an EA machine, your actual role might not feel like self-actualization of altruistic aspirations, but either way the outcome is the same.