Firstly, I should point out that there is an understanding of the power of both stories and emotions in EA already—see here for an example.
Secondly, I think you’re associating optimisation with a set of concepts (spreadsheets, data-oriented decision-making, technology) rather than its actual meaning, which is to maximise what we can do with our resources. If you associate optimisation with these concepts, it’s possible to follow it off a cliff. When people talk about “overoptimising”, this is what I think they mean—applying optimisation-as-concept to the point where it actually becomes less optimal overall. Maximising our resources does mean taking into account the importance of human connection and acting accordingly—applying technology and outcome-oriented thinking as far as they make things better, but no further. If you “optimise” to the point of turning people away or driving them to quit the movement in disillusionment, that’s not optimising. The process is not the goal.
Thirdly, you mention the following:
Many altruistic organizations “fall back into modeling the oppressive tendencies against which we claim to be pushing…. Many align with the capitalistic belief that constant growth and critical mass is the only way to create change” (2017). A major flaw of Effective Altruism lies in its lack of reimagination, its focus on reducing suffering within the capitalist system as it exists rather than conceptualizing a new system in which oppression is reduced and eventually eliminated.
There are quite a few unspoken assumptions in these sentences—assumptions that are common in progressive circles, but that not everyone in EA shares. Primarily, you assume that EA’s focus is on “reducing suffering within the capitalist system”. This is not how I personally view EA’s mission, and I’m not alone in this. I view EA’s mission as reducing suffering and helping people, period, regardless of what caused their problems in the first place. For instance, I don’t see malaria and schistosomiasis (worms) as “suffering within the capitalist system”, but rather suffering caused by nature. It’s possible that colonialism exacerbated this by keeping Africa poor and unable to fight off diseases that Western countries have essentially eliminated, but it is important to understand that this is not a necessary precondition for us to oppose that suffering. I would still support malaria prevention even if it were proven that capitalism and colonialism had absolutely nothing to do with the proliferation of malaria in sub-Saharan Africa. Something doesn’t have to be labelled as oppression for it to be worth fixing.
Now that we’ve established this, it follows that calling something capitalistic does not automatically make it bad in the eyes of effective altruism. It is not a failure of imagination that causes many EAs to oppose replacing capitalism with a new system—it is a legitimate difference in how they view the world. EA is not ethically homogeneous—for every point I make, I’m sure you can find people who identify as EA and disagree with me. That said, there is definitely an existing viewpoint similar to the one I have described, and I believe enough EAs fall within this cluster that they must be taken into account.
Finally, you mention several problems you see within EA, but I don’t see concrete ideas for fixing them. You mention fixes on the conceptual level, like “We must recognize that people hold value simply by existing, that time is a tool with which to cultivate meaningful social change rather than a good to be commodified.”, but—how exactly do you propose people do that? Let’s say I’m a community builder who is convinced by your message. What can I do, this week, to begin moving in that direction? What might I be doing that’s currently harmful, and what can I replace it with?
To summarise:
- Optimisation means “doing the most good”, not “applying techniques generally associated with optimisation”. We should be careful not to confuse the goal with the process, since, as you have noticed, excessive application of the process can move us further from the goal.
- Many in EA do not view the world primarily in terms of oppressor and oppressed. Many, like myself, view the world primarily as “Humanity versus suffering” where suffering often comes from the way nature and the universe happened to be laid out. Social structures didn’t cause malaria, or aging, or cholera. You don’t have to agree with, or even engage with, this viewpoint to be in EA. But if you wish to change the minds of people in EA with this viewpoint, it would help to understand this viewpoint deeply.
- Concrete steps to help fix a problem that you’ve identified will make for more valuable feedback.
As someone who intuitively relates to Maya & can understand where they’re coming from, I really enjoyed your comment. In particular, I thought your point on “maximising our resources does mean taking into account the importance of human connection and acting accordingly” was eloquently articulated.
I will note, however, that this frame isn’t wholly satisfactory to me, as it can lead me to view self-care etc. only as instrumental to the goal of optimizing for impact. While this is somewhat addressed by the post Aiming for the minimum of self-care is dangerous, this outcome-focused frame (e.g., “self-care is necessary to sustainably make impact”) still leads me to feel like I have no value outside of the impact I can have, and ties my self-worth too much to consequences.
But I know this isn’t a problem for everyone—maybe this is just because I don’t identify as a consequentialist, or because of my mental health issues! Regardless, I appreciated your thorough response to this post.
While the point I was making was about authenticity rather than self-care (“the importance of human connection” being about 1:1s with potential EAs, rather than all human connection in one’s life), I think your frame could apply to both.
It’s definitely true that self-care is necessary for sustainable impact. However, given the question of “In the least convenient possible world, if you actually could maximise your overall lifetime impact by throwing self-care under the bus, should you?” I notice that I am still reluctant to do this or recommend it to anyone else, and that applies to authenticity too. I don’t think we should expect anyone to sacrifice their own happiness or their own morality, even if doing so actually would maximise impact.*
It would be wrong to say we should never sacrifice. Some of us sacrifice money, some of us sacrifice time, some of us sacrifice the causes that intuitively feel dear to us in favor of ones that are further away in space or time and don’t feel as compelling. But there are definitely things I would never ask anyone to sacrifice, and happiness/morals are two of them.
Part of the reason is consequential. The more demanding EA is as a group, the fewer people we attract and the more we compound the burnout risk we already have. But even in the least convenient possible world where this wasn’t a problem, I think that if I had the ability to mandate what people in EA should sacrifice, I would still say “Sacrifice what you can without meaningfully impacting your quality of life”. If someone wants to sacrifice more I wouldn’t stop them, but I wouldn’t ask it of them.
And if the amount someone can sacrifice without meaningfully impacting their quality of life is next-to-nothing, I would tell them to focus on taking care of themselves and building themselves up. Not because it would lead to maximum impact later, even though it probably would. But because it’s the right thing to do.
*I could imagine ridiculous scenarios like “Do something you find morally wrong or the entire planet blows up” where this no longer applies, but here I’m referring mostly to the real tradeoffs we face every day.
I agree. The EA community does need to be better at supporting people as people.
I think your post could include more of the community health work that is being done. While it’s by no means near completion, you’re touching on topics that a lot of people (especially those not fitting the dominant EA profile) feel and are working on. It’s hard to measure improvements in culture and community, but I hope you would agree that it’s a work in progress and more of a focus now than ever before. The diversity and inclusion tag on the Forum is one place to look for such examples.
I would push back on your point about feelings. You say “feelings lie to us, sure, but they also tell us the truth. While we should learn to recognize that our feelings can (and do) deceive us, we should also realize that we feel for a reason. Humans are not computers for a reason. If compassion is the casualty of effectiveness then we are sacrificing the very thing that makes us who we are as humans”.
I don’t believe that being in EA means you can’t have feelings and compassion. I think it’s the opposite—many people seek out EA because they are so driven by their feelings to help, and helping more with the same amount of resources is one of the most compassionate, feeling-driven things you can do. I agree that amongst all the fellowships and readings and focus on doing, the empathy and feelings can get a bit verbally lost—but I think it’s still an unspoken sentiment and driver among most EAs.
I’d be curious what the next steps are. Have there been things in your community building and outreach that you think we should focus more attention on? These reflections are good to note—but we should also use them to improve the community.
I appreciate your willingness to put this aspect of your lived experience into writing and share it. I hear that it’s not inclusive of every angle or experience—and I think the forum can be a really helpful place to find people going through similar experiences. Mostly commenting to support the processing, naming, and sharing of the vast, deep, and sometimes confuddling experience that is community building in a goal-oriented, highly calculating community that is very much in the thick of teenage growing pains.
I whole-heartedly agree! I have been trying to write a post on the same topic for a while now—would love to connect!