I just want to quickly call attention to one point: “these are still pure benefits” seems like a mistaken way of thinking about this—or perhaps I’m just misinterpreting you. To me “pure benefits” suggests something costless, or where the costs are so trivial they should be discarded in analysis, and I think that really underestimates the labor that goes into building inclusive communities. Researching and compiling these recommendations took work, and implementing them will take a lot of work. Mentoring people can have wonderful returns, but it requires significant commitments of time, energy, and often other resources. Writing up community standards about conduct tends to be emotionally exhausting work which demands weeks of time and effort from productive and deeply involved community members who are necessarily sidelining other EA projects in order to do it.
None of this is to say ‘it isn’t worth it’. I expect that some of these things have great returns to the health, epistemic standards, and resiliency of the community, as well as, like you mentioned, good returns for the reputation of EA (though from my experience in social justice communities, there will be articles criticizing any movement for failures of intersectionality, and the presence of those articles isn’t very strong evidence that a movement is doing something unusually wrong). My goal is not to say ‘this is too much work’ but simply ‘this is work’, because if we don’t acknowledge that it requires work, the work probably will not get done (or will not be acknowledged and appreciated).
Once we acknowledge that these are suggestions which require varying amounts of time, energy and access to resources, and that they impose varying degrees of mental load, then we can start figuring out which ones are good priorities for people with limited amounts of all of the above. I’ve seen a lot of social justice communities suffer because they’re unable to do this kind of prioritization and accordingly impose excessively high costs on members and lose good people who have limited resources.
So I think it’s a bad idea to think in terms of ‘pure benefit’. Here, as everywhere else, if we want to do the most good we need to keep in mind that not all actions are equally good or equally cheap, so that we can prioritize the effective and cheap ones.
I’m also curious why you think the magnitude of the current EA movement’s contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans. To be clear about where I’m coming from, I think the most important thing the EA community can do is be a community that fosters fast progress on the most important things in the world. Obviously, this will include being a community that takes contributions seriously regardless of their origins and elicits contributions from everyone with good ideas, without making any of them feel excluded because of their background. But that makes diversity an instrumental goal, a thing that will make us better at figuring out how to improve the world and acting on the evidence. From your phrasing, I think you might believe that harmful societal structures in the western world are one of the things we can most effectively fix? Have you expanded on that anywhere, or is there anyone else who has argued for that who you can point me to?
While I thoroughly appreciate your thoughts here and I’m glad you voiced them, I think this exchange started from a miscommunication:
I don’t think the fact that there are costs to this, as with anything, is controversial (though I know its cost-effectiveness is), and it sounds to me like Tyler just meant “intrinsic benefits,” in addition to the instrumental benefits to EA community-building. If he thought improving diversity and inclusion in the community had no cost, I would expect him to say its case is irrefutable, not that these benefits merely “strengthen” its case.
Hi KelseyPiper, thanks so much for a thoughtful reply. I really agree with most of this—I was talking in terms of these benefits as “pure” benefits because I assumed the many costs you rightly point out up front. That is, assuming that we read Kelly’s piece and we come away with a sense of the costs and benefits that promoting diversity and inclusion in the Effective Altruism movement will have, these benefits I’ve pointed out above are “pure” because they come along for free with that labor involved in making the EA community more inclusive, and don’t require additional effort. But I understand how that could be misleading, and so I take all of your criticism on board. I also agree that this will involve priority-setting—even if we think that all of these suggestions are important and some people should be doing all of them to some extent (and especially if not), there are some that we ought to spend more time on than others as a community.
I also agree that the EA community should focus on identifying and working on the very most important things, although I might disagree slightly with how you’ve characterized that. I don’t think that we should be a community doing work that fosters “fast progress on the most important things,” because I think that we should be doing whatever does the most good in the long run, all things considered, and fostering “fast progress” on the most important things does not necessarily correlate with doing the most good in the long run, unless we define “fosters fast progress” in a way that makes this trivial. If, for example, we could perform one of two interventions, one adding +5 well-being to all of the global poor, on average, over twenty years, for one generation only, and one adding +5 well-being to all of the global poor, on average, over one hundred years, for all generations, we should choose the latter, even though the former is in a sense fostering faster progress. I make this point not to be pedantic, but because I think some EAs sometimes forget that what we (or many of us) are trying to do is to produce the most benefits and avert the most harm all things considered, not simply make a lot of progress on some very important projects very quickly, and I think this is quite relevant to this conversation.
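One toy way to make that comparison concrete (a sketch with purely illustrative numbers and an assumed evaluation horizon, not an empirical estimate) is to sum the average per-person gains each hypothetical intervention accrues over time:

```python
# Purely illustrative model of the two hypothetical interventions above.
# The per-year gains and the horizon are assumptions chosen for the sketch.

def cumulative_gain(gain_per_year: float, start_year: int, end_year: int) -> float:
    """Average per-person well-being gain accrued between two years."""
    return gain_per_year * max(0, end_year - start_year)

HORIZON = 500  # years we choose to count; the slower intervention's edge grows with it

# Intervention A: +5 well-being per year, but only for 20 years (one generation).
gain_a = cumulative_gain(5, 0, 20)         # 100

# Intervention B: +5 well-being per year, taking 100 years to arrive,
# but then persisting for every generation within the horizon.
gain_b = cumulative_gain(5, 100, HORIZON)  # 2000

print(gain_a, gain_b)  # the "slower" intervention dominates over a long horizon
```

On any horizon much past 120 years, the intervention that benefits all generations wins, which is the point about long-run, all-things-considered good.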
To your question as to why “the magnitude of the current EA movement’s contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans,” I unfortunately haven’t written something on this and perhaps I should. But I can say a few things. I should first say that I certainly don’t think it’s obvious that the EA movement’s contributions to such harmful structures clearly will outweigh the magnitude of the effects we have on nonhumans and on the poorest humans. I only claimed that it was non-obvious that the effect size was “very small” compared to the positive effects we have. It’s something more EAs should treat as non-negligible more often than they do.
Still, here are some of the basic reasons why I think the EA movement’s contributions to harmful social structures could well be of sufficient magnitude that we should keep a constant accounting of them in our efforts to do good in the world, apart from the reputational costs and the instrumental epistemic benefits of inclusion and diversity work. First, the fundamental structure of society and its social, legal, and political norms profoundly shape the kinds and quality of life of all beings, as well as cultural and moral mores. Ensuring that this structure and these norms are good ones is therefore crucial to ensuring that the long-run future is good, and shaping them for the better may make the trajectory of the future far better than the counterfactual where we shape them for the worse (for reasons of legal precedent, memetics, psychological and value anchoring, and more). Second, norms against harming others are very sticky, much stickier than norms favoring helping others, except in certain particular cases (e.g. within one’s own family). They are psychologically sticky, whether for innate biological reasons or for entirely cultural ones; which of these is true makes a difference to how much staying power this stickiness has. But whichever is true, setting good norms around not causing harm to others, and seeing that those norms are stringently upheld rather than violated so that we internalize them as commonsense norms, seems like a good way to shape how the future goes. Such norms are also easier to enforce through sanction, blame, and punishment, whereas norms of aid (especially effective aid) are more difficult to enforce. And our legal and political history suggests that they are much easier to codify into law.
So for all these reasons, having good norms in these areas and not violating them looks like a very important intervention for shaping the social and legal institutions of future societies. Third, there are reasons to think that our moral and political attitudes towards others are psychologically intertwined in complex ways. How we treat and think about some groups, and the norms we have around harming and helping them, seem to affect how we treat and think about other groups. This seems especially important if we are interested in expanding our moral circle to include nonhuman animals and silicon-based sentient life. If our negative attitudes, norms, laws, and practices around other humans have negative downstream effects on our attitudes, norms, laws, and practices around other animals and inorganic sentient beings, then prioritizing moral development and averting harmful social structures which favor some sentient beings over others may be very important. If AI value alignment is decided as the result of a political arms race, then having a broader moral circle may significantly shape the impact of intelligent and superintelligent AI for better or worse. (Here I’m out of my depth, and my impression is that this is a matter of significant disagreement, so I certainly won’t come down hard on this.) The main point is that our norms, attitudes, laws, and practices around humans, and who our society decides is worthy of full moral consideration, may have significant downstream effects in complicated and to some extent unpredictable ways. The more skeptical we are about how much we know about the future, the greater our uncertainty about these effects should be.
I think it’s reasonable to worry that this is too speculative, or too optimistic about the downstream consequences of our norm-shaping on the far future, but we should remember that there are also skeptical considerations cutting in the opposite direction: measurability bias may irrationally lead us to discount less measurable, long-term effects of our actions in favor of more measurable, short-term ones.
I am not arguing that actively averting oppressive social structures and hierarchies of dominance should be a main cause area for EAs (although that could be an upshot of this conversation, too, depending on the probabilities we assign to the hypotheses delineated above). But norms against harming have real psychological, social, and legal stickiness, and failing to make EA a more diverse and inclusive community will raise the probability of EAs harming marginalized communities and failing to create and uphold norms around not harming them. The more influential the EA community becomes, the more this holds true. So it seems to me that there’s a plausible case that entrenching strong norms against treating marginalized communities inequitably within the EA community is an effective cause area that we should spend some of our time on, even if we should spend the majority of our time advocating for farmed and wild animals and the global poor.
Thank you so much for writing this.