In replies to this thread, here are some thoughts I have on much of the discourse that has come out so far about recent controversies. By “discourse,” I’m thinking of what I mostly see here, on EA Twitter, and on EA Facebook. I will not opine on the details of the controversies themselves. Instead I have some thoughts on why I think the ensuing discourse is mostly quite bad, some attempts to reach an understanding, thoughts on how we can do better, and some hopefully relevant tangents.
I split my thoughts into multiple comments so people can upvote or downvote specific threads.
While I have thought about this question a bunch, these comments have been really hard for me to write, and the final product is likely pretty far from what I’d like, so please bear with me. As usual, all errors are my own.
Some counters to grandiosity
Some of my other comments have quite grandiose language and claims. In some ways this is warranted: the problems we face are quite hard. But in other ways perhaps the grandiosity is a stretch: we have had a recent surge of scandals, and we’ll likely have more scandals in the years and perhaps decades to come. We do need to be somewhat strong to face them well. But as Ozzie Gooen rightly points out, the problems we face are pretty minor in comparison to those of our historical moral heroes[1].
Nelson Mandela served 27 years in prison. Frederick Douglass was enslaved for twenty years. Abraham Lincoln faced some of the US’s worst years, during which most of his children died, and just after he won the Civil War, he was assassinated.
In comparison, the problems of our movement just seem kind of small? “We kind of went down from two billionaires and very little political/social pushback, to one billionaire and very little political/social pushback”? A few people very close to us committed crimes? We had one of our intellectual heavyweights say something very racist 20+ years ago, and then apologize poorly? In the grand arc of social movements or intellectual progress, maybe we are (I am) taking everything way too seriously. Maybe I should step back, take a deep breath, touch grass, and laugh a little at the absurdity of the whole situation.
The Mohists were also likely severely persecuted during Qin’s unification of China, though this is far enough back in history that the relevant evidence is light.
My big concern is that permanent harm could be suffered by either EA or its championed causes. Somewhat like how transhumanism became tarred with the brush of racism and eugenics, I worry that things like AI safety or X-risk work could come to be viewed in the same light. And there may be much more at stake than people realize.
The problem is that even without a hinge of history, our impacts, especially in a longtermist framework, are far, far larger than those of previous generations, and we could very well lose all that value if EA or its causes become viewed as badly as, say, eugenics or racism were.
We (EA Forum) are maybe not strong enough (yet?) to talk about certain topics
A famous saying in LessWrong-speak is “Politics is the Mind-Killer”. In context, the post was about avoiding political examples in non-political contexts, both to avoid making people less rational by putting their political blinders on and to avoid making people with different politics feel unwelcome. More broadly, the community has taken it as a general injunction against talking about politics whenever it is unnecessary.
Likewise, I think there are topics that are as triggering of ideological or tribal conflict as modern partisan politics, or substantially more so. I do not think we are currently strong enough epistemically, or safe enough emotionally, to discuss those topics with the appropriate level of nuance, intellect, and tact. Except for topics that are extremely decision-relevant (e.g. “which UK political party should I join to reduce malaria/factory farming/AI doom probabilities”), I would personally prefer that we steer clear of them for now, and wait until our epistemics and cohesion are perhaps one day good enough to approach them.
I agree, but I think we may well never reach that point, in which case this is tantamount to saying “never discuss it”. And I think it’s reasonable for people who care about those issues to point out that we’re ignoring them even though they’re important. Unsure how to resolve this.
Honestly, a lot of the problems with politics stem from its totalizing nature, comparable to strong longtermism, plus the fact that emotion hampers more often than it helps in political discussions, more so than in longtermist ones.
I’d say that if EA can’t handle politics in the general forum, then I think a subforum for EA politics should be made. Discussions about the politics of EA or how to effectively do politics can go there.
Meanwhile, the general EA forum can simply ban political posts and discussions.
Yes, banning politics here is a strong measure. But bluntly, social communities that want to maintain any level of civility and charity do ultimately tend toward banning politics and discussion around it, except maybe in a subforum.
Thanks for writing this, I think this is a helpful frame on some things that have been annoying me about the Forum recently.
Some gratitude for the existing community
It’s easy to get jaded about this, but in many ways I find the EA community genuinely inspirational. I’m sometimes reminded of this when I hear about a new good thing EAs have done, and at EA Global, and when new EAs from far-away countries reach out to me with a specific research or career question. At heart, the EA community is a community of thousands of people, many of whom are here because they genuinely want to do the most good, impartially construed, and are actively willing to use reason and evidence to get there. This is important, and rare, and I think too easily forgotten.
I think it’s helpful to think about a few things you’re grateful for in the community (and perhaps even your specific interlocutors) before engaging in heated discourse.
Your forum contributions in recent months and this thread in particular 🙏🙏🙏
A plea for basic kindness and charity
I think many people on both sides of the discussion
drew bright lines way too quickly
were quick to assume bad faith from their interlocutors, treating people who disagreed with them as opponents or enemies in a culture war rather than as (possibly mistaken) collaborators in the pursuit of the good
mostly did not interpret their opponents particularly charitably
said things to denigrate their opponents rather than trying to understand and empathize with them
appeared to act out of ideology or perhaps fear, rather than love
mostly ignored the (frequently EA) bystanders to this ideological conflict, and ignored (or even delighted in) the harm their words or other speech acts may have caused to those bystanders.
Perhaps I’m just reasserting basic forum norms, but I think we should instead at least try to interpret other people on this forum more charitably. Moreover, I think we should generally try to be kind to our fellow EAs[1]. Most of us are here to do good. Many of us have made substantial sacrifices in order to do so. We may have some disagreements and misunderstandings now, and we likely will again in the future, but morality is hard. Serving the good is a very difficult enterprise, and it is unlikely that we’ll be able to do so in the future while a) maintaining perfect ideological lockstep and b) avoiding all significant misunderstandings. We should use our current moment for tolerance, empathy, and learning.
Of course, we should also be kind to non-EAs. But being kind to other EAs should sort of be the bare minimum. “It’s a low bar, but we have yet to reach it.”
Talk to people, not at people
In recent days, I’ve noticed an upsurge of talking at people rather than with them. I think something is lost here, where people stop assuming their interlocutors are (possibly mistaken) fellow collaborators in the pursuit of doing good and instead treat them as opponents to be shot down and minimized. Something important is lost both socially and epistemically when we do this, and it’s worthwhile to consider ways to adopt a more collaborative mindset. Some ideas:
1. Try to picture yourself in the other person’s shoes. Try to understand, appreciate, and anticipate both their worries and their emotions before dashing off a comment.
2. Don’t say “do better, please” to people you would not want to hear the same words from. It likely comes across as rather patronizing, and I doubt the background rate of people updating positively from statements like that is particularly high.
3. In general, start with the assumption of some basic symmetry on how and what types of feedback you’d like to receive before providing it to others.
Enemy action?
I suspect at least some of the optics and epistemics around the recent controversies are somewhat manipulated by what I call “enemy action.” That is, I think there are people out there who are not invested in this project of doing good, and are instead, for idiosyncratic reasons I don’t fully understand[1], interested in taking our movement down. This distorts a) much of the optics around the recent controversies, b) much of the epistemics in what we talk about and what we choose to pay attention to, and c) much of our internal sense of cohesion.
I don’t have strong evidence of this, but I think it is plausible that at least some of the current voting on the forum on controversial issues is being manipulated by external actors in voting rings. I also think it is probable that some quotes, from both on and off this forum, are being selectively mined by external sources, so if you come to the controversies from them, you should take a step back and think of ways in which your epistemics or general sense of reality may be being hijacked. Potential next steps:
Keep your cool
Assume good faith from community members most of the time
If someone has a known history of repeatedly acting in bad faith in the past, tentatively drop this assumption and be more willing to believe that they might predictably misquote or lie in the future.
I don’t want to belabor this point too much (“criticisms of our group come from external actors who are trying to divide us!1!!” has a famously poor track record). But it’s worthwhile to keep in mind whenever you see a particularly egregious statement or quote or other evidence.
A better person than me would try to empathize with them and understand where they’re coming from. I currently think it’s a higher priority for EAs to love our fellow EAs first before extending the olive branch to more clearly ideological haters.
I’ve been thinking a lot about this, even before the FTX collapse but especially since then. There are clearly some actors who are prioritising causing harm to EA as one of their major goals. Separately but relatedly, as EA has grown, the number of actors who view us negatively, or as something to be challenged or defeated, has grown too. This means that EA no longer acts in a neutral information space.
Whatever we do from this point on, there will be strong pushback. Some of it might be justified, most of it (I hope) not, but regardless, this is now something I think we have to be more aware of. Bad actors can act against EA unilaterally, and I doubt that the current approach to spreading EA ideas is the best approach in this environment.
I may try to fill this out with more helpful ideas, concepts, and examples in a top-level post.
We need to become stronger
I’m not sure this comment is decision-relevant, but I want us to consider the need, both individually and collectively, to become stronger. We face great problems ahead of us, and we may not be up for the challenge. We need to face them with intellect, care, creativity, and reason. We need to face them with cooperation, cohesion, and love for our fellow man, but also with strong independence, skepticism, and the ability to call each other out on BS.
We need to be clear enough in our thinking to identify the injustices in the world, careful enough in our planning to identify the best ways to fight them, and committed and steady enough in our actions to decisively act when we need to. We need to approach the world with fire in our hearts and ice in our veins.
We should try to help each other identify, grow, and embody the relevant abilities and virtues needed to solve the world’s most pressing problems. We should try our best to help each other grow together.
This may not be enough, but we should at least try our best.
Morality is hard, and we’re in this together.
One basic lesson I learned from trying to do effective altruism for much of my adult life is that morality is hard. Morality is hard at all levels of abstraction: Cause prioritization, or trying to figure out the most pressing problems to work on, is hard. Intervention prioritization, or trying to figure out how we can tackle the most important problems to work on, is hard. Career choice, or trying to figure out what I personally should do to work on the most important interventions for the most important problems, is hard. Day-to-day prioritization is hard. In practice, juggling a long and ill-defined list of desiderata to pick the morally least-bad outcome is hard. And dedication and commitment to continuously hammer away at doing the right thing is hard.
And the actual problems we face are really hard. Millions of children die every year from preventable causes. Hundreds of billions of animals are tortured in factory farms. Many of us believe that there are double-digit percentage points of existential risk this century. And if we can navigate all the perils and tribulations of this century, we still need to prepare our descendants for a world worth living in.
In that context, I think I want to cut other members of this community some slack. People are going to act in less than optimal ways. People are going to screw up, in words and deeds. It is likely still better to work with other people than to do my own thing. The problems of the world are challenging enough that we need the entire community, and more, to confront them.
We should not forget that the purpose that unites us here is the desire to do good. The ultimate arbiter of our actions is Utility. Perhaps we’ll eventually realize that the community is not the best way to do good, and needs to be broken up. If that happens, I will strongly prefer that the community, or large parts of it, dies on its own terms, with solemnity and grace, after extensive and careful cost-benefit analysis that tells us that destroying the community is the best way to serve the good. We should not just casually allow the community to shatter because of reputational crises, or internal infighting.
Anthropic awareness or “you’re not just in traffic, you are traffic.”
An old standup comedy bit I like is “You’re not in traffic, you are traffic.” Traffic isn’t just something that happens to you, but something you actively participate in (for example, by choosing to leave work during rush hour). Put another way, you are other people’s traffic.
I take the generalized version of this point pretty seriously. Another example: I remember complaining about the noise at a party. Soon after, I realized that the noise I was complaining about was just other people talking! And of course I participated in (and was complicit in) this issue.
Similarly, in recent months I complained to friends about the dropping kindness and epistemic standards on this forum. It took me way too long to realize the problem with that statement, but the reality is that discourse, like traffic, isn’t something that just happens to me. If anything, as one of the most active users on this forum, I’m partially responsible for the dropping forum standards, especially if I don’t actively try to make epistemic standards better.
So this thread is my partial attempt to rectify the situation.
I’d love for more forum users to also recognize this, and personally embody the virtues that you’d like other users to have.
For example, high-decoupler rationalist types rightly note the importance of debating the best versions of the relevant arguments, and not just distorted, uncharitable readings of poor ones. To that end, I think it’s helpful for rationalist-y types to attempt to steelman, or at least try to understand and empathize with, the perspectives of people who disagree with them.
Similarly, high-contextualizer types rightly note the frequent importance of understanding statements and actions in their proper context, rather than always taking them at literal face value. To that end, it is helpful to take the full context of statements and actions into account, and to note, for example, that in most circumstances it would be quite surprising for a grantmaker of Jewish descent to knowingly want to give money to neo-Nazi organizations, and this surprise should make us hesitant to immediately jump to conclusions or otherwise demand extreme haste or concessions.[1]
Another example where I think high-contextualizers have failed at this is an earlier controversy in which people took offense at a few lines in Nick Beckstead’s thesis implying that we should give less to people in poorer countries, completely ignoring that he was an early member of Giving What We Can who gave >=10% of his graduate school stipend to global health charities.
Understanding and acknowledging the subtext of fear
I think a subtext for some of the EA Forum discussions (particularly the more controversial/ideological ones) is that a) often two ideological camps form, b) many people in both camps are scared, c) ideology feeds on fear and d) people often don’t realize they’re afraid and cover it up in high-minded ideals (like “Justice” or “Truth”)[1].
I think if you think other EAs are obviously, clearly Wrong or Evil, it’s probably helpful to
a) realize that your interlocutors (fellow EAs!) are human, and most of them are here because they want to serve the good
b) internally try to simulate their object-level arguments
c) try to understand the emotional anxieties that might have generated such arguments
d) internally check in on what fears you might have, as well as whether (from the outside, looking from 10,000 feet up) you might be acting out the predictable moves of a particular Ideology.
e) take a deep breath and a step back, and think about your intentions for communicating.
In the draft of a long-winded post I will probably never publish, I framed it thus: “High contextualizers are scared. (They may not realize they’re scared, because from the inside, fear often looks like you’re just Right and you’re filled with righteous rage against a shadowy intolerant elite.)” Or “High decouplers are scared. (They may not realize they’re scared, because from the inside, fear often looks like you’re just Right and you’re suffused with the cold clarity of truth against an unenlightened mob.)”
Bystanders exist
When embroiled in ideological conflict, I think it’s far too easy to be ignorant of (or in some cases, deliberately downplay for bravado reasons) the existence of bystanders to your ideological war. For example, I think some black EAs are directly hurt by the lack of social sensitivity displayed in much of the discourse around the Bostrom controversy (and perhaps the discussions themselves). Similarly, some neurodivergent people are hurt by the implication that maximally sensitive language is a desideratum on the forum, and the related implication that people like them are not welcome. Controversies can also create headaches for community builders (including those far away from the original controversy), for employees at the affected or affiliated organizations, and for communications people more broadly.
The move to be making is to stop for a bit. Note that the people hurting are real people, not props. And real people could be seriously hurting for reasons other than direct ideological disagreement.
While I think it is tempting to use bystanders to make your rhetoric stronger, embroiling bystanders in your conflict is, I think, predictably bad. If you know people who you think might be hurting, I suspect it’s a more productive use of your time to reach out to them, check in with them, and make sure they’re okay.[1]
Potential updates:
Think about the bystanders your speech or other actions may harm
Probably try to minimize harm.
Don’t try to deliberately harm people for bravado- (“edgelord”-) adjacent reasons.
If you do know people who might be harmed by recent events, and you’re in a position to do so, consider reaching out to them privately and see if you can help.
I’ve done this a bunch myself, with mixed success.
Are you black?
No, different racial minority in the US. Why?