In light of recent events in the EA community, several professional EA community builders have been working on a statement for the past few weeks: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism. You can see the growing list of signatories at the link.
We have chosen to be a part of the effective altruism community because we agree that the world can and should be a better place for everyone in it. We have chosen to be community builders because we recognize that lasting, impactful change comes out of collective effort. The positive change we want to see in the world requires a diverse set of actors collaborating within an inclusive community for the greater good.
But inclusive, diverse, collaborative communities need to be protected, not just built. Bigoted ideologies, such as racism and sexism, are intrinsically harmful. They also fundamentally undermine the very collaborations needed to produce a world that is better for everyone in it.
We unequivocally condemn racism and sexism, including “scientific” justifications for either, and believe they have no place in the effective altruism community. As community builders within the effective altruism space, we commit to practicing and promoting anti-racism and anti-sexism within our communities.
If you are the leader/organizer of an EA community building group (including national and city groups, professional groups, affinity groups, and university groups), you can add your signature and any additional commentary specific to you/your organization (that will display as a footnote on the statement) by filling out this form.
Thank you to the many community builders who contributed to the creation of this document.
I am opposed to this.
I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as “fear”].
Here are some things that are true:
Racism is harmful and bad
Sexism is harmful and bad
Other “isms” such as homophobia or religious oppression are harmful and bad.
To the extent that people can justify their racist, sexist, or otherwise bigoted behavior, they are almost always abusing information, in a disingenuous fashion. e.g. “we showed a 1% difference in the medians of the bell curves for these two populations, thereby ‘proving’ one of those populations to be fundamentally superior!” This is bullshit from a truth-seeking perspective, and it’s bullshit from a social progress perspective, and in most circumstances it doesn’t need to be entertained or debated at all. In practice, it is already the case that the burden of proof on someone wanting to have a discussion about these things is overwhelmingly on the person starting the discussion, to demonstrate that they are both a) genuinely well-intentioned, and b) have something real to talk about.
However:
Intelligent, moral, and well-meaning people will frequently disagree about to what extent a given situation is explained by various bigotries as opposed to other factors. Intelligent, moral, and well-meaning people will frequently disagree about which actions are wise and appropriate to take in response to the presence of various bigotries.
By taking anti-racism and anti-sexism and other anti-bigotry positions which are already overwhelmingly popular and overwhelmingly agreed-upon within the Effective Altruism community, and attempting to convert them to Anti-Racism™, Anti-Sexism™, and Anti-Bigotry™ applause lights with no clear content underneath them, all that’s happening is the creation of a motte-and-bailey, ripe for future abuse.
There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would’ve been glad to see and enthusiastically supported and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.
But by just saying “hey, [thing] is bad! We’re going to create social pressure to be vocally Anti-[thing]!” you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to “oh, so you’re NOT opposed to bigotry, huh?”
Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it’s a bad idea for various pragmatic or logistical reasons, it’s pretty easy to imagine that being rounded off to “the opposition is racist.”
(I can imagine people saying “we won’t do that!” and my response is “great—you won’t. Are you claiming no one will? Because at the level of 1000+ person groups, this is how this always goes.”)
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It’s saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.
The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.
Thank you for the thorough feedback. Those involved in drafting the statement considered much of what you laid out and created a more substantive, action-specific version before ultimately deciding against it. There were several reasons for this decision, among them: not wanting to commit (often under-resourced) groups to obligations they would currently be unable to fulfill, the various needs and dynamics of different EA communities, and the time-sensitive nature of getting a statement out. We do not intend for this to be the final word and there is already discussion about follow-up collaborations. We also chose to use the footnote method in the statement document to allow groups to make their own additional individual commitments publicly now.
I do want to push back on the idea that this statement is vacuous, counterproductive, and/or harmful. We chose to create this because of our collective, global, on-the-ground experiences discussing recent events with the communities we lead. I agree it should be silly or meaningless to declare one’s opposition to racism and sexism. But right now, for many following EA discourse, it unfortunately isn’t obvious where much of the community stands. And this is having a tangible impact on our communities and our community members’ sense of belonging and safety. This statement doesn’t solve this. But by putting our shared commitment in plain language, I believe we’ve laid a pavestone, however small, on the path toward a version of EA where statements like this truly are not needed.
I wonder if the statement would have been stronger with a nod in that direction, e.g. something vaguely like: “We recognize that signing a statement is not enough. As signatories, we will be considering specific ideas to combat racism and sexism in the context of the resources, needs, and dynamics of the specific community we help build. The organizers will be continuing to collaborate on a more substantive, action-specific proposal in the coming months.”
I would like for all involved to consider this, basically, a bet, on “making and publishing this pledge” being an effective intervention on … something.
I’m not sure whether the something is “actual racism and sexism and other bigotry within EA,” or “the median EA’s discomfort at their uncertainty about whether racism and sexism are a part of EA,” or what.
But (in the spirit of the E in EA) I’d like that bet to be more clear, so since you were willing to leave a comment above: would you be willing to state with a little more detail which problem this was intended to solve, and how confident you (the group involved) are that it will be a good intervention?
Just to be clear, I think many of us in the community are not uncertain about whether racism and sexism are part of EA. Rather, I'm certain that they are, in the sense that many in the community exhibited them in discussions over the last few weeks.
Therefore it’s very meaningful to see a large core of community builders speak out about this explicitly, including disavowing “scientific” racism and sexism specifically. I’m also especially glad to see the head of my own country’s community among them.
I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.
I wish the Google Doc had been more specific. It could’ve said things like:
It’s important to treat people with respect regardless of their race/sex
It’s important to reduce suffering and increase joy for everyone regardless of their race/sex
We should be reluctant to make statements which could be taken as “scientific” justification for ignoring either of the previous bullet points
As written, it seems like the doc has the disadvantage of being ripe for abuse, without the advantage of providing guidelines that let someone know whether the signatories dislike their comment. I think on the margin, this doc pushes us towards a world where EAs are spending less time on high-impact do-gooding, and more time reading social media to make sure we comply with the latest thinking around anti-racism/anti-sexism.
Thank you for stating plainly what I suspect the original doc was trying to hint at.
That said, now that it’s plainly stated, I disagree with it. The world is too connected for that.
Taken literally, "could be taken" is a ridiculously broad standard. I'm sure a sufficiently motivated reasoner could take "2+2=4" as justification for racism. This is not as silly a concern as it sounds, since we're mostly worried about motivated reasoners, and it's unclear how motivated a reasoner we should be reluctant to offer comfort to. But let's look at some more concrete examples:
In early 2020, people were reluctant to warn about COVID-19 because it could be taken as justification for anti-Chinese racism. I can't actually follow the logic that goes from "a dangerous new disease emerged in China" to "I should go beat up someone of Chinese ancestry," but it seems a few people who had been itching for an excuse did. Nevertheless, given the relative death tolls, we clearly should have had more warnings and more preparations. The next pandemic will likely also emerge in a place containing people against whom racism is possible (base rate, if nothing else), and pandemic preparedness people need to be ready to act anyway.
Similarly, many people tried to bury the fact that monkeypox was sexually transmitted because it could lead to homophobia. So instead they warned of a coming pandemic. False warnings are extremely bad for preparedness, draining both our energy and our credibility.
Political and economic institutions are a potentially high-impact cause area in both the near- and far-term (albeit dubiously tractable). Investigating them is pretty much going to require looking at history, and at least sometimes saying that Western institutions are better than others.
Going back to Bostrom’s original letter, many anti-racists have taken to denying the very idea of intelligence in order to reject it. Hard to work on super-intelligence-based x-risk (or many other things) without that concept.
I think you make good points—these are good cases to discuss.
I also think that motivated reasoners are not the main concern.
My last bullet point was meant as a nudge towards consequentialist communication. I don’t think consequentialism should be the last word in communication (e.g. lying to people because you think it will lead to good consequences is not great).
But consequences are an important factor, and I think there’s a decent case to be made that e.g. Bostrom neglected consequences in his apology letter. (Essentially making statements which violated important and valuable taboos, without any benefit. See my previous comment on this.)
For something like COVID, it seems bad to downplay it, but it also seems bad to continually emphasize its location of origin in contexts where that information isn’t relevant or important.
“We should be reluctant” represents a consideration against doing something, not a complete ban.
James Watson's denial of having made racist statements is a social fact worth noting. Most "alt-center" (etc.) researchers in HBD, and the latest thinking in euphemisms intended to scientifically reappropriate racism for metapolitical and game-theoretic purposes, will, perforce, never outright say this.
To be clear, I don’t think many EAs are formally working in race science, and surely skeptical and morally astute EAs can have the integrity to admit to having made racist comments or reasonably disagree. (And no: as an African American EA on the left, I don’t think we should unsubscribe every HBD-EA, Bostrom, etc., from social life. Instead, we should model a safer environment for us all to be wrong categorically. Effective means getting all x-risks and compound x-risks, etc. right the first time.)
But after mulling over most of the HBD-affirming defenses of Bostrom's email/apology that I've read or engaged with on the EA Forum, the ones that weren't obviously red pills by bad actors (yet were also highly upvoted), I think there are other reasons many of those EAs won't say their comments were racist, even if they themselves are not actually certain they are non-racist.
My hunch is that whether those EAs see HBD as part of the hard core or the protective belt of longtermism/EA's program may be a good predictor of whether they believe, and therefore would be willing to say, that their comments were racist.[1]
For these, among other reasons, I think this instance of Hirschman's rhetoric of reaction above is mistaken. It is not disvaluable that community builders in a demographically, socially, and epistemically isolated elitist technocratic movement like EA don't allow the best provisional statement clearly stating their stance on these issues to become the enemy of the good.
I was relieved to see this, as well as the fact that Guy made the pushback I wish I'd had time to make 3 days ago. If there's any way I can support your efforts, please let me know!
1.1 For want of an intensional definition of value-alignment.
1.2. I take little pleasure in suggesting that HBD-relevant beliefs, strongly coupled with, e.g., Beckstead et al.'s (frankly narrow and imaginatively lacking) stance on the most likely sources of future economic innovation, which may therefore have greater instrumental value to longtermist utopia, may be one contributing factor to this problem within EA. And even anti-eugenics has its missteps.
Writing very quickly as someone who signed this from EA Italy
I agree that this 12-line letter is not perfect and will not solve racism or sexism, and will probably not do much (otherwise, these issues would have already been solved).
If you think it's important and useful, please do work on this; it might be concrete progress for the community! Or, if those versions were already made, it might be useful to share them.
I would be extremely surprised by this. What % do you give that something like this will happen? If a request to sign reached me, I assume it reached hundreds or thousands of people.
I used to share this thinking, and worry a lot about replaceability, but on the current margin it seems to me that the alternative to thing is almost always not better thing but no thing. So I think it would have been useful if you had made concrete proposals for how thing could be improved next time, rather than what I perceive as disincentivizing people from generally doing stuff.
I wouldn’t want Rob Mather not to found the Against Malaria Foundation out of fear of sapping momentum for creating an even better version of a bednet distribution org. I would agree with you if you could share some reason for expecting the counterfactual to be better (e.g. “there was this other much better letter that I was just about to post, but now I don’t want to spam people about this so I will not”)
Imho the road to anywhere is paved with good intentions, and the most likely counterfactual is standing still, not moving in a better direction, unless you know of some existing and better plans that were hindered by this 12-line letter.
In terms of practical actions: someone from EA Italy (not me) is publishing a Code of Conduct this Sunday instead of in the coming weeks; we're sharing an anonymous form on the website and via other channels, following the advice from EA Philippines (in addition to links to two contact people and the CEA community health team); and we're going to ask city and university groups to publish these as well.
It would probably have happened anyway, but likely a few weeks later, and it's nice to have links to resources from other groups while we figure out a strategy. I personally found the advice/experience from EA Philippines to be more useful; otherwise we might have just added contact info but forgotten to add an Italian anonymous form. So I would endorse asking various groups to share what practical actions they are taking, but it doesn't seem to me that this letter sapped momentum from doing so.
Not writing as anything
I disagree-voted on this because I think it is overly accusatory and paints things in a black-and-white way.
Who says we can’t have both? I don’t get the impression that EA NYC wants this to be the only action taken on anti-racism and anti-sexism, nor did I get the impression that this is the last action EA NYC will take on this topic.
I don’t think this is the case—I, for one, am definitely not thinking that anyone who chose not to sign didn’t do so because they are not opposed to bigotry. (Confusing double-negative—but basically, I can think of other reasons why people might not have wanted to sign this.)
I can think of better outcomes than that—the next time there is a document or initiative with a bit more substance, here’s a big list of people who will probably be on board and could be contacted. The next time a journalist looks through the forum to get some content, here’s a big list of people who have publicly declared their commitment to anti-racism and anti-sexism. The next time someone else makes a post delving into this topic, here’s some community builders they can talk to for their stance on this. There’s nothing inherently wrong with symbolic gestures as long as they are not in place of more meaningful change, and I don’t get the sense from this post that this will be the last we hear about this.
Can you give some details?
I mean, I don’t have this hypothetical document made in my head (or I would’ve posted it myself).
But an easy example is something of the shape:
[EDIT: The below was off-the-cuff and, on reflection, I endorse the specific suggestion much less. The structural thing it was trying to gesture at, though, of something clear and concrete and observable, is still the thing I would be looking for, that is a prerequisite for enduring endorsement.]
“We commit to spending at least 2% of our operational budgets on outreach to [racial group/gender group/otherwise unrepresented group] for the next 5 years.”
Maybe the number is 1%, or 10%, or something else; maybe it’s 1 year or 10 years or instead of years it’s “until X members of our group/board/whatever are from [nondominant demographic].”
The thing that I like about the above example in contrast with the OP is that it’s clear, concrete, specific, and evaluable, and not just an applause light.
there’s a thing of like
in the current environment there’s a lot of discourse that goes like “EA needs to be less tolerant of weird people especially people who do things like poly and kink, in order for EA to feel more safe for women”
given that, the omission of homophobia, transphobia, etc. (especially since these people are very overrepresented here) seems … notable?
Hi.
I'm going to talk about sexism in particular here, since it is the problem mentioned in the declaration that I've had a chance to experience personally in my life.
I agree with every single point Duncan made, and I felt relieved seeing it.
To add to it, the declaration doesn't make me feel safe; quite the opposite. I feel that my "safe place where we are serious about problems and take the best possible actions to fight them" got a bit invaded (it's simply my own, purely emotional reaction, but since, I guess, this post was made to make me feel safe, let me share it). I am a part of the EA community because I'm impressed with how effective it is. I wish sexism were treated in the same way as malaria, because I think it deserves it. I want it to be eradicated. And I believe it's possible. I don't believe this declaration helps.
To me, the words used in the declaration feel empty and, to be frank, sometimes so vague that I have trouble understanding what exactly you wanted to communicate. I certainly can't say what exact actions you are declaring you will take.
Here are the actions I think would be better:
- Sexism is a VERY broad topic. I'd like to see which particular embodiment of sexism you, as community leaders, identify as the most prevalent and harmful. I would really like your analysis to be country- or culture-specific. I'd like to see numbers, or failing that, solid qualitative analysis.
- I'd like to see a comparison of the impact of each form of sexism against other issues the community faces, including the ones which are not spoken about or haven't recently been mentioned by the mainstream media.
- If in the process you decide that fighting a particular form of sexism or other discrimination is not something we should do (e.g. because it is not neglected), please focus your resources on those in the community who suffer more.
- I'd like to see plans of specific actions you want to take, addressing specific community issues (e.g. specific forms of sexism). I'd like to see evidence on how the actions are going to help and why they are the best solution.
- I'd like to see a vivid, open, rational, honest discussion about how exactly each defined problem can be addressed, and whether it's defined properly. I'd like the problem to be approached from so many angles that we are left with its pure and strict definition, and with a bulletproof action plan.
- Also, if you decide to deal with a particular community problem (e.g. a particular form of sexism), I'd like to know what it is, how it manifests, whether it concerns me (e.g. because of my age or location), how I can avoid it, and how I can help if needed. If this particular problem concerns me personally, I'd love to be asked how I am affected and how you can help. I'd like to feel listened to.
- Then, I’d like to see your chosen actions helping the community to be better. I’d like the impact to be measured and learned from.
Maybe you are currently working on all or some of the above. If yes, I think it would be helpful to me if you mentioned your specific efforts in the post, because that context would certainly change my perception. If you are not working on it, I think this post, unsupported by similar efforts, may actually have a negative impact (please see Duncan's arguments; I agree with them).
Edit: I no longer agree with the content of this comment. Jason convinced me that this pledge is worth more than just applause lights. In addition, I don’t think anymore that this is a very appropriate place for a slippery slope-argument.
_____________
I’d like to explain why I won’t sign this document, because a voice like mine seems to still be missing from the debate: Someone who is worried about this pledge while at the same time having been thoroughly involved in leftist discourse for several years pre-EA.
So here you go for my TED talk.
I'm not a Sam in a bunch of ways: I come from a working-class background. I studied continental philosophy and classical Greek at an unknown small-town uni in Germany (and was ashamed of that for at least my first two years of involvement with EA). Though I was thunderstruck by the simple elegance of utilitarian reasoning as a grad student, I never really developed a mind for numbers and never made reading academic papers my guilty pleasure. I'd been with the libertarian socialists long enough before getting into EA that I'm still way better at explaining Hegel, Marx, Freud, the Frankfurt School, the battle lines between materialist and queer feminism, or how to dive a dumpster than even basic concepts of economics. In short: as far as knowing the anti-racist and anti-sexist discourse is concerned, I may well be in the 95th percentile of the EA community.
And because of all of this life experience, reading this statement sent a cold shower down my spine. Here’s why.
I have been going by female pronouns for a couple of years. That's not a fortunate position to be in in a small German university city whose cultural discourse is always 10-20 years behind any Western capital city, especially those of the Anglo-Saxon world. I've grown to love the feeling of comfort, familiarity, and safety that anti-discriminatory safe spaces provide, and I've actively taken part in making these spaces safe—sometimes in a more, sometimes in a less constructive tone.
But while enjoying that safety, comfort, and sense of community, I constantly lived with a nagging half-conscious fear of getting ostracized myself one day for accidentally calling the wrong piece of group consensus into question. In the meantime, I never was quite sure what the group consensus actually was, because I’m not always great at reading rooms, and because just asking all the dumb questions felt like a way too big risk for my standing in the tribe. Humility has not always been a strength of mine, and I haven’t always valued epistemic integrity over having friends.
The moment when the extent of this clusterfuck of groupthink dawned on me was after we went to the movies for a friend's birthday party: Iron Sky 2 was on the menu. After leaving the cinema, my friend told me that during the film, she occasionally glanced over to me to gauge whether it was "okay" to laugh about, well, Hitler riding on a T-Rex. She glanced over to me in order to gauge what's acceptable. She, who was so radically Leninist that I didn't ever dare mention that I'm not actually all that fond of Lenin. Because she had plenty of other wonderful qualities besides being a Leninist. And had I risked getting kicked out of the tribe for a petty who's-your-favorite-philosopher debate, that would have been very sad.
On that day, I realized that both of us had lived with the same fear all along. And that all our radical radicalism was at least two thirds really, really stupid virtue signalling. Wiser versions of us would have cut the bullshit and said: “I really like you and I don’t want to lose you.” But we didn’t, because we were too busy virtue signalling at each other that really, you can trust me and don’t have to ostracize me, I’m totally one of the Good Guys(TM).
Later, I found the intersection between EAs and rationalists: A community that valued keeping your identity small. A community where the default response to a crass disagreement was not moral outrage or carefully reading the room to grasp the group consensus, but “Let’s double crux that!”, and then actually looking at the evidence and finding an answer or agreeing that the matter isn’t clear. A community where it was considered okay and normal and obvious to say that life sometimes involves very difficult tradeoffs. A community where it was considered virtuous to talk and think as clearly and level-headedly as possible about these difficult tradeoffs.
And in this community, I found mental frameworks that helped me understand what went wrong in my socialist bubble: Most memorably, Yudkowsky’s Politics is the Mind-Killer and his Death Spirals sequence. I’d place a bet that the majority of the people who are concerned about this commitment know their content, and that the majority of the people who support it don’t. And I think it would be good if all of us were to (re-)read them amidst this drama.
I’m a big fan of being considerate of each others’ feelings and needs (though I’m not always good at that). I’m a big fan of not being a bigot (though I’m not always good at that). Overall, I’d like EA to feel way more like the warm, familiar, supportive anti-discriminatory safe spaces of my early twenties.
Unfortunately, I don’t think this pledge makes much of a difference there.
At the same time, after I saw the destructive virtue signalling of my early 20s play out as it did, I do fear that this pledge and similar contributions to the current debate might make all the difference for breaking EA’s discourse norms.
And by “breaking EA’s discourse norms”, I mean moving them way closer to the conformity pressure and groupthink I left behind.
If we start throwing around loaded and vague buzzwords like “(anti-)sexism” and “(anti-)racism” instead of tabooing our words and talking about concrete problems, how we feel about them, and what we think needs doing in order to fix them, we might end up at the point where parts of the left seem to be right now: Ostracizing people not only when that is necessary to protect other community members from harm, but also when we merely talk past each other and are too tired from infighting to explain ourselves and try and empathize with one another.
I’d be sad about that. Because then I’d have to look for a new community all over again.
For the people who think the statement is applause lights, I’d suggest considering the following response: If someone comes up with a reasonable concrete plan for addressing racism and sexism within EA, and it doesn’t get (sufficient) funding through the usual sources, you will contribute to helping fund it in some manner. That’s unavoidably vague and non-specific because we are talking about a hypothetical proposal, but it would be at least a slightly costly signal of support.
I’ll commit to funding a hypothetical reasonable underfunded plan that develops in 2023 somewhere in the three-figure range. I’m not going to pretend that is a particularly significant amount in real-world effect terms, but I think it’s enough for someone in the public sector like me to dispel the idea that it’s just an applause-light level commitment.
(I recognize some people may be students or otherwise not in a position to make more than a symbolic commitment—but symbolic commitments having some cost still have signalling value.)
This shifted my opinion towards being agnostic/mildly positive about this public statement.
I'm still concerned that some potential versions of EA getting more explicitly political might be detrimental to our discourse norms, for the reasons Duncan, Chris, Liv, and I outlined in our comments. But yeah, this amount of public support may well nudge grantmakers/donors to invest more in community health. If so, I'm definitely in favor of that.
I was pretty surprised by these Twitter poll results (of course, who responds may involve various selection biases), where I ask how people feel about organizations putting out statements along the lines of "we oppose racism and sexism and believe diversity is important" (note: the setting of my poll, where I give the example of a software accounting firm or animal rights org, is quite different from the setting of the above post):
https://twitter.com/SpencrGreenberg/status/1624044864584273920
There’s one important consideration I didn’t see anyone mention in the comments here or on that Twitter poll. This statement would have been viewed very positively 30 years ago (by people who cared about racism/sexism), when it may have been very rare. Since it is commonplace now, the signal is weak, but maybe still positive.
However, a more important consideration is what signal the lack of such a statement gives. Especially now that it is so commonplace. If I’m trying to pick between 10 software accounting firms to apply to and only 2 are missing this statement (which is very plausible today), I would interpret the lack of even a simple/vague/low-accountability (and thereby low-cost) statement as a strong negative signal.
There are different ways to read the signal that the lack of a statement gives. Someone could read it to mean that these two firms have rampant racism/sexism. Alternatively, someone could read it to mean that these two firms have the same low rates of racism/sexism as the other eight, and choose to focus their energies on software accounting rather than identity politics. A third possible reading is that the eight firms put out statements precisely because they had more problems with racism/sexism, and therefore the two firms without statements probably have the fewest racism/sexism problems. How you read the lack of a statement will depend a lot on your priors about the dynamics of racism/sexism in your particular place and time. But if you adopt the second or third readings, then the signal from the lack of a statement seems positive.
I wish I lived in a world where I could support this. I am definitely worried about how recent events may have harmed minorities and women and made it harder for them to trust the movement.
However, coming out of a few years where the world essentially went crazy with canceling people, sometimes for the most absurd reasons, I’m naturally wary of anything in the social justice vein, even whilst I respect the people proposing/signing it and believe that most of them are acting in good faith and attempting to address real harms.
Before the world went crazy for a few years, I would have happily signed such a statement and encouraged people to sign it as well, since I support my particular understanding of those words. Although now I find myself agreeing with Duncan that there are real costs with signing a statement if that then allows other people to use your signature as support for an interpretation that doesn’t match your beliefs. And I think it’s pretty clear to anyone who has been following online discourse that terms can be stretched surprisingly far.
This comment is more political than I’d like it to be, however, I think it is justified given that the standard position within social justice is that political neutrality is fake and an attempt to impose values whilst pretending that you aren’t.
Maybe it’s unfair to attribute possible beliefs to a group of people who haven’t made that claim, but this has to be balanced against reasoning transparency, which feels particularly important to me when I suspect that this is many people’s true rejection. And maybe it makes sense in the current environment, when people are leaning more towards sharing.
I wish we lived in a different world, but in this world, there are certain nice things that we don’t get to have. That all said, there have definitely been times when I’ve failed to properly account for the needs or perspectives of people with other backgrounds, and I certainly intend to become as good at navigating these situations as I can, because I really don’t want to offend or be unfair to anyone.
People downvoting this post, apparently due to disagreement, is burying it deep in the community stack fewer than 24 hours after its release. I don’t think that is a desirable outcome.
It’s unfortunate that there’s no disagree-vote on posts. In its absence, I wish people would not downvote posts like this in a way that buries them soon after release. Whatever you think of the statement this post announces, it is not a lousy post.
I’m not expressing an opinion on the statement either way, but it should have a reasonable chance to be seen.
Thanks, I removed my downvote after reading this comment.
I feel bad for piling on, but I want to copy over my note from Slack because I think it is a succinct epistemology concern, and less comprehensive than the other comments:
idk what channel is best for this comment, which I hesitate to make, because I share the broad goals of the document (besides one nagging detail), and I don’t wanna be that guy, and it’s not my hill to die on, etc. I know some people will feel like this comment is a call to relitigate some object-level thing that a lot of people don’t even want to be in the Overton window, and I’m sorry.
but I think it might be poisonous to precommit against science. believing true things is dual use. empirical beliefs are not assigned any moral status whatsoever. I don’t care a lot about the object level here because it’s not morally relevant, and it’s only tactically relevant for things way outside my wheelhouse. But a culture that says “if you’re investigating this mindkilled empirical topic that vanishingly few people have real expertise on, you’re on thin ice, because a priori we know there’s a right answer and a wrong answer socially speaking” is alarming and kinda anti-EA. Pointing to hypothetical harms that can be downstream of beliefs propagating (by belief I mean in the strictest sense of an empirical and falsifiable map of the territory) doesn’t get you out of that for free.
source: co-run EA Philly with someone. my diversity credentials: used to tutor math at a community college, was highly involved BLMer 2014-2016
For the record: Duncan’s comment may have swayed me further toward seeing the harms of virtue signaling, making me more negative about the statement than I was when I chimed in on Slack.
This is really sad and frustrating to see: a community which prides itself on rigorous and independent thinking has taken to reciting the same platitudes that every left-wing organization does. We’re supposed to hold ourselves to higher standards than this.
Posts like this make me much less interested in being a part of EA.
Great job, Rocky and signatories. Statements are not programs, but neither are they nothing. They take a ton of courage and hard work to write. Proud of everyone who engaged in good faith to put this forward and to strengthen EA as a community.
This response was meant to be a separate post on the forum, but seeing as the original pledge post is getting (semi-)downvoted, I’ve decided to just leave it as a comment to not boost its visibility.
Anti-Racism and Anti-Sexism in EA shouldn’t be top cause areas for 99% of people
Epistemic status: semi-rant after seeing who signed the “Anti-sexism and Anti-racism pledge”, but I mean all that is written here.
Tl;dr: “Sexism and racism in EA” are bad, if they are present, but even if they are, malaria and AI risks are worse. So 99% of EAs should not bother with the former at all.
Part I—Anti-Racism and Anti-Sexism in EA shouldn’t be the top cause area for almost anyone
The list of the world’s most pressing problems on the 80,000 Hours webpage does not include “sexism and racism in EA”, nor even “sexism and racism” in the world at large. And I think that is not a mistake.
How does using one’s time on an e-mail from 20 years ago or on an even very inappropriate and abhorrent sexual comment score on the Importance / Tractability / Neglectedness framework?
Do those go out the window the moment someone acts (or acted 20 years ago) in a way that is fashionable in today’s mainstream to get outraged about?
Some people should be taking care of possible -isms in EA. But this should be limited, in my opinion, to: the people involved in the situation, the people closest to them (for support), the Community Health team, and maybe the authorities if the act in question is of a certain magnitude.
If sexism / racism / etc. happens to someone in the EA community they should be able to report it to the Community Health team or other appropriate body or authorities and those people should take appropriate actions.
But for the community as a whole, my actual Fermi estimate is that 99.8% of people shouldn’t bother with “-isms in EA” at all.
The signatories of the pledge include a lot of directors/members of groups outside of that 0.2%. Do Atlanta, Deloitte or MIT have no more important cause areas than this and should really focus on this?
Part II—Virtue signalling about Anti-racism and Anti-sexism shouldn’t be the top cause area for absolutely anyone
While 0.2% of people in EA should be taking care of possible instances of -isms in EA, I think no one should be bothering themselves with fuzzy, imprecise pledges that do nothing except virtue signal (and take up space for actual actions).
Duncan’s comment already pointed that out better than I ever would be able to, so I’m just going to leave this link to Jonathan Pie’s video (not endorsing racism is as impressive as not endorsing paedophilia, you don’t get a medal for it).
Part III—Parable: The shallow pond and the sexist comment thought experiment
Imagine you are on your way to a party in a very fancy and expensive suit. On your way you pass a shallow pond in which you notice a child drowning. At the exact same moment you notice that on the other side of the road someone is making a very inappropriate and abhorrent comment about masturbating before meeting them to an adolescent half their age (but not a child). Ask yourself an honest question—truly no judging here—which of those situations gets you more outraged?
It is okay to have different moral systems, and it is okay to be more outraged or terrified by mediocre sexism/racism than by people dying of malaria, or by AI risks. But if one does so, I don’t think one should call themselves an EA.
I am strongly ambivalent about the publishing of the pledge as written; I was invited to sign by someone who I trust and respect, the original post is made by someone who I’ve clearly seen exhibit thoughtfulness and acumen in thorny situations, and yet when I initially read the statement my thoughts were largely along the same lines as Duncan’s. I came here to find that he’d articulated them more thoroughly than I could. As is so often the case, his opinion is clearer than mine, pointing in the same direction, but also seems to go further than mine.
In this instance I think the claim of probable harm by crowding out the space is overstated, and I am moved by Lorenzo’s frame of “the counterfactual is likely to have been nothing rather than something better.” So rather than this being a well-intentioned pavestone on the road to hell or on the road to clear improvements, I see it more as a symbolic marker at the fork of those two paths. Because I’m not sure which direction the community will take (or whether this pledge is in fact farther along the better/worse paths and not just at the fork), I haven’t signed.
What I am more confident about is that the response “Anti-racism and Anti-sexism in EA shouldn’t be top cause areas for 99% of people” is clearly detrimental. While I agree with the quoted title, I’m also confident that the writers and signers of the pledge would agree with it. Nothing that I’ve seen from the people involved, including the posting of this pledge, indicates that anyone considers anti-racism and anti-sexism a top cause area or believes that action in those directions should be prioritized over other work. The position that you are taking, and the obvious implication of the “shallow pond and the sexist comment thought experiment,” relies on the false conflation of “taking any action” with “advancing this cause to one’s top priority.”
Rocky wrote, solicited feedback, and published the pledge as part of her role as a community builder, while doing other EA movement building work that I think can (arguably) be valued as a top cause area. The various signatories likewise took a few minutes to read and sign, and none of them indicated that they were choosing this as a focus. Granted, the lack of specificity is part of my problem with the pledge as written, but I don’t think it justifies pushback of the nature:
I model those who wrote and signed this pledge as considering some limited action on maintaining robust anti-racism and anti-sexism norms as essential to keeping the community healthy and functioning. Like any other work in EA infrastructure, this is just part of the necessary business; does someone spending part of the day making sure the WiFi is working in an EA office mean they think “providing internet access to EAs is a top cause area”?
Your post on the other hand, if it follows its own logic as I understand it, indicates that you think it should be a top cause area for yourself to argue against anti-racism and anti-sexism work. If you believe that writing and posting on the EA forum equates to considering a topic one’s focus, then I genuinely ask: what moral calculus justifies your own response? I personally don’t hold the view that writing a forum post is equivalent to prioritizing its topic over any other. I think, like keeping the WiFi running, the conversation initiated by Rocky and the criticism it generated in Duncan and myself are part of what keeps the EA movement healthy and capable of working effectively and impactfully on the causes we actually consider important.
Thank you for your reply and questions. As stated at the beginning of my post, it was a bit of a rant, but I probably used too many hyperboles, and you point them out well.
So please let me clarify / answer your last paragraph:
If it were just a forum post, I wouldn’t have reacted like that. I consider “working on a statement for the past few weeks” (as quoted at the beginning of the OP), making a public pledge, and gathering a group of signatories to sign it as something notably bigger than just a forum post. Something that shouldn’t be used like that, especially for applause lights (vide Duncan’s answer).
Not a top cause area (again, forum post < pledge with signatories), but something I think is somewhat important. Because IMO -isms are not going to erode this community—I would bet almost everyone here is against them, many of us actively. But losing EA values—good epistemics, avoiding virtue signalling and posturing, avoiding groupthink, etc.—might.
To paraphrase Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely.:
Does sexism in EA make me want to renounce the actions of the EA community? No.[1] Does this pledge? A bit.
Because it is unanimously condemned.
I do wish that I had been more constructive in my own reply, rather than merely arguing against your arguments. I will try to remedy that here by specifically addressing why I declined an invitation to sign, despite my understanding that there are genuine problems of racism and sexism in our community and my desire to work against them.
As I led with, I am strongly ambivalent. So while it would not take much to tip me over into the belief that signing is probably net good, it would take a great deal more to reassure me that all harms of such a pledge were acknowledged and mitigated. I updated positively on Rocky’s response to Duncan, explaining that specific actions were removed in an attempt to create a generalized statement upon which groups (with varying levels of resources) could build. If there were language in the pledge that clearly addressed this, it would probably be sufficient for me to sign and cautiously endorse the pledge.
The best possible version of this proposal that I can imagine is not a pledge, but a roll call. For example, I would be completely on board if EA-NYC had finished their public DEI policy and made an announcement to the effect of: “We condemn racism and sexism, here is what we are trying to do about it. Please give us feedback and feel free to use any element of ours to establish your own policy. Once you have done so, please sign and link to your own policy, and in that way we can make a strong demonstration to those who are uncertain about the EA community’s commitment against bigotry.”
Similarly, I do think the statement as written can easily be perceived as applause lights. I think it can be (and is) completely true that many EAs’ experience is that racism and sexism are already universally condemned and that community builders regularly encounter those with uncertainty. So I am very empathetic to the perceived need to put out a statement even before specific proposals are ironed out. (Having been through the process myself, I can well imagine the “weeks” that the simple pledge above took were not actually workweeks of any individual(s), but merely the difficulty of establishing any kind of consensus around messaging). If the pledge made clear that it did not aim to reify some new commitment to anti-racism/anti-sexism and was intended primarily to be a reference of common knowledge towards which we could direct uncertain newcomers (or antagonistic journalists), that too might have been sufficient to convince me to sign.
The largest factor for me is the one Duncan addressed: the potential consequences of dividing the relevant parties into those who did sign and those who didn’t. I see the value in a list of names, I really do. And as I’ve said, now that the list exists, it wouldn’t take very much more to get me to sign it. But I would still prefer a world where there wasn’t one.
I model those who don’t see the possible harm as making the same (imho) mistake as those who dismiss privacy concerns with “if you don’t have anything to hide, you don’t have anything to fear.” Because who could object to declaring oneself against bigotry? I don’t really think it’s probable that anyone in EA will weaponize the division any more than I think it’s dangerous that some people post their home addresses on lists of EA couchsurfing options. I nevertheless want to support robust privacy norms, and norms against creating lists of “Right Minded” individuals.
I think the value created by the list, of demonstration of commitment, could be accomplished better by lists of actions taken (and, in EA fashion, a bunch of discussion about the best ways to measure the impact of those actions). I don’t have a suggestion on how to mitigate the potential harms of dividing people into signers and not signers (besides my most vehement exhortation to not create a list of non-signers. If the list of those invited to sign existed and clearly delineated between those who signed and those who didn’t, I would absolutely object to the entire attempt). I’m not sure if it would have been good to add language that acknowledges the possible harm, though I would appreciate it.
I do want to note that in addition to the comments above, a primary consideration for why I did not/have not yet signed the pledge is that I do not speak on behalf of any community building organization. I do community building work, and may do so in a more official capacity in the near future. I am in the early stages of establishing a role that will hopefully come to fruition; if I were currently holding that position I would have signed the pledge (to represent the stance of the organization) and voiced my objections. As it stands, I think it is valuable to speak as an individual (the only, so far as I know) invited to sign the pledge who has declined. Because although some significant part of my objection is that the existence of such a pledge could in theory be used as a weapon against those who do not sign it, in practice I do not believe the individuals who created the pledge would in fact do so.
I do think perspective is important. Worth its own thread, imo.