I am opposed to this.

I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as “fear”].
Here are some things that are true:
Racism is harmful and bad.
Sexism is harmful and bad.
Other “isms” such as homophobia or religious oppression are harmful and bad.
To the extent that people can justify their racist, sexist, or otherwise bigoted behavior, they are almost always abusing information, in a disingenuous fashion. E.g. “we showed a 1% difference in the medians of the bell curves for these two populations, thereby ‘proving’ one of those populations to be fundamentally superior!” This is bullshit from a truth-seeking perspective, and it’s bullshit from a social progress perspective, and in most circumstances it doesn’t need to be entertained or debated at all. In practice, the burden of proof is already overwhelmingly on the person starting such a discussion, to demonstrate both a) that they are genuinely well-intentioned, and b) that they have something real to talk about.
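(To make concrete just how little that caricatured argument shows, here is a minimal numeric sketch. The normal distributions and the 1% median gap are hypothetical, chosen only to match the quote above.)

```python
# A minimal sketch (hypothetical numbers only) of why a ~1% gap in medians
# between two bell curves cannot support claims of fundamental superiority:
# the two distributions remain almost entirely overlapping.
from statistics import NormalDist

a = NormalDist(mu=100.0, sigma=15.0)  # population A (made-up parameters)
b = NormalDist(mu=101.0, sigma=15.0)  # population B: median 1% higher

# Overlapping coefficient: the shared area under the two density curves.
print(f"overlap between the curves: {a.overlap(b):.1%}")  # ~97.3%

# Chance that a random individual from A outscores one from B.
# A - B is itself normal, with mean -1 and stdev sqrt(15^2 + 15^2).
diff = NormalDist(mu=a.mean - b.mean, sigma=(a.stdev**2 + b.stdev**2) ** 0.5)
print(f"P(draw from A > draw from B): {1 - diff.cdf(0):.1%}")  # ~48.1%
```

In other words, even granting the hypothetical gap, an individual from the “lower” population outscores one from the “higher” population nearly half the time, which is about as far from “fundamentally superior” as a statistic can get.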
However:
Intelligent, moral, and well-meaning people will frequently disagree about the extent to which a given situation is explained by various bigotries as opposed to other factors. Intelligent, moral, and well-meaning people will frequently disagree about which actions are wise and appropriate to take in response to the presence of various bigotries.
Taking anti-racism and anti-sexism and other anti-bigotry positions which are already overwhelmingly popular and overwhelmingly agreed-upon within the Effective Altruism community, and attempting to convert them into Anti-Racism™, Anti-Sexism™, and Anti-Bigotry™ applause lights with no clear content underneath them, accomplishes nothing but the creation of a motte-and-bailey, ripe for future abuse.
There were versions of the above proposal which were not contentless and empty, which staked out clear and specific positions, and which I would’ve been glad to see, enthusiastically supported, and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.
But by just saying “hey, [thing] is bad! We’re going to create social pressure to be vocally Anti-[thing]!” you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to “oh, so you’re NOT opposed to bigotry, huh?”
Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it’s a bad idea for various pragmatic or logistical reasons, it’s pretty easy to imagine that being rounded off to “the opposition is racist.”
(I can imagine people saying “we won’t do that!” and my response is “great—you won’t. Are you claiming no one will? Because at the level of 1000+ person groups, this is how this always goes.”)
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It’s saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.
The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.
Thank you for the thorough feedback. Those involved in drafting the statement considered much of what you laid out and created a more substantive, action-specific version before ultimately deciding against it. There were several reasons for this decision, among them: not wanting to commit (often under-resourced) groups to obligations they would currently be unable to fulfill, the various needs and dynamics of different EA communities, and the time-sensitive nature of getting a statement out. We do not intend for this to be the final word and there is already discussion about follow-up collaborations. We also chose to use the footnote method in the statement document to allow groups to make their own additional individual commitments publicly now.
I do want to push back on the idea that this statement is vacuous, counterproductive, and/or harmful. We chose to create this because of our collective, global, on-the-ground experiences discussing recent events with the communities we lead. I agree it should be silly or meaningless to declare one’s opposition to racism and sexism. But right now, for many following EA discourse, it unfortunately isn’t obvious where much of the community stands. And this is having a tangible impact on our communities and our community members’ sense of belonging and safety. This statement doesn’t solve this. But by putting our shared commitment in plain language, I believe we’ve laid a pavestone, however small, on the path toward a version of EA where statements like this truly are not needed.
I wonder if the statement would have been stronger with a nod in that direction, e.g. something vaguely like: “We recognize that signing a statement is not enough. As signatories, we will be considering specific ideas to combat racism and sexism in the context of the resources, needs, and dynamics of the specific community we help build. The organizers will be continuing to collaborate on a more substantive, action-specific proposal in the coming months.”
I would like for all involved to consider this, basically, a bet, on “making and publishing this pledge” being an effective intervention on … something.
I’m not sure whether the something is “actual racism and sexism and other bigotry within EA,” or “the median EA’s discomfort at their uncertainty about whether racism and sexism are a part of EA,” or what.
But (in the spirit of the E in EA) I’d like that bet to be more clear, so since you were willing to leave a comment above: would you be willing to state with a little more detail which problem this was intended to solve, and how confident you (the group involved) are that it will be a good intervention?
Just to be clear, I think many of us in the community are not uncertain about whether racism and sexism are part of EA. Rather I’m certain that they are, in the sense that many in the community exhibited them in discussions in the last few weeks.
Therefore it’s very meaningful to see a large core of community builders speak out about this explicitly, including disavowing “scientific” racism and sexism specifically. I’m also especially glad to see the head of my own country’s community among them.
Just to be clear, I think many of us in the community are not uncertain about whether racism and sexism are part of EA. Rather I’m certain that they are, in the sense that many in the community exhibited them in discussions in the last few weeks.
I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.
I wish the Google Doc had been more specific. It could’ve said things like:
It’s important to treat people with respect regardless of their race/sex
It’s important to reduce suffering and increase joy for everyone regardless of their race/sex
We should be reluctant to make statements which could be taken as “scientific” justification for ignoring either of the previous bullet points
As written, it seems like the doc has the disadvantage of being ripe for abuse, without the advantage of providing guidelines that let someone know whether the signatories dislike their comment. I think on the margin, this doc pushes us towards a world where EAs are spending less time on high-impact do-gooding, and more time reading social media to make sure we comply with the latest thinking around anti-racism/anti-sexism.
We should be reluctant to make statements which could be taken as “scientific” justification for ignoring either of the previous bullet points
Thank you for stating plainly what I suspect the original doc was trying to hint at.
That said, now that it’s plainly stated, I disagree with it. The world is too connected for that.
Taken literally, “could be taken” is a ridiculously broad standard. I’m sure a sufficiently motivated reasoner could take “2+2=4” as justification for racism. This is not as silly a concern as it sounds, since we’re mostly worried about motivated reasoners, and it’s unclear how motivated a reasoner we should be reluctant to offer comfort to. But let’s look at some more concrete examples:
In early 2020, people were reluctant to warn about COVID-19 because it could be taken as justification for anti-Chinese racism. I can’t actually follow the logic that goes from “a dangerous new disease emerged in China” to “I should go beat up someone of Chinese ancestry,” but it seems a few people who had been itching for an excuse did. Nevertheless, given the relative death tolls, we clearly should have had more warnings and more preparations. The next pandemic will likely also emerge in a place containing people against whom racism is possible (base rate, if nothing else), and pandemic-preparedness people need to be ready to act anyway.
Similarly, many people tried to bury the fact that monkeypox was sexually transmitted because it could lead to homophobia. So instead they warned of a coming pandemic. False warnings are extremely bad for preparedness, draining both our energy and our credibility.
Political and economic institutions are a potentially high-impact cause area in both the near and far term (albeit dubiously tractable). Investigating them is pretty much going to require looking at history, and at least sometimes saying that Western institutions are better than others.
Going back to Bostrom’s original letter, many anti-racists have taken to denying the very idea of intelligence in order to reject it. It’s hard to work on superintelligence-based x-risk (or many other things) without that concept.
I think you make good points—these are good cases to discuss.
I also think that motivated reasoners are not the main concern.
My last bullet point was meant as a nudge towards consequentialist communication. I don’t think consequentialism should be the last word in communication (e.g. lying to people because you think it will lead to good consequences is not great).
But consequences are an important factor, and I think there’s a decent case to be made that e.g. Bostrom neglected consequences in his apology letter. (Essentially making statements which violated important and valuable taboos, without any benefit. See my previous comment on this.)
For something like COVID, it seems bad to downplay it, but it also seems bad to continually emphasize its location of origin in contexts where that information isn’t relevant or important.
“We should be reluctant” represents a consideration against doing something, not a complete ban.
I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.
James Watson’s denial of having made racist statements is a social fact worth noting. Most ‘alt-center’ (etc.) researchers in HBD, and those versed in the latest euphemisms intended to scientifically reappropriate racism for metapolitical and game-theoretic purposes, will, perforce, never outright admit this.
To be clear, I don’t think many EAs are formally working in race science, and surely skeptical and morally astute EAs can have the integrity to admit to having made racist comments or reasonably disagree. (And no: as an African American EA on the left, I don’t think we should unsubscribe every HBD-EA, Bostrom, etc., from social life. Instead, we should model a safer environment for us all to be wrong categorically. Effective means getting all x-risks and compound x-risks, etc. right the first time.)
But after mulling over most of the HBD-affirmed defenses of Bostrom’s email/apology that I’ve read or engaged with on the EA Forum, the ones that weren’t obviously red pills by bad actors (yet were also highly upvoted), I think there are other reasons many of those EAs won’t say their comments were racist, even if they themselves are not actually certain those comments are non-racist.
My hunch is that whether those EAs see HBD as part of the hard core or the protective belt of longtermism/EA’s program may be a good predictor of whether they believe, and therefore would be willing to say, that their comments were racist.[1]
For these reasons, among others, I think this instance of Hirschman’s rhetoric of reaction above is mistaken. It is not disvaluable that community builders in a demographically, socially, and epistemically isolated, elitist, technocratic movement like EA don’t allow the best possible statement of their stance on these issues to become the enemy of a good provisional one.
I was relieved to see this, as well as the fact that Guy made the pushback I wish I’d had time to make 3 days ago. If there’s any way I can support your efforts, please let me know!
1.1. For want of an intensional definition of value-alignment.
1.2. I take little pleasure in suggesting that HBD-relevant beliefs, strongly coupled with, e.g., Beckstead et al.’s (frankly narrow and imaginatively lacking) stance on the most likely sources of economic innovation in the future, sources which may therefore have greater instrumental value to a longtermist utopia, may be one contributing factor to this problem within EA. And even anti-eugenics has its missteps.
Writing very quickly as someone who signed this from EA Italy

I agree that this 12-line letter is not perfect and will not solve racism or sexism, and will probably not do much on its own (otherwise, these issues would have already been solved).
There were versions of the above proposal which were not contentless and empty, which staked out clear and specific positions, and which I would’ve been glad to see, enthusiastically supported, and considered concrete progress for the community.
If you think it’s important and useful, please do work on this; it might be concrete progress for the community! Or, if such versions were already made, it might be useful to share them.
Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it’s a bad idea for various pragmatic or logistical reasons, it’s pretty easy to imagine that being rounded off to “the opposition is racist.”
I would be extremely surprised by this. What % do you give that something like this will happen? If a request to sign reached me, I assume it reached hundreds or thousands of people.
This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It’s saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.
I used to share this thinking, and worry a lot about replaceability, but on the current margin it seems to me that the alternative to thing is almost always not better thing but no thing. So I think it would have been useful for you to make concrete proposals for how thing could be improved next time, but not what I perceive as disincentivizing people from generally doing stuff.

I wouldn’t want Rob Mather not to found the Against Malaria Foundation out of fear of sapping momentum for creating an even better version of a bednet distribution org. I would agree with you if you could share some reason for expecting the counterfactual to be better (e.g. “there was this other much better letter that I was just about to post, but now I don’t want to spam people about this so I will not”).
The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.
Imho the road to anywhere is paved with good intentions, and the most likely counterfactual is standing still, not moving in a better direction, unless you know of some existing and better plans that were hindered by this 12-line letter.
In terms of practical actions, someone from EA Italy (not me) is publishing a Code of Conduct this Sunday instead of in the next weeks, we’re sharing an anonymous form on the website and via other channels, following the advice from EA Philippines (in addition to links to two contact people and the CEA community health team), and we’re going to ask city and university groups to publish these as well.
This would probably have happened anyway, but likely a few weeks later, and it’s nice to have links to resources from other groups while we figure out a strategy. I personally found the advice/experience from EA Philippines to be more useful; otherwise we might have just added contact info but forgotten to add an Italian anonymous form. So I would endorse asking various groups to share what practical actions they are taking, but it doesn’t seem to me that this letter sapped momentum from doing so.
Not writing as anything

I disagree-voted on this because I think it is overly accusatory and paints things in a black-and-white way.
There were versions of the above proposal which were not contentless and empty, which staked out clear and specific positions, and which I would’ve been glad to see, enthusiastically supported, and considered concrete progress for the community.
Who says we can’t have both? I don’t get the impression that EA NYC wants this to be the only action taken on anti-racism and anti-sexism, nor did I get the impression that this is the last action EA NYC will take on this topic.
But by just saying “hey, [thing] is bad! We’re going to create social pressure to be vocally Anti-[thing]!” you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to “oh, so you’re NOT opposed to bigotry, huh?”
I don’t think this is the case—I, for one, am definitely not thinking that anyone who chose not to sign didn’t do so because they are not opposed to bigotry. (Confusing double-negative—but basically, I can think of other reasons why people might not have wanted to sign this.)
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing.
I can think of better outcomes than that—the next time there is a document or initiative with a bit more substance, here’s a big list of people who will probably be on board and could be contacted. The next time a journalist looks through the forum to get some content, here’s a big list of people who have publicly declared their commitment to anti-racism and anti-sexism. The next time someone else makes a post delving into this topic, here’s some community builders they can talk to for their stance on this. There’s nothing inherently wrong with symbolic gestures as long as they are not in place of more meaningful change, and I don’t get the sense from this post that this will be the last we hear about this.
There were versions of the above proposal which were not contentless and empty, which staked out clear and specific positions, and which I would’ve been glad to see, enthusiastically supported, and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.

Can you give some details?
I mean, I don’t have this hypothetical document made in my head (or I would’ve posted it myself).
But an easy example is something of the shape:
[EDIT: The below was off-the-cuff and, on reflection, I endorse the specific suggestion much less. The structural thing it was trying to gesture at, though, of something clear and concrete and observable, is still the thing I would be looking for, that is a prerequisite for enduring endorsement.]
“We commit to spending at least 2% of our operational budgets on outreach to [racial group/gender group/otherwise unrepresented group] for the next 5 years.”
Maybe the number is 1%, or 10%, or something else; maybe it’s 1 year or 10 years or instead of years it’s “until X members of our group/board/whatever are from [nondominant demographic].”
The thing that I like about the above example in contrast with the OP is that it’s clear, concrete, specific, and evaluable, and not just an applause light.
there’s a thing of like:

in the current environment there’s a lot of discourse that goes like “EA needs to be less tolerant of weird people, especially people who do things like poly and kink, in order for EA to feel more safe for women”

given that, the omission of homophobia, transphobia, etc. (especially since these people are very overrepresented here) seems … notable?