I agree the racism critique is overstated, but I think there’s a more nuanced argument for a need for greater representation/inclusion for xrisk reduction to be very good for everyone.
Quick toy examples (hypothetical):
- If we avoid extinction via very rich, nearly all-white people building enough sustainable bunkers, the human species continues/rebuilds, but the outcome is not good for non-white people.
- If we do enough on climate change to avoid the xrisk scenarios (say, getting stuck at the poles with minimal access to the resources needed to progress civilisation, or something), but not enough to avoid massively disadvantaging most of the global south, we badly exacerbate inequality (maybe better than extinction, but not what we might consider a good outcome).
And so forth. So the more nuanced argument might be we (a) need to avoid extinction, but (b) want to do so in such a way that we don’t exacerbate inequality and other harms. We stand a better chance of doing the latter by including a wider array of stakeholders than are currently in the conversation.
It seems odd to me to criticise a movement as racist without at least acknowledging that the thing we are working on seems more beneficial for non-white people than the things many other philanthropists work on. The examples you give are hypothetical, so they aren’t a criticism of what longtermists do in the real world. Most longtermists are focused on AI, bio and to a lesser extent climate risk. I fail to see how any of that work has the disparate demographic impact described in the hypotheticals.
Thanks Halstead. I’ll try to respond later, but I’d quickly like to be clear re: my own position that I don’t perceive longtermism as racist, and/or am not claiming people within it are racist (I consider this a serious claim not to be made lightly).
and/or am not claiming people within it are racist (I consider this a serious claim not to be made lightly).
Do you mean to say that
P1: People in X are racist
vs
P2: People in X are not racist
are serious claims that are not to be made lightly?
(Non-sequitur below, may not be interesting)
For what it’s worth, my best guess is that having the burden of proof on P1 is the correct decision procedure in the society we live in, as these accusations have a lot of associated baggage and we don’t currently have a socially acceptable way to say naively reasonable things like “Alice is very likely systematically biased against X group to Y degree in Z sphere of life, so I trust her judgements less about X as applied to Z, but all things considered Alice is still a fine person to work or socially interact with.”
But all else equal, a society where the burden of proof is on P2 would be slightly better, as that is a more accurate representation of affairs (see eg for an illustration of what I mean).
In particular, I think it is an accurate claim that most humans and most human institutions, in most times and places, are at least somewhat racist* (though I think the demographic compositions that people point to as “problematic” in EA, like higher education levels, should on average probably point towards less racism rather than more).
Right now social consensus appears to be that we “call out” and socially exile people above a certain (unspecified) degree of racism, and people below that bar are considered “not racist”, despite clear statistical evidence to the contrary for anybody who bothers to look.
Unfortunately, my best guess is that this topic is too politically charged for EAs to make much headway with, it overall isn’t especially important, and also trying to do so may draw the attention of hostile actors who may use our missteps here against us.
So I think my all-things-considered position is that we should basically go with the social consensus of pretending that racism below a certain degree doesn’t exist, even though the situation is moderately unfortunate for me personally.
* There are different definitions of racism. The operative definition I use is “do I expect to be treated in statistically distinguishable ways from a demographic twin who happens to be Caucasian (or black, or Indian, etc, depending on the topic of conversation)?”
Thanks Linch. I’d had
P1: People in X are racist
in mind in terms of “serious claim, not to be made lightly”, but I acknowledge your well-made points re: burden of proof on the latter.
I also worry about the distribution of claims in terms of signal vs noise. I think there’s a lot of racism in modern society, much of it glaring and harmful, but difficult to address (or sometimes outside the Overton window to even speak about). I don’t think matters are helped by critiques that go to lengths to read racism into innocuous texts, as the author of one of the critiques above has done in my view (in other materials, and on social media).
I agree that reading racism or white supremacy into innocuous texts is harmful, and for the specific instances I’m aware of, they both involved selective quote mining, and the mined quotes weren’t very damning even out of context.
even though the situation is moderately unfortunate for me personally.
I think a writeup about this would be very interesting, even if short or at a much lower quality/epistemic certainty than many of your other comments.
Unfortunately, my best guess is that this topic is too politically charged for EAs to make much headway with, it overall isn’t especially important, and also trying to do so may draw the attention of hostile actors who may use our missteps here against us.
I agree that the EV of most meta race discussions seems negative, even though there are substantial perspectives that might be useful and go unsaid.
For example, steelmanning the Scott Alexander event on both sides would be a useful exercise. This includes steelmanning the NYT writer’s assertion of a sort of SV cabal, a perspective that makes their behavior more virtuous and that doesn’t seem to be discussed.
This steelman for the NYT, against Scott Alexander, would say that the doxxing issue is just a layer/proxy for optics issues which Scott Alexander arguably should bear, which in turn is a layer/proxy for Silicon Valley power and media. The latter two layers are far more interesting and important than doxxing, despite being unexamined by the rationalist community.
This steelman is probably represented by the views in this New Yorker article (that avoids most of the loaded racism issues).
It’s fascinating to watch this conflict between two intelligentsia on opposite coasts. Both seem truth-seeking and worthy of respect, but are in a contest whose nature seems unacknowledged by the rationalist side.
While limited, the relevance to this post and similar discussions is that the New Yorker’s perspective, which looks down on the self-importance and arcane re-invention of the rationalist community, is probably the mainstream view. If these perspectives are true, EA probably has to deal with this too when advancing longtermism.
There are probably ways of dealing with this issue (that might be better than chalking issues up to presentation or “weirdness”), but this seems very hard, I haven’t thought about it much, and I feel like I will write something dumb. Also, I think there’s low demand for this comment, which is already very long.
This is somewhat relevant to the top level post and the articles it refers to (that seem lower in quality than the New Yorker article).
What’s fascinating about this is the conflict between two intelligentsia on opposite coasts. Both seem truth seeking and worthy of respect, but are in a contest whose nature and stakes seem unacknowledged.
For what it’s worth, I got the opposite impression. I think neither side is particularly truth-seeking, and much more out to “win” rather than be deeply concerned with what is true. My own experience during the whole SSC/NYT affair was to get very indignant and follow @balajis* (who I’ve since muted), a tech personality with a crusade against tech journalism, and reading him only helped amplify my sense of zealotry against conventional media. On reflection this was very far from my ideals or behaviors I’d like to have going forwards, and I consider my behavior then moderately large evidence against my own truth-seeking.
I think the SSC/NYT event was a fitting culmination of the Toxoplasma of Rage that SSC itself warned us about, and some members of our movement, myself included, were nontrivially corrupted by external bad-faith actors (on both sides).
* To be clear, this is not a condemnation of him as a person or of his work or anything, just of his Twitter personality.
It seems like you are describing a difficult personal experience. I think the rationalist community and Scott Alexander are altruistic and virtuous, so going through the journey in the way I think you are describing would make anyone indignant.
I did not have the same experience with this incident, but I have held beliefs and made many poor decisions I have regretted, in very different domains/places, almost certainly with much worse epistemics than you.
even though the situation is moderately unfortunate for me personally.
I think a writeup about this is very interesting, even if short or at a much lower quality/epistemic certainty than many of your other comments.
I don’t think there’s anything particularly interesting here. The short compression of my views is that different people have competing access needs, and I don’t feel like I have a safe space outside of a very small subset of my friends to say something pretty simple and naively reasonable like
My view is that my/your interaction with this system/person is parsimoniously explained by either racism or a conjunction of factors that includes racism. I would like your help in verifying whether the evidence checks out, as I tend to get emotional about this kind of thing. I would also like to talk about mitigation strategies, like how I can minimize this type of interaction in the future. No, I am not claiming that this system deserves to burn down/this person ought to be cancelled. Yes, I think the system/person is probably fine in the grand scheme of things.
without basically getting embroiled in a proxy culture war. I feel like many people (even ones I naively would have thought to be fairly reasonable) would rush to defend the system/person, if they like the system/person, against any charges of racism that don’t have enough evidence to secure a conviction in court. Or worse, they would immediately rush to “my defense” and get very indignant on my behalf without being very objective about the whole thing, even though, given that I was the one who was emotional at the time, them being more emotional is less helpful (I say “worse” on epistemic grounds, even though in the heat of the moment I often appreciate it).
For the sake of completeness, I will note that AFAICT, none of these (coded racist) interactions have happened professionally in EA. There’s an important caveat that the statistical nature of discrimination makes it hard for me to be sure, of course, but my experience with other systems is that it is often not all that subtle.
Thank you for writing this. There is a lot of personal insight and color in this answer, and I think it informed me and other readers.
I feel like it is appropriate to respond by sharing some personal experience, but I don’t really know what to immediately say. This is not because of political correctness/self-censoring but because there is a lot of personal depth involved and I’m worried I will not give an insightful and fully True answer (and I think there is low demand).