even though the situation is moderately unfortunate for me personally.
I think a writeup about this would be very interesting, even if short or at a much lower quality/epistemic certainty than many of your other comments.
Unfortunately, my best guess is that this topic is too politically charged for EAs to make much headway with, it overall isn’t especially important, and also trying to do so may draw the attention of hostile actors who may use our missteps here against us.
I agree that the EV of most meta-level race discussions seems negative, even though there are substantial perspectives that might be useful and seem unsaid.
For example, steelmanning the Scott Alexander event on both sides would be a useful exercise. This includes steelmanning the NYT writer’s assertion of a sort of SV cabal, a perspective that makes their behavior more virtuous and that doesn’t seem to be discussed.
This steelman for the NYT, against Scott Alexander, would say that the doxxing issue is just a layer/proxy for optics issues which Scott Alexander arguably should bear, which in turn is a layer/proxy for the contest between Silicon Valley power and the media. The latter two layers are far more interesting and important than doxxing, despite being unexamined by the rationalist community.
This steelman is probably represented by the views in this New Yorker article (that avoids most of the loaded racism issues).
It’s fascinating to watch this conflict between two intelligentsias on opposite coasts. Both seem truth-seeking and worthy of respect, but they are in a contest whose nature seems unacknowledged by the rationalist side.
While limited, the relevance to this post and similar discussions is that the New Yorker’s perspective, which looks down at the self-importance and arcane reinvention of the rationalist community, is probably the mainstream view. If these perspectives are true, EA probably has to deal with this too when advancing longtermism.
There are probably ways of dealing with this issue (that might be better than chalking problems up to presentation or “weirdness”), but this seems very hard, I haven’t thought about it much, and I worry I would write something dumb. Also, I think there’s low demand for this comment, which is already very long.
This is somewhat relevant to the top-level post and the articles it refers to (which seem lower in quality than the New Yorker article).
What’s fascinating about this is the conflict between two intelligentsias on opposite coasts. Both seem truth-seeking and worthy of respect, but are in a contest whose nature and stakes seem unacknowledged.
For what it’s worth, I got the opposite impression. I think neither side is particularly truth-seeking; both are much more out to “win” than deeply concerned with what is true. My own experience during the whole SSC/NYT affair was to get very indignant and follow @balajis* (whom I’ve since muted), a tech personality with a crusade against tech journalism, and reading him only amplified my sense of zealotry against conventional media. On reflection, this was very far from my ideals or the behavior I’d like to have going forward, and I consider my behavior then moderately strong evidence against my own truth-seeking.
I think the SSC/NYT event was a fitting culmination of the Toxoplasma of Rage that SSC itself warned us about, and some members of our movement, myself included, were nontrivially corrupted by external bad-faith actors (on both sides).
* To be clear this is not a condemnation of him as a person or for his work or anything, just his Twitter personality.
It seems like you are describing a difficult personal experience. I think the rationalist community and Scott Alexander are altruistic and virtuous, so going through the journey in the way I think you are describing would make anyone indignant.
I did not have the same experience with this incident, but I have held beliefs and made many poor decisions I have regretted, in very different domains/places, almost certainly with much worse epistemics than yours.
even though the situation is moderately unfortunate for me personally.
I think a writeup about this would be very interesting, even if short or at a much lower quality/epistemic certainty than many of your other comments.
I don’t think there’s anything particularly interesting here. The short compression of my views is that different people have competing access needs, and I don’t feel like I have a safe space outside of a very small subset of my friends to say something pretty simple and naively reasonable like
I view that my/your interaction with this system/person is parsimoniously explained by either racism or a conjunction of factors that include racism. I would like your help in verifying whether the evidence checks out, as I tend to get emotional about this kind of thing. I would also like to talk about mitigation strategies, like how I can minimize this type of interaction in the future. No, I am not claiming that this system deserves to burn down/this person ought to be cancelled. Yes, I think the system/person is probably fine in the grand scheme of things.
without basically getting embroiled in a proxy culture war. I feel like many people (even ones I naively would have thought to be fairly reasonable) would rush to defend the system/person, if they like it/them, against any charge of racism that doesn’t have enough evidence to hold up in court. Or worse, they would immediately rush to “my defense” and get very indignant on my behalf without being very objective about the whole thing, even though, given that I was the one who was emotional at the time, them being more emotional is less helpful (I say “worse” on epistemic grounds, even though in the heat of the moment I often appreciate it).
For the sake of completeness, I will note that AFAICT, none of these (coded racist) interactions have happened professionally in EA. There’s an important caveat that the statistical nature of discrimination makes it hard for me to be sure, of course, but my experience with other systems is that it is often not all that subtle.
Thank you for writing this. There is a lot of personal insight and color to this answer, and I think this informed me and other readers.
I feel like it is appropriate to respond by sharing some personal experience, but I don’t really know what to say immediately. This is not because of political correctness/self-censoring but because there is a lot of personal depth involved, and I’m worried I will not give an insightful and fully True answer (and I think there is low demand).