The race stuff is much more right-coded than some of the other genetic/disability stuff.
David Mathers
I think he is spreading the view because he strategizes about doing so in the quoted email (though it’s a bit hard to specify what the view is, since it’s not clear what probability “probably” amounts to.)
You are a polite and careful critic, I think you will not get a mega-hostile reaction from most people. (If the worry is just that you won’t persuade, then, well, you’re not making things worse.)
Do we actually have hard statistical evidence that rationalists as a group “lean right”? I am highly unsympathetic to right rationalism, as you can see here: https://forum.effectivealtruism.org/posts/kgBBzwdtGd4PHmRfs/an-instance-of-white-supremacist-and-nazi-ideology-creeping?commentId=tNHd9C8ZbazepnDqs And it certainly feels emotionally true to me that “rationalism is right-wing”. (Which is one reason I consider myself an EA but not a rationalist, although that is mostly just because I entered EA through academic philosophy rather than rationalism, and other than reading a lot of SSC/ACX over the years, have only ever interacted with rationalists in the context of doing EA stuff.) Certain high-profile individual rationalists seem to hold a lot of taboo/far-right beliefs (e.g. Scott Alexander on race and IQ here: https://www.astralcodexten.com/p/book-review-the-origins-of-woke). Roko and Hanania are of course even more right-wing (and frankly pretty gross in my view), though hopefully they are outliers.
BUT
Over the years, I have observed a general pattern with what we can call “rationalist-like” groups: i.e. lots of men, mostly straight and white; lots of autism, broadly construed; an interest in telling the harsh truth; reverence for STEM and skepticism of the humanities; more self-declared right-libertarians than the population average; etc.:
1) The group gains a reputation for being right-wing, sexist, bigoted etc.
2) People in the group get very offended about this; I get a bit offended too: most people I have met within the group seem moderate, with a lean towards the centre-left rather than the centre-right. I feel as a person with mild autism that autistic truth-telling and bluntness is getting stigmatised by annoying, overemotive people who can’t defend their views in a fair argument.
3) Gradually, thought leaders of the group have a lot of scandals involving some combination of misogyny, sexual harassment, Islamophobia, racism, eugenics, Western chauvinism etc.
4) I start thinking “ok, maybe [group] actually is right-wing, and I am either a sucker to be involved with it, or self-deceived about my own (self-declared liberal centrist) political preferences (after all, I do get irritated with the left a lot and believe some un-PC things; I think I am pro some transhumanist genetic engineering stuff in principle, maybe; am not particularly left- or right-wing economically; maybe “liberalism” is what angry Marxists on twitter say it is; etc. etc.)”
5) Some survey data comes out about the political views of rank-and-file members of [group]. They are overwhelmingly centre-left liberal. Where there is evidence on views about gender specifically, they are also pretty centre-left liberal. I feel even more confused.
Over the years I have seen this pattern to varying degrees with:
-Movement atheism (Can’t find the survey data I once saw highlighted on twitter on this by a surprised critic of right-wing movement “skepticism” so you’ll have to trust me on this one.)
-Analytic philosophy (relatively speaking: seen as a “right-wing” subject in the humanities in relative terms, and a bastion of sexism: both those might be true, but nonetheless, considerably more analytic philosophers endorse socialism than capitalism, and a slight majority are socialists: https://survey2020.philpeople.org/survey/results/5122)
-”Tech” itself: https://www.noahpinion.blog/p/silicon-valley-isnt-full-of-fascists
-EA itself (Compare our bad reputation on the left, as far as I can tell, with the fact that more EAs identify as “left” than “centre”, “centre-right”, “right”, “libertarian” or “other” put together, even when “centre-left” is also an option: https://forum.effectivealtruism.org/posts/AJDgnPXqZ48eSCjEQ/ea-survey-2022-demographics#Politics).
I have seen less hard data for the rationalists, but I do recall, about ten years ago, Scott Alexander trumpeting that the average LessWrong user rated “feminism” at least as positively on a 1-5 scale as the average American woman did. (Though the median American woman politically is probably something like an elderly Latina churchgoer with economically left-wing, socially conservative Catholic views?) And whilst survey data of SSC readers at one point showed that most endorsed “race realism” (I remember David Thorstad pointing this out on twitter), and I would not hesitate to describe ACX as “linked to the far-right”, I nonetheless seem to remember that when Scott surveyed readers on a 1-10 left-right scale, the median reader was a 4.something, i.e. very slightly more left- than right-identified.
I am not sure what is going on with this, probably a mixture of:
-People being self-deceived about their views and being more right-wing than they think they are, because the right is stigmatized in wider intellectual culture and people don’t want to see themselves as part of it.
-People in these spaces hold mostly left views, but they mostly hold relatively uncontroversial left-wing views, or are not a prime target for the right-wing press for other reasons, whilst the minority of right-wing views they do hold tend to be radioactively controversial, so those end up in the media.
-I mostly read centre-leftish media (The Guardian, Yglesias, Vox until the last couple of years) or critics of “wokeness” who are not straightforwardly conservative (Yglesias again, Singal), rather than conventional conservative stuff, so I hear about “woke”/left anger with these groups, but not right-wing anger with them. I also pay less attention to the latter because I just care less about it; it’s not a source of personal angst for me in the same way.
-People who want to/get to become leaders in these sorts of spaces differ in their traits from the median member of the group in ways that make them predictably more right-wing than the average.
-*Becoming* a leader makes you more right-wing, since you like hierarchy more when you’re on top of the local hierarchy.
-People confuse a (perceived and/or real) tendency towards sexual bad behaviour amongst autistic nerds with a right-wing political position.
-These groups are well to the left of the median citizen, but they are to the right of the median person with a master’s degree, so most people in “intellectual” spheres are correctly picking up on them being more right-wing than themselves and their friends, but wrongly concluding that this makes them “right-wing” by the standards of the public as a whole.
-Anything stereotypically “masculine” outside of a strike by manual labourers gets coded as “right” these days, facts be damned.
-There is a distinctive cluster of issues around “biodeterminism” (eugenics, biological race and gender differences etc.) on which these groups are very, very right-wing on average, but on everything else they are centre-left.
It only definitely follows from humans being net negative in expectation that we should try to make humans go extinct if you are both a full utilitarian and “naive” about it, i.e. prepared to break usually sacrosanct moral rules when you personally judge that doing so is likely to have the best consequences, something which most utilitarians take to be likely to usually result in bad consequences, and therefore to be discouraged. Another way to describe ‘make humanity more likely to go extinct’ is ‘murder more people than all the worst dictators in history combined’. That is the sort of thing that is going to look like a prime candidate for ‘do not do this, even if it has the best consequences’ on non-utilitarian moral views. And it is also obviously breaking standard moral rules.
I should probably stop posting on this or reading the comments, for the sake of my mental health (I mean that literally, this is a major anxiety disorder trigger for me.) But I guess I sort of have to respond to a direct request for sources.
Scott’s official position on this is agnosticism, rather than public endorsement*. (See here for official agnosticism: https://www.astralcodexten.com/p/book-review-the-cult-of-smart)
However, for years at SSC he put the dreaded neo-reactionaries on his blogroll. And they are definitely race/IQ guys. Meanwhile, he was telling friends privately at the time that “HBD” (i.e. “human biodiversity”, which generally includes the idea that black people are genetically less intelligent) is “probably partially correct or at least very non-provably non-correct”: https://twitter.com/ArsonAtDennys/status/1362153191102677001 . That is technically still leaving some room for agnosticism, but it’s pretty clear which way he’s leaning. Meanwhile, he was also saying in private not to tell anyone he thinks this (I feel like I figured out his view was something like this anyway though? Maybe that’s hindsight bias): ‘NEVER TELL ANYONE I SAID THIS, not even in confidence’. And he was also talking about how publicly declaring himself to be a reactionary would be bad strategy for PR reasons (“becoming a reactionary would be both stupid and decrease my ability to spread things to non-reactionary readers”). (He also discusses how he writes about this stuff partly because it drives blog traffic. Not shameful in itself, but I think people in EA sometimes have an exaggerated sense of Scott’s moral purity and integrity that this sits a little awkwardly with.) Overall, I think his private talk on this paints a picture of someone who is too cautious to be 100% sure that Black people have genetically lower IQs, but wants other people to increase their credence in that to >50%, and is thinking strategically (and arguably manipulatively) about how to get them to do so. (He does seem to more clearly reject the anti-democratic and the most anti-feminist parts of Neo-Reaction.)
I will say that MOST of what makes me angry about this is not the object-level race/IQ beliefs themselves, but the lack of repulsion towards the Reactionaries as a (fascist) political movement. I really feel like this is pretty damning (though obviously Scott has his good traits too). The Reactionaries are known for things like trolling about how maybe slavery was actually kind of good: https://www.unqualified-reservations.org/2009/07/why-carlyle-matters/ Scott has never seemed sufficiently creeped out by this (or really, at all creeped out by it, in my experience). But he has been happy to get really, really angry about feminists who say mean things about nerds**, or, in one case I remember, stupid woke changes to competitive debate. (I couldn’t find that one by googling, so you’ll have to trust my memory about it; the changes were stupid, just not worth the emotional investment.) Personally, I think fascism should be more upsetting than woke debate! (Yes, that is melodramatic phrasing, but I am trying to shock people out of what I think is complacency on this topic.)
I think people in EA have a big blind spot about Scott’s fairly egregious record on this stuff, because it’s really embarrassing for the community to admit how bad it is, and because people (including me often; I feel like I morally ought to give up ACX, but I still check it from time to time) like his writing for other reasons. And frankly, there is also a certain amount of (small-r) reactionary white male backlash in the community. Indeed, I used to enjoy some of Scott’s attacks on wokeness myself; I have similar self-esteem issues around autistic masculinity as I think many anti-woke rationalists do. My current strongly negative position is one I’ve come to slowly over many years of thinking about this stuff, though I was always uncomfortable with his attitude towards the Reactionaries.
*[Quoting Scott] ’Earlier this week, I objected when a journalist dishonestly spliced my words to imply I supported Charles Murray’s The Bell Curve. Some people wrote me to complain that I handled this in a cowardly way—I showed that the specific thing the journalist quoted wasn’t a reference to The Bell Curve, but I never answered the broader question of what I thought of the book. They demanded I come out and give my opinion openly. Well, the most direct answer is that I’ve never read it. But that’s kind of cowardly too—I’ve read papers and articles making what I assume is the same case. So what do I think of them?

This is far enough from my field that I would usually defer to expert consensus, but all the studies I can find which try to assess expert consensus seem crazy. A while ago, I freaked out upon finding a study that seemed to show most expert scientists in the field agreed with Murray’s thesis in 1987 - about three times as many said the gap was due to a combination of genetics and environment as said it was just environment. Then I freaked out again when I found another study (here is the most recent version, from 2020) showing basically the same thing (about four times as many say it’s a combination of genetics and environment compared to just environment). I can’t find any expert surveys giving the expected result that they all agree this is dumb and definitely 100% environment and we can move on (I’d be very relieved if anybody could find those, or if they could explain why the ones I found were fake studies or fake experts or a biased sample, or explain how I’m misreading them or that they otherwise shouldn’t be trusted. If you have thoughts on this, please send me an email). I’ve vacillated back and forth on how to think about this question so many times, and right now my personal probability estimate is “I am still freaking out about this, go away go away go away”.
And I understand I have at least two potentially irresolvable biases on this question: one, I’m a white person in a country with a long history of promoting white supremacy; and two, if I lean in favor then everyone will hate me, and use it as a bludgeon against anyone I have ever associated with, and I will die alone in a ditch and maybe deserve it. So the best I can do is try to route around this issue when considering important questions. This is sometimes hard, but the basic principle is that I’m far less sure of any of it than I am sure that all human beings are morally equal and deserve to have a good life and get treated with respect regardless of academic achievement.
(Hopefully I’ve given people enough ammunition against me that they won’t have to use hallucinatory ammunition in the future. If you target me based on this, please remember that it’s entirely a me problem and other people tangentially linked to me are not at fault.)’
** Personally I hate *some* of the shit he complains about there too, although in other cases I probably agree with the angry feminist takes and might even sometimes defend the way they are expressed. I am autistic and have had great difficulties attracting romantic interest. (And obviously, as my name indicates I am male. And straight as it happens.) But Scott’s two most extensive blogposts on this are incredibly bare of sympathetic discussion of why feminists might sometimes be a bit angry and insensitive on this issue.
I think this is too pessimistic: why did one of Biden’s cabinet ask for Christiano in one of the top positions at the US gov’s AI safety org if the government will reliably prioritize the sort of factors you cite here to the exclusion of safety?: https://www.nist.gov/news-events/news/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety
I also think that whether or not the government regulates private AI has little to do with whether it militarizes AI. It’s not like there is one dial with “amount of government” and it just gets turned up or down. Government can do very little to restrict what Open AI/DeepMind/Anthropic do, but then also spend lots and lots of money on military AI projects. So worries about militarization are not really a reason not to want the government to restrict Open AI/DeepMind/Anthropic.
Not to mention that, insofar as the basic science here is getting done for commercial reasons, any regulations which slow down the commercial development of frontier models will actually slow down the progress of AI for military applications too, whether or not that is what the US gov intends, and regardless of whether those regulations are intended to reduce X-risk or to protect the jobs of voice actors in cartoons facing AI replacement.
I trust EV more than the charity commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One judgment here is incredibly liable to distortion through self-interest and ego preservation, and it’s not the charity commission’s. (That’s not a prediction that the charity commission will in fact harshly criticize EV. I wouldn’t be surprised either way on that.)
’also on not “some moral view we’ve never thought of”.’
Oh, actually, that’s right. That does change things a bit.
People don’t reject this stuff, I suspect, because there is, frankly, a decently large minority of the community who think “black people have lower IQs for genetic reasons” is suppressed forbidden knowledge. Scott Alexander has done a lot, entirely deliberately in my view, to spread that view over the years (although this is probably not the only reason), and Scott is generally highly respected within EA.
Now, unlike the people who spend all their time doing race/IQ stuff, I don’t think more than a tiny, insignificant fraction of the people in the community who think this actually are Nazis/White Nationalists. White Nationalism/Nazism are (abhorrent) political views about what should be done, not just empirical doctrines about racial intelligence, even if the latter are also part of a Nazi/White Nationalist worldview. (Scott Alexander individually is obviously not a “Nazi”, since he is Jewish, but I think he is rather more sympathetic (i.e. more than zero) to white nationalists than I personally consider morally acceptable, although I would not personally call him one, largely because I think he isn’t a political authoritarian who wants to abolish democracy.) Rather, I think most of them have a view something like “it is unfortunate this stuff is true, because it helps out bad people, but you should never lie for political reasons”.
Several things lie behind this:
-Lots of people in the community like the idea of improving humanity through genetic engineering. While that absolutely can be completely disconnected from racism, and indeed is a fairly mainstream position in analytic bioethics as far as I can tell, in practice it tends to make people more suspicious of condemning actual racists, because you end up with many of the same enemies as them: most people who consider anti-racism a big part of their identity are horrified by anything eugenic. This makes them more sympathetic to complaints from actual, political racists that they are being treated unfairly.
-As I say, being pro genetic enhancement or even “liberal eugenics”* is not that far outside the mainstream in academic bioethics: you can publish it in leading journals etc. EA has deep roots in analytic philosophy, and inherits its sense of what is reasonable.
-Many people in the rationalist community are for various reasons strongly polarized against “wokeness”, which again, makes them sympathetic to the claims of actual political racists that they are being smeared.
-Often, the arguments people encounter against the race/IQ stuff are transparently terrible. Normal liberals are indeed terrified of this stuff, but most lack the expertise to discuss it, so they just claim it has been totally debunked and then clam up. This makes it look like there must be a dark truth being suppressed, when it is really just a combination of two things: almost no one has expertise on this stuff, and, because the causation of human traits is so complex, for any case where some demographic group appears to score worse on some trait, you can always claim the gap could have genetic causes, and in practice it is very hard to disprove this. But of course that is not itself proof that there IS a genetic cause of the differences. The result of all this can make it seem like you have to either endorse unproven race/IQ stuff or take the side of “bad arguers”, something EAs and rationalists hate the thought of doing. See what Turkheimer said about this here: https://www.vox.com/the-big-idea/2017/6/15/15797120/race-black-white-iq-response-critics
’There is not a single example of a group difference in any complex human behavioral trait that has been shown to be environmental or genetic, in any proportion, on the basis of scientific evidence. Ethically, in the absence of a valid scientific methodology, speculations about innate differences between the complex behavior of groups remain just that, inseparable from the legacy of unsupported views about race and behavior that are as old as human history. The scientific futility and dubious ethical status of the enterprise are two sides of the same coin.

To convince the reader that there is no scientifically valid or ethically defensible foundation for the project of assigning group differences in complex behavior to genetic and environmental causes, I have to move the discussion in an even more uncomfortable direction. Consider the assertion that Jews are more materialistic than non-Jews. (I am Jewish, I have used a version of this example before, and I am not accusing anyone involved in this discussion of anti-Semitism. My point is to interrogate the scientific difference between assertions about blacks and assertions about Jews.)
One could try to avoid the question by hoping that materialism isn’t a measurable trait like IQ, except that it is; or that materialism might not be heritable in individuals, except that it is nearly certain it would be if someone bothered to check; or perhaps that Jews aren’t really a race, although they certainly differ ancestrally from non-Jews; or that one wouldn’t actually find an average difference in materialism, but it seems perfectly plausible that one might. (In case anyone is interested, a biological theory of Jewish behavior, by the white nationalist psychologist Kevin MacDonald, actually exists [have removed link here because I don’t want to give MacDonald web traffic - David].)
If you were persuaded by Murray and Harris’s conclusion that the black-white IQ gap is partially genetic, but uncomfortable with the idea that the same kind of thinking might apply to the personality traits of Jews, I have one question: Why? Couldn’t there just as easily be a science of whether Jews are genetically “tuned to” (Harris’s phrase) different levels of materialism than gentiles?
On the other hand, if you no longer believe this old anti-Semitic trope, is it because some scientific study has been conducted showing that it is false? And if the problem is simply that we haven’t run the studies, why shouldn’t we? Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.′
All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda to strip non-whites of their rights, end anti-discrimination measures of any kind, and slash immigration, all on the basis of the fact that, basically, they just really don’t like black people. In fact, given the actual history of Nazism, it is reasonable to suspect that at least some and probably a lot of these people would go further and advocate genocide against blacks or other non-whites if they thought they could get away with it.
*See https://plato.stanford.edu/entries/eugenics/#ArguForLibeEuge
I find it easy to believe there was a heated argument but no threats, because it is easy for things to get exaggerated, and the line between telling someone you no longer trust them because of a disagreement and threatening them is unclear when you are a powerful person who might employ them. But I find Will’s claim that the conversation wasn’t even about whether Sam was trustworthy, or anything related to that, really quite hard to believe. It would be weird for someone to be mistaken or to exaggerate about that, and I feel like a lie is unlikely, simply because I don’t see what anyone would gain from lying to TIME about this.
Nathan’s comment here is one case where I really want to know what the people giving agree/disagree votes intended to express. Agreement/disagreement that the behaviour “doesn’t sound like Will”? Agreement/disagreement that Naia would be unlikely to be lying? General approval/disapproval of the comment?
Yes, but not at great length.
From my memory, which definitely could be faulty since I only listened once:
He admits people did tell him Sam was untrustworthy. He says that his impression was something like “there was a big fight and I can’t really tell what happened or who is right” (not a direct quote!). Stresses that many of the people who warned him about Sam continued to have large amounts of money on FTX later, so they didn’t expect the scale of fraud we actually saw either. (They all seem to have told TIME that originally also.) Says Sam wrote a lot of reflections (10k words) on what had gone wrong at early Alameda and how to avoid similar mistakes again, and that he (Will) now understands that Sam was actually omitting stuff that made him look bad, but at the time, his desire to learn from his mistakes seemed convincing.
He denies threatening Tara, and says he spoke to Tara and she agreed that while their conversation got heated, he did not threaten her.
Will’s expressed public view on that sort of double or nothing gamble is hard to actually figure out, but it is clearly not as robustly anti as commonsense would require, though it is also clearly a lot LESS positive than SBF’s view that you should obviously take it: https://conversationswithtyler.com/episodes/william-macaskill/
(I haven’t quoted from the interview, because there is no one clear quote expressing Will’s position, text search for “double” and you’ll find the relevant stuff.)
Actually, I have a lot of sympathy with what you are saying here. I am ultimately somewhat inclined to endorse “in principle, the ends justify the means, just not in practice” over at least a fairly wide range of cases. I (probably) think in theory you should usually kill one innocent person to save five, even though in practice anything that looks like doing that is almost certainly a bad idea, outside artificial philosophical thought experiments and maybe some weird but not too implausible scenarios involving war or natural disaster. But at the same time, I do worry a bit about bad effects from utilitarianism because I worry about bad effects from anything. I don’t worry too much, but that’s because I think those effects are small, and anyway there will be good effects of utilitarianism too. But I don’t think utilitarians should be able to react with outrage when people say plausible things about the consequences of utilitarianism. And I think people who worry about this more than I do on this forum are generally acting in good faith. And yeah, I agree utilitarians shouldn’t (in any normal context) lie about their opinions.
I don’t necessarily disagree with most of that, but I think it is ultimately still plausible that people who endorse a theory that obviously says in principle bad ends can justify the means are somewhat (plausibly not very much though) more likely to actually do bad things with an ends-justifies-the-means vibe. Note that this is an empirical claim about what sort of behaviour is actually more likely to co-occur with endorsing utilitarianism or consequentialism in actual human beings. So it’s not refuted by “the correct understanding of consequentialism mostly bars things with an ends-justifies-the-means vibe in practice” or “actually, any sane view allows that sometimes it’s permissible to do very harmful things to prevent a many orders of magnitude greater harm”. And by “somewhat plausible” I mean just that. I wouldn’t be THAT shocked to discover this was false; my credence is like 95%, maybe? (1 in 20 things happen all the time.) And the claim is correlational, not causal (maybe endorsement of utilitarianism and ends-justifies-the-means type behaviour are both caused partly by prior intuitive endorsement of ends-justifies-the-means type behaviour, and adopting utilitarianism doesn’t actually make any difference, although I doubt that is entirely true.)
The 3% figure for utilitarianism strikes me as a bit misleading on its own, given what else Will said. (I’m not accusing Will of intent to mislead here; he said something very precise that I, as a philosopher, entirely followed, it was just a bit complicated for lay people.) Firstly, he said a lot of the probability space was taken up by error theory, the view that there is no true morality. So to get what Will himself endorses, whether or not there is a true morality, you have to subtract his (unknown but large) credence in error theory from 1, and then renormalize his other credences so that they add up to 1 on their own. Secondly, there’s the difference between utilitarianism, where only the consequences of your actions matter morally, and only consequences for (total or average) pain and pleasure and/or fulfilled preferences count as consequences, and consequentialism, where only the consequences of your actions matter morally, but it is left open what those consequences are. My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5. This really matters in the current context, because many non-utilitarian forms of consequentialism can also promote maximizing in a dangerous way; they just disagree with utilitarianism about exactly what you are maximizing. So really, Will’s credence in a view that, interpreted naively, recommends dangerous maximizing is functionally (i.e. ignoring error theory in practice) more like 0.5 than 0.03, as I understood him in the podcast. Of course, he isn’t actually recommending dangerous maximizing regardless of his credence in consequentialism (at least in most contexts*), because he warns against naivety.
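To make the renormalization step concrete, here is a toy sketch. All the numbers are my own illustrative assumptions (chosen only so the conditional credence in consequentialism comes out around 0.5); they are not Will's actual figures, which he did not fully specify.

```python
# Toy renormalization of moral credences, conditional on error theory being
# false. All numbers are hypothetical illustrations, not Will's actual figures.

credences = {
    "error_theory": 0.40,                      # assumed: no true morality
    "utilitarianism": 0.03,                    # the unconditional "3% figure"
    "non_utilitarian_consequentialism": 0.27,  # assumed
    "non_consequentialist_views": 0.30,        # assumed
}
assert abs(sum(credences.values()) - 1.0) < 1e-9

# Condition on error theory being false: drop it and renormalize the rest
# so the remaining credences again sum to 1.
remaining = 1.0 - credences["error_theory"]
conditional = {
    view: p / remaining
    for view, p in credences.items()
    if view != "error_theory"
}

# Conditional credence in consequentialism of *some* form:
consequentialism = (
    conditional["utilitarianism"]
    + conditional["non_utilitarian_consequentialism"]
)  # comes out around 0.5 with these toy numbers
```

With these assumed inputs, conditioning pushes utilitarianism from 3% to 5%, and consequentialism broadly to about 50%, which is the sense in which the unconditional 3% can understate how much maximizing-friendly theory is in play.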
(Actually, my personal suspicion is that ‘consequentialism’ on its own is basically vacuous, because any view gives a moral preferability ordering over choices in situations, and really all that the numbers in consequentialism do is help us represent such orderings in a quick and easily manipulable manner, but that’s a separate debate.)
*Presumably sometimes dangerous, unethical-looking maximizing actually is best from a consequentialist point of view, because the dangers of not doing so, or the upside of doing so if you are right about the consequences of your options outweigh the risk that you are wrong about the consequences of different options, even when you take into account higher-order evidence that people who think intuitively bad actions maximize utility are nearly always wrong.
Actually, Wikipedia is characterizing the US bills a bit misleadingly: at least the one I looked at is not a full legal ban on trans women using women’s bathrooms, but seemed to only cover schools specifically.
Whilst there is a bigoted anti-trans backlash in the UK, as seen in the bad legislation under discussion here and Rowling destroying her reputation on twitter, I do think some Americans are a bit over-attached to the view that the UK is uniquely transphobic (“TERF island” etc.). In reality, (binary) trans people can use whatever bathroom they like anywhere in the UK, whereas 5 US states have banned them from doing so in the last 3 years: https://en.wikipedia.org/wiki/Bathroom_bill#United_States The impression that the UK must be much worse than the US on this seems confounded by a) the fact that most people angry about it live in the most liberal places in the US, and b) the fact that the UK lacks much of a Christian right, so people phrasing their opposition to trans inclusion in feminist/liberal terms is partly a product of that, not just “they are so transphobic that even the liberals and feminists are anti-trans”.
Scott seems not unsympathetic to something like* that step here**, though he stops short of clear endorsement: https://www.astralcodexten.com/p/book-review-the-origins-of-woke I think this is a dangerous path to go down.
*”Something like” = if you substitute “all there is” with “a major cause, which makes some standard albeit controversial ways of targeting racial inequality fail a cost/benefit test that they might otherwise pass.”
**Full quote:
’Everyone is so circumspect when talking about race that I can never figure out what anyone actually knows or believes. Still, I think most people would at least be aware of the following counterargument: suppose you’re the math department at a college. You might like to have the same percent black as the general population (13%). But far fewer than 13% (let’s say 2%) of good math PhDs are black. So it’s impossible for every math department to hire 13% black math professors unless they lower their standards or take some other drastic measure.
Okay, says our hypothetical opponent. Then that means math grad programs are discriminating against blacks. Fine, they’re the ones we should be investigating for civil rights violations.
No, say the math grad programs, fewer than 13% of our applicants are black too.
Fine, then the undergrad programs are the racists. Or if they can prove they’re not, then the high schools are racist and we should do busing. The point is, somebody somewhere along the line has to be racist, right?
I know of four common, non-exclusive answers to this question.
1. Yes, the high schools (or whatever) are racist. And if you can present a study proving that high schools aren’t racist, then it’s the elementary schools. And if you have a study there too, it’s the obstetricians, giving black mothers worse pregnancy care. If you have a study disproving that too, why are you collecting all these studies? Hey, maybe you’re the racist!
2. Maybe institutions aren’t too racist today, but there’s a lot of legacy of past racism, and that means black people are poor. And poor people have fewer opportunities and do worse in school. If you have a study showing that black people do worse even when controlled for income, then maybe it’s some other kind of capital, like educational capital or social capital. If you have studies about those too, see above.
3. Black people have a bad culture. Something something shoes and rap music, trying hard at school gets condemned as “acting white”. They should hold out for a better culture. I hear nobody’s using ancient Sumerian culture these days, maybe they can use that one.
4. White people have average IQ 100, black people have average IQ 85, this IQ difference accurately predicts the different proportions of whites and blacks in most areas, most IQ differences within race are genetic, maybe across-race ones are genetic too. I love Hitler and want to marry him.
None of these are great options, and I think most people work off some vague cloud of all of these and squirm if you try to make them get too specific. I don’t exactly blame Hanania for not taking a strong stand here. It’s just strange to assume civil rights law is bad and unnecessary without having any opinion on whether any of this is true, whether civil rights law is supposed to counterbalance it, and whether it counterbalances it a fair amount.
A cynic might notice that in February of this year, Hanania wrote Shut Up About Race And IQ. He says that the people who talk about option 4 are “wrong about fundamental questions regarding things like how people form their political opinions, what makes for successful movements, and even their own motivations.” A careful reader might notice what he doesn’t describe them as being wrong about. The rest of the piece almost-but-not-quite-explicitly clarifies his position: I read him as saying that race realism is most likely true, but you shouldn’t talk about it, because it scares people.
(*I’m generally against “calling people out” for believing in race realism*. I think people should be allowed to hide beliefs that they’d get punished for not hiding. I sympathize with some of these positions and place medium probability on some weak forms of them. I think Hanania is open enough about where he’s coming from that this review doesn’t count as a callout.)
’