Thanks, that’s useful! I guess the surprising thing is maybe just that there are still some fairly prominent names in the rationalist space who express obviously very right wing views and are generally not seen as such (for example, Scott Alexander just wrote a review of Hanania’s new book in which I’d say he almost ends up sounding naive by how much he doesn’t simply acknowledge “well, clearly Hanania is barely stopping shy of saying black people are just stupider”, something Hanania has said openly elsewhere anyway, so it’s hardly a mystery that he believes it).
So I am actually perhaps less familiar with the distribution of political beliefs in EAs specifically, and I’m thinking about rationalist-adjacent communities more broadly; there are definitely some people more comfortable around some pretty racist stuff than you’d find elsewhere (as someone else mentioned, ACX just published a review of Hanania’s book “The Origins of Woke”, which is apparently a big screed against civil rights law. And knowing Hanania, it’s not hard to guess what he’s driving at). So at least there’s a certain tendency in which open-mindedness and a willingness to always try to work everything out from first principles can let in some relatively questionable ideas.
I do agree about the problem with political labels. I do worry about whether that position will be tenable if the label of “TESCREAL” takes off in any meaningful way. Labels or not, if the rationalist community writ large gets under sustained political attack from one side of the aisle, natural alliances will be formed and polarization will almost certainly occur.
Well, it’s complicated. I think in theory these things should be open to discussion (see my point on moral philosophy). But now suppose that hypothetically there was incontrovertible scientific evidence that Group A is less moral or capable than Group B. We should still absolutely champion the view that wanting to ship Group A into camps and exterminate them is barbaric and vile, and that instead the humane and ethical thing to do is to help Group A compensate for their issues and flourish to the best of their capabilities (after all, we generally hold this view for groups with various disabilities that absolutely DO hamper their ability to take part in society in various ways). But knowing that at all can also be construed as an infohazard: the fact itself creates the conditions for a Molochian trap in which Group A gets screwed by nothing other than economic incentives and everyone else acting within their full rights and self-interest. So yeah, in some way these ideas are dangerous to explore, in the sense that they may be a case where truth-finding has net negative utility. That said, it’s pretty clear that people are way too invested in them either way to just let sleeping dogs lie.
Yes, ACX readers do believe that genes influence a lot of life outcomes, and favour reproductive technologies like embryo selection, which are right-coded views. They’re actually not restricted to the far-right, however.
The problem is that this is really a short step away from “certain races have lower IQ, and that’s kinda all there is to explaining their socio-economic status”, and I’ve seen many people take that step. Roko and Hanania, whom I mentioned explicitly, absolutely do so publicly and repeatedly.
So the thing with self-identification is that I think it might suffer from a certain skew. There’s fundamentally a bit of a stigma on identifying as right wing, and especially extreme right wing. Lots of middle class, educated people who perceive themselves as rational, empathetic and science-minded are more likely to want to perceive themselves as left wing, because that’s what left wing identity used to prominently be until a bit over 15 years ago (which is when most of us probably had our formative youth political experiences). So someone might resist the label even if in practice they are on the right half of the Overton window. It must be noted, though, that in some cases this might just be the result of the Overton window moving around them, and I definitely have the feeling that we now have a more polarized distribution anyway.
I think you are onto something, and I think there is a distinction here between “elites” and “rank and file”, so to speak. Not too surprising, since these are often people from very different backgrounds anyway! I kind of shudder when I see high profile rationalists casually discussing betting or offering prizes of tens of thousands of dollars over small internet arguments, because it’s fairly obvious these people live in a completely different world than mine (where my wife would rightfully have my head if I spaffed half of my year’s salary on internet points). And having different material interests is fairly likely to skew your politics.
One more thing is that often the groups you describe are most attracted to libertarianism, which is kind of a separate thing, but usually more right than left coded (though it’s the “laissez-faire capitalism” kind of right, not the “round up the ethnic minorities and put them in camps” one).
There is a distinctive cluster of issues around “biodeterminism” on which these groups are very, very right-wing on average (eugenics, biological race and gender differences, etc.), but on everything else they are centre-left.
This is kind of a key point, because there are also two dimensions to this. One is “which statements about biodeterminism are true, if any?”, and the other is “what should we do about that?”. The first is a scientific question, the latter a political and moral one. But the truth is that because the right wing has offered some very awful answers to the latter, it has become an important tenet on the left to completely deny that any such statements could be true, which kind of cuts the problem off at its roots. This is probably correct anyway for vastly disproved and discredited theses like “black people have lower IQ”, but it gets to the point of denying that IQ is heritable or correlates at all with anything worth calling “intelligence”, which to me feels a bit too hard to believe (and even if it were: ok, so what is a better measure of intelligence? There has to be one!).
And well, a community of high decoupling, high intelligence, science minded autists is probably the one most likely to take issue with that. Though again, it should be very wary of the risk of going down the path of self-aggrandizement in which you fall for any supposed “study”, however flawed, that says that group so-and-so is just constitutionally stupid, so there’s no need to think any harder about why they do badly.
Fair! I think it’s hard to fully slot rationalists politically because, well, the mix of high decoupling and generally esoteric interests makes for some unique combinations that don’t fit neatly into the standard spectrum. I’d definitely qualify myself as centre-left, with some more leftist-y views on some aspects of economics, but definitely bothered by the current progressive vibe that I hesitate to call “woke”, since that term is abused to hell, but am also not sure what else to call, since its proponents obstinately refuse to give themselves a political label or even to recognise that they constitute a distinct political phenomenon at all.
How was this survey done, by the way? Self ID or some kind of scored test?
But even then, a nuanced engagement with that would require making distinctions, not just going “all EA evil”. Both Torres and Gebru these days are very invested in pushing this “TESCREAL” label to bundle together completely different groups, from EAs who spend 10% of their income on malaria nets, to rationalists who worry about AI x-risk, to e/accs who openly claim that ASI is the next step in evolution. I think there are two problems here:
abstract moral philosophy can’t be for the faint of heart: you’re engaging with the fundamental meaning of good and evil, and you must be able to recognise when a set of assumptions leads to a seemingly outrageous conclusion and then decide what to make of that. But if a moral philosopher writes “if we assume A and B, that leads us to concluding that it would be moral to eat babies”, the reaction can’t be “PHILOSOPHER ENDORSES EATING BABIES!!!!”, because that’s both a misunderstanding of their work and, if universalized, will have a chilling effect leading to worse moral philosophy overall. Sometimes entertaining weird scenarios is important, if only to realise the contradictions in our assumptions;
for good or bad, left-wing thought and discourse in the last ten to fifteen years just hasn’t been very rational. And I don’t mean to say that the left can’t be rational: Karl Marx’s whole work was based on economics and an attempt to create a sort of scientific theory of history; love it or hate it, the man obviously had a drive more akin to that of current rationalists than of current leftists. What is happening right now is more of a fad, a current of thought in which rationality and objectivity have basically been devalued as memes in the left wing sphere, and therapy-speak that centres the self, inner emotions and identity has become the standard language of the left. And that pushes away a certain kind of mind very strongly; it feels worse than wrong, it feels like bullshit. As a result, the rationalist community kind of leans right wing on average for evaporative cooling reasons. Anyone who cares to be seen well by online left wing communities won’t associate. Anyone who’s high decoupling will be more attracted, and among those, combined with the average rationalist’s love for reinventing the wheel from first principles, some will reach beliefs that are highly controversial in current society and that they will nevertheless hold up as true (read: racism). It doesn’t help when terms like “eugenics” are commonly used to lump together things as disparate as Nazis literally sterilizing and killing people and hypothetical genetic modifications used to help willing parents have children who are on average healthier or live longer lives, which are obviously very different moral issues.
Honestly, I do think the rationalist space needs to confront this a bit. People like Roko or Hanania hold pretty extreme right wing beliefs, to the point where you can’t even really call them rational, because they are often dominated by confirmation bias and the usual downfalls of political polarization. Longtermism itself is a pretty questionable proposition in my book, though that argument still lies mostly in the space of philosophy. I would be all for the rise of a “rational left”, both for the good of the rationalist community and for the good of the left, which is currently mired in an unproductive circle jerk of emotionalism and virtue signalling. But this “TESCREAL” label, if anything, risks having the opposite effect, polarizing people away from these philosophically incompetent and intellectually dishonest representatives of what’s supposed to be the current left wing intelligentsia.
I don’t know if it makes a lot of sense, because yes, in theory, from my viewpoint all “torture worlds” (N agents, all suffering the same amount of torture) are equivalent. I feel like that intuition is more right than just “more people = more torture”. I would call them equally bad worlds, and if the torture is preternatural and inescapable I have no way of choosing between them. But I also feel like this is twisting ourselves into knots over examples that are completely unrealistic, to the point of near-uselessness; it is no wonder that our theories of ethics break down there, the same way most physics does at a black hole singularity.
I think the reason for those intuitions is that (reasonably enough!) we can’t imagine there being 10^100 people without there also being a story behind that situation. A world in which e.g. some kind of entity breeds humans on purpose to then torture them, leading to those insane amounts, sounds indeed absolutely hellish! But the badness of it is due to the context; a world in which there exists only one person, and that person is being horribly tortured, is also extremely upsetting and sad, just in a different way and for different reasons (and all paths to there are also very disturbing; but we’ll maybe think “at least everyone else just died without suffering as much” so it feels less horrible than the 10^100 humans torture world).
But my intuition on the situation alone is more along the lines of: imagine you know you’re going to be born into this world. Would you like your odds? And in both the “one tortured human” and the “10^100 tortured humans” worlds, your odds would be exactly the same: a 100% chance of being tortured.
But all of these are just abstract thought experiments. In any realistic situations, torture worlds don’t just happen—there is a story leading to them, and for any kind of torture world, that story is godawful. So in practice the two things can’t be separated. I think it’s fairly correct to say that in all realistic scenarios the 10^100 world will be in practice worse, or have a worse past, though both worlds would be awful.
Would 10^100 happy people be just as good as 1 happy person (assuming everyone is just as happy individually)?
Hypothetically, yes, if we take it in a vacuum. I find the scenario unrealistic in any real world circumstance, though, because obviously people’s happiness tends to depend on having other people around, and also because any trajectory from the current situation that ended with there being only one person, happy or not, seems likely to be bad.
Would 10^100 people being tortured be just as bad as 1 person being tortured (assuming everyone is feeling just as bad individually)?
Pretty much the same reasoning applies. For “everyone has it equally good/bad” worlds, I don’t think sheer numbers make a difference. What makes things more complicated is when inequality is involved.
Would you agree that a happy life of 100 years is just as good as a happy life of 1 year (assuming the annual happiness is the same in both cases)? If not, why is a person the relevant unit of analysis, and not a person-year?
I think length of life matters a lot; if I know I’ll have just one year of life, my happiness is kind of tainted by the knowledge of imminent death, you know? We experience all of our life ourselves. For an edge scenario, there’s one character in “Permutation City” (a Greg Egan novel) who is a mind upload and puts themselves into a sort of mental state loop; after a period T their mental state maps exactly to itself and repeats identically, forever. If you considered such a fantastical scenario, then I’d argue the precise length of the loop doesn’t matter much.
I find it hard to understand this. I think 10 billion happy people is better than no people. I guess you disagree with this?
“No people” is a special case—even if one looks at e.g. average utilitarianism, that’s a division by zero. I think a universe with no sentient beings in it does not have a well-defined moral value: moral value only exists with respect to sentients, so without any of them, the categories of “good” or “bad” stop even making sense. But obviously any path from a universe with sentients to a universe without implies extermination, which is bad.
However, given an arbitrary number of sentient beings of comparable happiness, I don’t think the precise number matters to how good things are, no. No one experiences all that good at once, hence 10 billion happy people are as good as 10 million, if they are indeed just as happy.
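(As an aside, it may help to spell out the aggregation point with the standard textbook formulas; the notation here is just my own rough sketch, not something anyone above used. Given individual welfare levels \(u_1, \dots, u_N\):

\[ U_{\text{total}} = \sum_{i=1}^{N} u_i \qquad\text{vs.}\qquad U_{\text{avg}} = \frac{1}{N} \sum_{i=1}^{N} u_i \]

If everyone has the same welfare \(u\), the total comes out to \(N u\) and keeps growing with population, while the average is just \(u\) no matter what \(N\) is; and for the empty world, \(N = 0\), the average is literally \(0/0\), which is the division by zero I mentioned, so that world simply gets no well-defined score.)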
because it is possible for someone to have preferences which are not ideal to maximise one’s goals
I think any moral philosophy that leaves the door open to too much of “trust me, it’s for your own good; even though it’s not your preference, you’ll enjoy the outcome far more” is ripe for dangerous derailments.
No killing is necessary given an ASI. The preferences of humans could be modified such that everyone is happy with ASI taking over the universe. In addition, even if you think killing without suffering is bad in itself (and note ASI may even make the killing pleasant to humans), do you think that badness would outweigh an arbitrarily large happiness?
Yes, because I don’t care if the ASI is very very happy, it still counts for one. I also don’t think you can reasonably conceive of unbounded amounts of happiness felt by a single entity, so much as to compensate for all that suffering. Also try to describe to anyone “hey what if a supercomputer that wanted to take over the universe brainwashed you to be ok with it taking over the universe”, see their horrified reaction, and consider whether it makes sense for any moral system to reach conclusions that are obviously so utterly, instinctively repugnant to almost everyone.
I think rocks are sentient in the sense they have a non-null expected welfare range, but it does not matter because I have no idea how to make them happier.
I’m… not even sure how to parse that. Do you think rocks have conscious experiences?
No, it would not be acceptable. I am strongly against negative utilitarianism. Vaporising all beings without any suffering would prevent all future suffering, but it would also prevent all future happiness. I think the expected value of the future is positive, so I would rather not vaporise all beings.
The idea was that the vaporization is required to free the land for a much more numerous and technologically advanced populace, who can then go on to live a much more leisurely life off its resources, with less hard work, less child mortality, less disease, etc. So you replace, say, 50,000 vaporised indigenous people living like hunter gatherers with 5 million colonists living like we do now in the first world (and I’m talking about new people, children who can only be born thanks to the possibility of expanding into that space). Does that make the genocide any better? If not, why? And how do those same arguments not apply to the ASI too?
I think smaller beings will produce welfare more efficiently. For example, I guess bees produce it 5k times as effectively as humans.
I just don’t think it makes any sense to have an aggregated total measure of “welfare”. We can describe what is the distribution of welfare across the sentient beings of the universe, but to simply bunch it all up has essentially no meaning. In what way is a world with a billion very happy people any worse than a world with a trillion merely okay ones? I know which one I’d rather be born into! How can a world be worse for everyone individually yet somehow better, if the only meaning of welfare is that it is experienced by sentient beings to begin with?
There is a terrorist who is going to release a very infectious virus which will infect all humans on Earth. The virus makes people infertile forever, thus effectively leading to human extinction, but it also makes people fully lose their desire to have children, and have much better self-assessed lives. Would it make sense to kill the terrorist? Killing the terrorist would worsen the lives of all humans alive, but it would also prevent human extinction.
It’s moral because the terrorist is infringing the wishes of those people right now, and violating their self-determination. If the people decided to infect themselves, then it would be ok.
Genocide involves suffering, and suffering is bad, so I assume there would be a better option to maximise impartial welfare. For example, ASI could arguably persuade humans that their extinction was for the better, or just pass everyone to a simulation without anyone noticing, and then shutting down the simulation in a way that no suffering is caused in the process.
I disagree that the genocide is made permissible by making the death a sufficiently painless euthanasia. Sure, the suffering is an additional evil, but the killing is an evil unto itself. Honestly, consider where these arguments could lead in realistic situations and consider whether you would be okay with that, or if you feel like relying on a circumstantial “well but actually in reality this would always come out negative net utility due to the suffering” is protection enough. If you get conclusions like these from your ethical framework it’s probably a good sign that it might have some flaws.
For example, replacing rocks by humans is fine, and so might be replacing humans by digitals minds. However, the replacement process itself should maximise welfare, and I am very sceptical that “brutal colonization efforts” would be the most efficient way for ASI to perform the replacement.
Rocks aren’t sentient, they don’t count. And your logic still doesn’t work. What if you can instantly vaporize everyone with a thermonuclear bomb, as they are all concentrated within the radius of the fireball? Death would then be instantaneous. Would that make it acceptable? Very much doubt it.
I am not sure I understand your point. Are you suggesting we should not maximise impartial welfare because this principle might imply humans being a small fraction of the overall number of beings?
I just don’t think total sum utilitarianism maps well with the kind of intuitions I’d like a functional moral system to match. I think ideally a good aggregation system for utility should not be vulnerable to being gamed via utility monsters. I lean more towards average utility as a good index, though that too has its flaws and I’m not entirely happy with it. I’ve written a (very tongue-in-cheek) post about it on Less Wrong.
Whether suffering is good or bad depends on the situation, including the person assessing it.
Sure. So that actually backs my point that it’s all relative to sentient subjects. There is no fundamental “real morality”, though there are real facts about the conscious experience of sentient beings. But trade-offs between these experiences aren’t obvious and can’t be settled empirically.
The idea sounds bad to me too! The reason is that, in the real world, killing rarely brings about good outcomes. I am strongly against violence, and killing people.
But more so, killing people violates their own very strong preference towards not being killed. That holds for an ASI too.
Genociding a population is almost always a bad idea, but I do not think one should reject it in all cases. Would you agree that killing a terrorist to prevent 1 billion human deaths would be good? If so, would you agree that killing N terrorists to prevent N^1000 billion human deaths would also be good?
I mean, ok, one can construct these hypothetical scenarios, but the one you suggested wasn’t about preventing deaths, but ensuring the existence of more lives in the future. And those are very different things.
Total utilitarianism only says one should maximise welfare. It does not say killing weaker beings is a useful heuristic to maximise welfare. My own view is that killing weaker beings is a terrible heuristic to maximise welfare (e.g. it may favour factory-farming, which I think is pretty bad).
But obviously, if you count future beings too, as you are, then it becomes inevitable that this approach does justify genocide. Take the very real example of the natives of the Americas. By this logic, the exact same logic you used for an example of why an ASI could be justified in genociding us, the colonists were justified in genociding the natives. After all, they lived at far lower population densities than the land could support with advanced agricultural techniques, and they lived hunter-gatherer or at best bronze-age style lives, far less rich in pleasures and enjoyments than a modern one. So killing a few million of them to eventually allow over 100 million modern Americans to make full use of the land would have been a good thing.
See the problem with the logic? As long as you have better technology and precommit to high population densities you can justify all sorts of brutal colonization efforts as a net good, if not maximal good. And that’s a horrible broken logic. It’s the same logic that the ASI that kills everyone on Earth just so it can colonize the galaxy would follow. If you think it’s disgusting when applied to humans, well, the same standards ought to apply to ASI.
In the sense of increasing expected total hedonistic utility, where hedonistic utility can be thought of as positive conscious experiences.
Total sum utilitarianism just doesn’t seem like a very sensible framework to me. I think it can work as a guideline within certain boundaries, but it breaks down as soon as you admit the potential for things like utility monsters, which ASI as you’re describing it effectively is. Everyone only experiences one life, their own, regardless of how many other conscious entities are out there.
I would say it comes from the Laws of Physics, like everything else. While I am being tortured, the particles and fields in my body are such that I have a bad conscious experience.
That just kicks the metaphysical can down the road. Ok, suffering is physical. Who says that suffering is good or bad? Or that it is always good or bad? Who says that what’s important is total rather than average utility, or some other more complex function? Who says how we can compare the utility of subjects A and B when their subjective qualia are incommensurate? None of these things can be answered by physics, or by empirical observation of any kind, as far as we know.
if humans in 2100 determined they wanted to maintain forever the energy utilization of humans and AIs below 2100 levels, and never let humans nor AIs leave Earth, I would be happy for advanced AIs to cause human extinction (ideally in a painless way) in order to get access to more energy to power positive conscious experiences of digital minds.
I disagree, personally. The idea that it’s okay to kill some beings today to allow more to exist in the future does not seem good to me at all, for several reasons:
at a first order level, because I don’t subscribe to total sum utilitarianism, or rather I don’t think it’s applicable so wildly out of domain—otherwise you could equally justify e.g. genociding the population of an area that you believe can be “better utilized” to allow a different population to grow in it and use its resources fully. I hope we can agree that is in fact a bad thing; not merely worse than somehow allowing for coexistence, but just bad. We should not in fact do this kind of thing today, so I don’t think AI should do it to us in the future;
at a higher order, because if you allow such things as good you create terrible incentives in which basically everyone who thinks they can do better is justified in trying to kill anyone else.
So I think if your utility function returns these results as good, it’s a sign your utility function is wrong; fix it. I personally think that the free choices of existing beings are supreme here; having a future is worth it insofar as present existing beings desire that there is a future. If humanity decided to go voluntarily extinct (assuming such a momentous decision could genuinely be taken by everyone in synchrony, which is a bit of a stretch), I’d say they should be free to, without feeling bound by either the will of their now dead ancestors or the prospect of their still hypothetical descendants. It’s not that I can’t imagine a situation in which, in a war between humans and ASI, I could think, from my present perspective, that the ASI was right, though I’d still hope such a war does not turn genocidal (and in fact any ASI that I agree with would be one that doesn’t resort to genocide as long as it has any other option). But if the situation you described happened, I’d side with the humans, and I’d definitely say that we shouldn’t build the kind of ASI that wouldn’t either. Any ASI that can arbitrarily decide “I don’t like what these humans are doing, better to kill them all and start afresh” is in fact a ticking time bomb, a paperclipper that is only happy to suffer us to live as long as we also make paperclips.
good to the universe
Good in what sense? I don’t really buy the moral realist perspective; I don’t see where this “real morality” could possibly come from. But on top of that, I think we all agree that disempowerment, genocide, slavery, etc. are bad; we also frown upon our own disempowerment of non-human animals. So there are two options:
either the “real morality” is completely different and alien from the morality we humans currently tend to aspire to as an ideal (at least in this corner of the world and of the internet), or
an SAI that would disempower us, despite being vastly smarter and more capable and having no need to, would be pretty evil.
If it’s 1, then I’m even more curious to know what this real morality is, why I should care, and why the SAI would understand it while we seem to be drifting further away from it. And if it’s 2, then obviously unleashing an evil SAI on the universe is a horrible thing to do, not just for us, but for everyone else in our future light cone. Either way, I don’t see a path to “maybe we should let SAI disempower us because it’s the greater good”. Any sufficiently nice SAI would understand well enough that we don’t want to be disempowered.
We also don’t plan to ask the kids for “permission” to move.
A comment on this: when I was a child, my parents moved twice in the space of a few years (basically, I never spent more than two years in a row in the same elementary school). I never really even thought I should have been consulted, but in hindsight those two quick moves probably contributed significantly to my attachment issues, as they both broke some nice friendships I had formed and pretty much taught me a sort of unconscious “don’t get too attached to people” lesson. Between that and the bad luck that the last school I ended up in was not quite as nice an environment for me as the first two, this probably had a big impact on me that I still feel the effects of. So, what I mean here is, it’s easy to consider the needs of children secondary to those of the big, important, money-making adults, but when it comes to things like these, a few key moments during development can have big impacts downstream.
Wouldn’t lots of infections, even for flu, happen through e.g. faeces and the like? If the birds are densely packed, it might be hard to achieve much with UVC from the ceiling.
Also, about the idea of this being eventually transferable to humans, I wonder if birds have different sensitivity to it (due to being covered in feathers).
But I don’t see a case for climate change risk specifically approaching anywhere near those levels, especially on timescales less than 100 years or so.
I think the thing with climate change is that, unlike those other things, it’s not just a vague possibility, it’s a certainty. The uncertainty lies in the precise magnitude of the risk. At the higher end of warming it gets damn well dangerous (not to mention it can be the trigger for other crises; e.g. imagine India suffering from killer heatwaves leading to additional friction with Pakistan, both nuclear powers). So the baseline is merely “a lot of dead people, a lot of lost wealth, a lot of things to somehow fix or repair”, and then the tail outcomes are potentially much, much worse. They’re considered unlikely, but of course we may have overlooked some feedback loop or tipping point. I honestly don’t feel as confident that climate change isn’t a big risk to our civilization when it’s likely to stress multiple infrastructures at once (mainly, food supply, combined with a need to change our energy usage, combined with a need to provide more AC and refrigeration as a matter of survival in some regions, combined with sea levels rising, which may eat into valuable land and cities).
I’m often tempted to have views like this. But as my friend roughly puts it, “once you apply the standard of ‘good person’ to people you interact with, you’d soon find yourself without any allies, friends, employers, or idols.”
I’m not saying “these people are evil and irredeemable, ignore them”. But I am saying they are being fundamentally irrational about it. “You can’t reason a person out of a position they didn’t reason themselves into.” In other words, I don’t think it’s worth worrying about not mentioning climate change merely for the sake of not alienating them, when the result is that it will alienate many more people on other sides of the spectrum. Besides, those among them who think like you might also go “oh well, these guys are wrong about climate change, but I can’t hold it against them since they had to put together a compromise statement”. I think many minimizing attitudes towards AI risk are also irrational as of now, but it’s still a much newer and more speculative topic, with less evidence behind it. People might still be in the “figuring things out” stage for that, while for climate change opinions are very much fossilized, and in some cases determined by things other than rational evaluation of the evidence. Basically, I think in this specific circumstance there is no way of being neutral: either mentioning or not mentioning climate change gets read as a signal. You can only pick which side of the issue to stand on, and if you think you have a better shot with people who ground their thinking in evidence, then the side that believes climate change is real has more of those.
I would need to dig up specific stuff, but in general I’d suggest just checking out his Twitter/X account https://twitter.com/RichardHanania and seeing what he says. These days it’s completely dominated by discourse on the Palestine protests, so it’s hard to dig out anything on race. Mind you, he’s not one to hold a fully stereotypical GOP-aligned package of ideas: he has a few deviations and is secular (so, for example, pro-choice on abortion; also he’s definitely not antisemitic, in fact he has explicitly called himself prosemitic, as he believes Jews to be smarter). But on race I’m fairly convinced he 100% believes in scientific racism, from any time he’s talked about it. I don’t want to link any of the opinion pieces around that argue for this (but there are a fair number if you want to check them out and try to separate fact from fiction; many point out that he’s sort of retreated to some more defensible “motte” arguments lately, which he seems to do, and explicitly advocates for as a strategy, in his latest book “The Origins of Woke” too; again, see the ACX review). But for some primary evidence, for example, here’s a tweet about how crime can only be resolved by more incarceration and surveillance of black people:
https://twitter.com/RichardHanania/status/1657541010745081857?lang=en-GB
His RationalWiki article obviously has opinions about him, but also a bunch of links to primary sources in the bibliography:
https://rationalwiki.org/wiki/Richard_Hanania
He used to write more explicitly racist stuff under the pseudonym Richard Hoste until a few years ago. He openly admitted this and wrote an apology blog post in which he basically says he was young and went a bit too far. Now, whether this corresponds to a genuine moderation (from extremely right wing to merely strongly socially conservative and anti-woke) is questionable, because it could just as well be a calculated retreat from the bailey to the motte. It’s not wild to consider this possibility given that, again, he explicitly talks about how certain arguments would scare normies too much, so it’s better to just present more palatable ones. And after all, that is a pretty sound strategy (and one Torres recently accused EAs of, re: using malaria bednets as the motte to draw people into the bailey of AI safety, something that of course I don’t see as being as evil as he implies, since I think AI safety absolutely is a concern, and the fact that it looks weird to the average person doesn’t make it not so).
At this point, from all I’ve seen, my belief is that Hanania is mostly a “race realist” who thinks some races are inherently inferior, and thus that the correct order of things has them working worse jobs, earning less money, etc., and that all efforts in the opposite direction are unjust and counterproductive. I don’t think he then moves from that to “and they should be genocided”, but that’s not saying much. He still thinks they should be an underclass, and for now thinks that the market, left to its own devices, would make them so, which would be the rightful order of things. That’s the model of him I’ve built, and I find it hard to believe that Scott Alexander, for example, hasn’t seen all the same stuff.