Do better, please …
I am not a card-carrying member of EA. I am not particularly A, much less E, in that context. But the past few months have been exhausting: watching a community I like lurch from one crisis to another while clearly fumbling basic aspects of how it is seen in the wider world. I like having EA in the world; I think it does a lot of good. And I think you guys are literally throwing it away over the aesthetics of misguided epistemic virtue signaling. But it's late, and I read more than a few articles, and this post is me begging you to please just stop.
The specific push here is, of course, the Bostrom incident, in which he clearly and highly legibly wrote that black people have lower intelligence than other races. And his apology was, to put it mildly, mealy-mouthed and without much substance. If anything, in the intervening 25 years since the offending email, all he seems to have learnt is to forget the one thing he said he wanted to do: to speak plainly.
I’m not here to litigate race science. There is plenty of well-reviewed science in the field demonstrating that, in various ways, there are issues with how we measure both race and intelligence, to say nothing of how the gaps evolve over time, catch-up speeds, and a truly dizzying array of confounders. I can easily imagine that if you're young and not particularly interested in this space you'd hold a variety of views; what is silly is seeing someone so clearly in a position of authority, with a reputation for careful consideration and truth-seeking, maintain this kind of view.
And not only is this just wrong, it’s counterproductive.
If EA wants to work on the most important problems in the world and make progress on them, it would be useful to have the world look upon you with trust. For anything more than turning money into malaria nets, you need people to trust you. And that includes trusting your intentions and your character.
If you believe there are racial differences in intelligence, and your work requires you to tackle the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to make the right tradeoffs. History is filled with optimisation experiments gone horribly wrong when beliefs like this sat at the foundation. The base rate of horrible outcomes is uncomfortably high.
This is human-values misalignment. Unless you have overwhelming evidence (or any real evidence), this is just a dumb prior to hold and publicise if you're working on actively changing people's lives. I don't care what you think about the ethics of sentient digital life in the future if you can't figure this out today.
Again, all of which is individually fine. I'm an advocate of people's right to hold crazy opinions should they want to. But when something like a third of the community seems to support him, and the defenses require contortions that simultaneously agree with him, dismiss the criticism, and complain about the drama, that's ridiculous. While I appreciate posts like this, which speak to the importance of epistemic integrity, they seem to miss that applauding someone for not lying is great, but not when the belief they're holding is bad. And even if this blows over, it will remain a drag on EA unless it's addressed unequivocally.
Or this type of comment, which uses a lot of words but effectively seems to support the same thought: that no, our job is to differentiate QALYs, and therefore differences are part of life.
But guess what: epistemic integrity on something like this ("I believe something pretty reprehensible and am not kowtowing to people telling me so") isn't going to help with shrimp welfare or AI risk prevention. Or even malaria net provision. Do not mistake "sticking with your beliefs" for an overriding good, above believing what's true, acting kindly towards the world, or acting like serious members of a civilisation where we all need to work together. EA writes regularly about burnout from the sheer sense of feeling burdened with a duty to do good. Well, here's a good chance to exercise it.
In fact, if you can’t see why sticking with the theory that “race X is inferior in Y” and “we unequivocally are in favour of QALY differentiation” together constitute a clear and dangerous problem, I don’t know what to say. If you want to be a successful organisation that does good in the world, you have to stop confusing sophomoric philosophical arguments with actual lived concerns in the real world.
You can't sweep this under the rug as "drama of the day". I'm sorry, but if you want to be anything more than yet another NGO that takes itself a tad too seriously, this is actively harmful.
This isn't a PR problem; it's an actual problem. If one of the most influential philosophers and leaders of your movement is saying things that are just wrong, it hurts the credibility of every other framework you might create. Not to mention the actual flesh-and-blood people who live in the year 2023.
It's one thing to play with esoteric thought experiments about the wellbeing of people in the year 20000. It's quite another to live in the year 2023. Everyone is free to analyse and experiment to explore any question they choose, including this one. But this is not that. It starts from professing a belief, and claims that's okay because there isn't any contrary evidence. That's not how science works, and that's not how a public-facing organisation should work.
If he'd said, for instance, "Hey, I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn't make sense. The gap is closing, but not fast enough. We should work harder on fixing this," that would have been more sensible. The same goes for the community itself disavowing the explicit racism.
By the way, it's insane that the Forum seems to hide this whole thread as if it were a minor annoyance instead of a death knell. The SBF issue I can understand: you were fooled like everyone else, and it's a black eye for the organisation. But this isn't that. And the level of condemnation that episode brought was a good way to react. This is much more serious.
I should say, I don't have a particular agenda here. This stream of consciousness is already quite long. A little annoyed, perhaps, that this is flooding the timeline and that the responses from folks I'd considered thoughtful are tending towards debating weird theoretical corner cases, doing mental jiu-jitsu just to keep holding the faith a little longer. But mostly it's just frustration bubbling out as cope.
I just wish y’all could regain the moral high ground here. There are important causes that could use the energy. It’s not even that hard.
Rohit—if you don’t believe in epistemic integrity regarding controversial views that are socially stigmatized, you don’t actually believe in epistemic integrity.
You threw in some empirical claims about intelligence research, e.g. ‘There’s plenty of well reviewed science in the field that demonstrates that, varyingly, there are issues with measurements of both race and intelligence, much less how they evolve over time, catch up speeds, and a truly dizzying array of confounders.’
OK. Ask yourself the standard epistemic integrity checks: What evidence would convince you to change your mind about these claims? Can you steel-man the opposite position? Are you applying the scout mindset to this issue? What were your Bayesian priors about this issue, and why did you have those priors, and what would update you?
It’s OK for EAs to see a highly controversial area (like intelligence research), to acknowledge that learning more about it might be a socially handicapping infohazard, and to make a strategic decision not to touch the issue with a 10-foot-pole—i.e. to learn nothing more about it, to say nothing about it, and if asked about it, to respond ‘I haven’t studied this issue in enough depth to offer an informed judgment about it.’
What’s not OK is for EAs to suddenly abandon all rationality principles and epistemic integrity principles, and to offer empirically unsupported claims and third-hand critiques of a research area (that were debunked decades ago), just because there are high social costs to holding the opposite position.
It’s honestly not that hard to adopt the 10-foot-pole strategy regarding intelligence research controversies—and maybe that would be appropriate for most EAs, most of the time.
You just have to explain to people ‘Look, I’m not an intelligence research expert. But I know enough to understand that any informed view on this matter would require learning all about psychometric measurement theory, item response theory, hierarchical factor analysis, the g factor, factorial invariance across groups, evolutionary cognitive psychology, evolutionary neurogenetics, multivariate behavior genetics, molecular behavior genetics, genome-wide association studies for cognitive abilities, extended family twin designs, transracial adoption studies, and several other fields. I just haven’t put in the time. Have you?’
That kind of response can signal that you’re epistemically humble enough not to pretend to have any expertise, but that you know enough about what you don’t know, that whoever you’re talking to can’t really pretend any expertise they don’t have either.
And, by the way, for any EAs to comment on intelligence research without actually understanding the majority of the topics I mentioned above would be pretty silly: analogous to someone commenting on technical AI alignment issues without knowing the difference between an expert system and a deep neural network, or between supervised and reinforcement learning.
PS, as always, I'd welcome anyone who disagree-votes on this comment sharing what specifically they disagree with.
I think this is the best steelman of a certain position that prioritizes epistemic integrity. I also think this position is wrong.
The only acceptable approach to race science is to clearly and vigorously denounce assertions that one race is somehow superior or inferior, and to state that it is a priority to address any apparent disparities between races. Responding to inquiries on this subject with some version of “I’m not an expert in intelligence research, etc” comes across as “mealy mouthed,” to use Rohit’s words. Bostrom himself used a version of this argument in his apology, and it just doesn’t fly.
This doesn’t require sacrificing epistemic integrity. Rohit’s suggested apology is pretty good in this regard:
“We still have IQ gaps between races, which doesn’t make sense. It’s closing, but not fast enough. We should work harder on fixing this.”
EDIT: Overall, my main point is that Rohit is broadly correct in asserting that it’s a huge problem if the EA community ends up somehow having a position on the IQ and race question. It’s obviously a massive PR problem; how do you recruit people to join an organization that has been branded as being racist? Even more important though, if the question of IQ and race plays a non-trivial role in your determination of how to do the most good, then you have massively screwed up somewhere in your thought process.
EDIT 2: Removed some comments that prompted a discussion on topics that really just aren’t relevant in my opinion. I think we should avoid getting caught up arguing about the specifics of Bostrom’s claims, but part of my comment seems to have prompted discussion in that direction so I’ve removed it.
Matthew—many EAs seem to think that intelligence research is 'this one topic with virtually no relevance to our actual goals', but that doesn't make sense to me.
Intelligence research is relevant to (for example):
measuring harmful effects of global public health problems (e.g. IQ deficits due to parasite load, lead exposure, iodine deficiency),
discussing cognitive enhancement (e.g. embryo selection),
identifying effective educational interventions (after controlling for IQ),
improving mental health (where lower IQ is a risk factor for most mental illnesses), and
choosing careers (e.g. 80,000 Hours recommendations that should take each person's cognitive abilities into account).
General intelligence is the most reliable, valid, predictive psychological trait ever discovered, and it has pervasive implications for human flourishing, education, economics, mental health, physical health, careers, and many other domains.
Embryo selection for cognitive ability would have plenty of positive downstream consequences. If in vitro gametogenesis enables selection from large batches, there could be large gains from selection. If smart-fraction theory is true, then widespread cognitive genetic enhancement, even among a small portion of the population, may have disproportionately large downstream positive consequences. Not discussing cognitive ability might be detrimental, considering the benefits are so large. This is one cause area that I think is drastically underconsidered, due in part to stigma.
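The claim about "gains from selection" from larger batches follows from simple order statistics: the expected maximum of n independent draws from a normal distribution grows roughly like sqrt(2 ln n), so returns to batch size are real but diminishing. A rough Monte Carlo sketch (purely illustrative; the parameters and function name are my own, and the trait is idealized as a standard normal score):

```python
import math
import random

def expected_max_of_batch(n, trials=20000, seed=0):
    """Monte Carlo estimate of E[max of n draws] from a standard
    normal -- i.e. the expected selection gain, in SD units, from
    picking the single best of a batch of n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0.0, 1.0) for _ in range(n))
    return total / trials

for n in (2, 10, 100):
    gain = expected_max_of_batch(n)
    bound = math.sqrt(2 * math.log(n))  # classic asymptotic upper bound
    print(f"batch size {n:3d}: simulated gain ~ {gain:.2f} SD (bound ~ {bound:.2f})")
```

Going from a batch of 2 to a batch of 100 roughly quadruples the expected gain (about 0.56 SD to about 2.5 SD in this idealized model), which is why batch size matters so much to the argument; real-world gains would be smaller, since polygenic scores predict only a fraction of trait variance.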
To be clear, the “one topic” is race science, not general intelligence.
I understand. But the idea that there’s some nefarious field of ‘race science’ is a straw man buzzword invented by activist academics who are opposed to any empirical study of group differences in any domain of science—even things like biomedical differences in responsiveness to different pharmaceuticals, or differences in susceptibility to specific diseases.
I would define race science as the field trying to prove the superiority of one race over another race, for the purpose of supporting a racial hierarchy.
So IQ differences between races = race science
Susceptibility to different diseases != race science
Differences in 100M dash times != race science (countries don’t choose their leaders based on sprint times).
Do you think there are absolutely no differences between races in how they score on IQ tests?
Do you believe in race science?
Edit: Comment below was deleted so I am posting what prompted me to ask this question.
My point is that “IQ differences between races = race science” is such a low qualifying bar that you might fall under it. It appears the author of this post acknowledges the existence of IQ differences. Many people do not dispute that there are disparities in IQ scores, as well as SAT, ACT, MCAT, etc.
I believe there exist gaps on these tests. And yet, I wish they did not exist. Many come to these conclusions not because they are “trying to prove the superiority of one race over another race” but because they are persuaded by the evidence. The people willing to discuss this are extraordinarily atypical due to extreme selection pressure from social stigma. This probably makes most sane people “in the know” avoid discussing the topic entirely.
Again, if there are no differences, then open inquiry will reveal the truth, and we should pursue these questions. If there are differences and the fact is infohazardous, we ought to prepare by inoculating people against the idea that anything heinous follows from such facts. Eventually genetic researchers will demonstrate beyond reasonable doubt what is true. They will talk about ancestral populations when they do, rather than races. Researching the genetic architecture of cognitive ability will inadvertently point to population differences if they exist.
If we hide data and do not expand datasets on cognitive ability and other culturally sensitive traits of diverse non-European populations, parents from those populations will have less ability to accurately select embryos with those desired traits. The sensitivity around these questions creates an unwillingness to do GWAS of IQ and even prevents scientists from accessing NIH data.
Since I am a strong proponent of cognitive enhancement and think IQ is a major driving force in inter- and intranational socioeconomic outcomes, I think stigmatizing and delaying research into these topics is extraordinarily harmful. Slight delays could mean the difference between global catastrophe and a long-term future. Massive cognitive enhancement would be extraordinarily important.
Wouldn’t an acceptable approach to race science be to demonstrate that races are actually all the same across every trait we care about and the racists are wrong? Why not fight bad science with good science?
I disagree, I don’t think there is value in race science at all, since race isn’t a particularly good way of categorizing people. At the moment, there are plenty of good scholars working in population genetics (David Reich at Harvard is a good example). None of the scholars I’m aware of use race as a primary grouping variable, since it’s not particularly precise.
Would you support discussions and research into ancestral population differences?
Sure, I provided David Reich as an example of a population geneticist doing good work that I believe is worthwhile.
David Reich claims that while we don't currently have any evidence to suggest that one particular population group is genetically more intelligent than another, the claim that such a thing is impossible, or even unlikely, is also incorrect. There's currently not much evidence either way, and there's no theoretical basis on which to conclude there aren't any such differences.
At the same time, he highlights the importance of respecting all people as individuals when dealing with them, irrespective of the distribution of various characteristics among their population groups.
It is suggestive that you describe the belief as "bad" rather than "wrong", "incorrect", or "false".
It’s fine to criticise people for (i) holding beliefs that are wrong, or for (ii) expressing beliefs that are probably best not expressed in a given context (whether they are true or false).
But it’s important to separate, as best we can, claims about (a) whether a particular belief is true from claims about (b) whether holding that belief has good consequences, or (c) correlates with good moral character.
This post would be better if it made this distinction more clearly.
I think Bostrom 1996 deserves criticism for (ii).
He may deserve criticism for (i) as well.
I can’t express my agreement with this post strongly enough.
I’ve been a hardcore utilitarian for many years, EA-interested for a few years, and (minorly) EA involved for the past year. The forum’s response to this event has shaken my belief in EA’s ability to grow and influence the world for good much more than the FTX scandal did. For starters, I no longer feel I can recommend EA to other people (for now at least) because they might check out the forum and wonder if I’m racist.
Leaving aside the question of whether Bostrom’s statement was “true” (I don’t think so, but don’t feel like litigating it in these comments), it unquestionably violated one of the strongest taboos that exists in the year 2023. There’s no question that defending it hurts EA’s credibility in the eyes of the larger world.
If you’re someone from a rationalist community or elsewhere where epistemic integrity is your first order of concern, I’m begging you to reconsider when posting on this forum. The effect of a statement on social welfare ought to be the guiding concern here.
Note that I’m not asking anyone to disregard epistemic integrity, or even necessarily compromise it. But I think most of us have “default” modes of communication (even if we try not to). On this forum, that default should favor social welfare consequences.
If epistemic integrity violates a social taboo, then you are asking people to compromise it; you shouldn't just handwave that away because it's uncomfortable. Own the tradeoff if you're going to ask for it.
This also assumes that not violating a taboo is inherently supportive of the social welfare, which I do not believe is the case. Social taboos have been wrong many, many times before, and EA is already taboo-violating and offensive to various swathes of the population. How and by whom is it decided which taboos should be obeyed and which should be violated?
I don’t imagine you’d suggest EA, had it existed in 1950, should have ignored civil rights, but favoring segregation would’ve been the dominant social position at the time and fighting for desegregation was violating a strong taboo. Or I’m wrong, and you would suggest a policy of strategic silence where local taboos are concerned?
There was recently a post on Less Wrong about the concept of information that is “infohazardous if true”.
Given the observed empirical effects of having certain beliefs about racial differences, it seems plausible to me that certain claims about racial differences fall into the “infohazardous if true” category.
I haven’t previously heard anyone in EA say that it’s vital for our epistemic integrity to freely discuss infohazards. I don’t see why this case should be different.
Far-right ideas have created enormous suffering over the past few centuries. As far as I know, we don’t have a great theory for how this happened. But it seems fairly clear that it has something to do with memetics—if far-right ideas remain on the fringe, they will do a limited amount of harm; if far-right ideas become politically dominant, there’s a chance they’ll do a great deal of harm.
So, it seems that the best way to prevent far-right ideas from doing a ton of harm is to keep them on the fringe. This is fundamentally a pretty scary thing, because memetics is poorly understood. It would be much better if we had a robust, principled method to guard against harms from far-right ideas. But I don’t think such a method exists. Until it does, we have to operate on a “best guess” basis.
Aptdell—You mention ‘Far-right ideas have created enormous suffering over the past few centuries.’ Often true.
But also true of far-left ideas (including the Blank Slate doctrine that all individuals must have exactly equal abilities, preferences, & values, and any empirical evidence challenging this doctrine must be instantly slandered). Examples include every ‘idealistic’ communist regime that degenerated into a totalitarian nightmare, internal genocide, & surveillance state.
IMHO, it's NOT useful to fall into the usual left/right partisan trap of trying to assess the empirical truth of claims about human nature by tallying up the relative historical harms that allegedly resulted, usually second- or third-hand, from holding certain views.
I’m not sure this is true.
I just finished reading a ~170 page history of the Russian Revolution (covering through the late 1930s, including forced collectivization and the Great Purges), and I didn’t get the impression Blank Slate doctrine was an important cause of Soviet horrors. (I don’t recall it being mentioned at all, and it doesn’t appear in the book’s index.)
While researching this comment, I read some of this pdf, and again, I don’t see any discussion of Blank Slate doctrine. (There are good sections on the psychology of socialism starting on PDF page 300 -- the “Intuitive anti-capitalism” and “Gary Lineker fallacy” chapters.)
I’m also not sure it is relevant.
Supposing communist horrors are due to Blank Slate doctrine, perhaps Blank Slate doctrine is also “infohazardous if true”. I don’t think that affects my original points.
I’m not doing that. I don’t think historical harms tell us whether a claim is true, just whether it is an infohazard if true (or “infohazard if believed” really—a meme can be both false and harmful!)
It was confusing writing, and I’m surprised Miller didn’t bring this up in his reply, but my interpretation is that the two aren’t actually connected except by loose ideological affiliation.
Blank Slate is mentioned as the “far-left” counterpoint to Bostrom’s theory as the topic of discussion. It is, AFAIK, a considerably younger “theory” than communism and is not related to communism’s failures.
The example of communism is brought up because you only call out “far-right ideas” causing enormous suffering, while ignoring that “far-left” ideas have also caused enormous suffering. Communism is the last century’s far-left failure mode and horror show; funny that people so often forget about all that.
Had you left out the partisan phrasing, I don’t think Miller would’ve taken any issue with your post, and I would’ve found it a stronger post as well. EA doesn’t require promotion of infohazards, and there’s no reason to implicitly suggest that infohazards can only come from one side of the spectrum.
Aptdell—not every historian is tuned into the role of explicit or tacit Blank Slate thinking in political ideologies. Again, I'd recommend Pinker (2002) to get that attunement.
Once you see the harms caused by Blank Slate doctrine—like once you see the harms caused by factory farming—you can’t un-see them. But not everyone is willing to confront those horrors.
The problem with Blank Slate doctrine (and many doctrines) isn’t that it’s ‘infohazardous if true’, but as you say, it’s more like ‘infohazardous if believed’—since it can be both false and harmful.
What fraction of the harms from communism do you attribute to Blank Slate thinking?
I assume you consider Blank Slate doctrine false. Do you believe communism would’ve worked out in a world where it was true? (My view is that most or all of the problems with communism would remain.)
Yeah, this. The real issues with communism ultimately come down to ignoring that thermodynamics exists. Once you accept that idea, a lot of other false ideas from communism start to make more sense.
I’m confused by this—you say:
But when I look up Blank Slate doctrine on Google, I find nothing remotely related to this claim. Instead I see a lot of something like this:
Also it’s not clear to me what you mean by “far-left”—do you have more specific labels in mind? I consider myself fairly left-wing but have never heard of this doctrine, and highly doubt that my even more lefty friends would endorse anything like your claim above.
In this, you’re only considering far-left regimes that are highly state-controlled, rather than both libertarian and left-wing societies (e.g. anarchistic). If you look for these examples, you might actually find things are going pretty well internally (e.g. Zapatistas or Rojava) and that these societies don’t seek to eradicate sub-groups of a population—which is pretty uniform for far-right ideologies.
Can you clarify what you mean here? I'm trying to be charitable, but it seems like you're trying to cast doubt on the fact that far-right ideologies have caused harm to people, or to diminish the harm that has been caused. I'd appreciate you specifying exactly what you meant, as this could easily be read as a pretty reprehensible view.
James—you’re giving a very uncharitable interpretation that sounds politically motivated.
I explicitly said that it’s ‘often true’ that ‘far-right ideas have created enormous suffering’. In what sense was that ‘trying to cast doubt’ or ‘diminish the harm’?
Then I argued that far-left ideas have also created enormous suffering.
We can dispute what % of far-left nations become highly state-controlled such that they have the centralized capacity for totalitarian oppression. My estimate might be considerably higher than your estimate. I don’t see many examples of truly libertarian or anarchistic societies that last more than 10 years, or that involve more than 10 million people. But such disputes would get us into precisely the kinds of pointless partisan squabbling that EA tries (rightly) to avoid.
If you'd like to learn more about the Blank Slate doctrine, and its many harmful effects over the last couple of centuries, I'd highly recommend the classic Steven Pinker book The Blank Slate (2002).
Could you clarify the last paragraph I quoted, then? I'm genuinely unsure why you used the word "allegedly" if you do believe that far-right ideas have caused large amounts of harm.
I also wasn’t clear on what you meant by second or third-hand in this context, so clarifying that would also help me understand your position better.
James—I don’t get the sense that you’re arguing in good faith, but are looking for ‘gotcha’ quotes that you can share out of context. Sorry, I’m not interested in playing that game.
I don’t want to be rude, but this appears to be just shoddy overuse of rationalist lingo in the name of shoehorning a myopic and empirically unsupported political agenda into the consequentialist framework.
What observed empirical effects? You link to a very strange post saying, concretely, that
This person has had a falling-out with their friends who believe HBD, apparently because those friends have come to harbor other right-wing ideas poorly compatible with aspects of this person's identity and lifestyle.
Those friends had drifted to the right because they felt persecuted “by people on the left or center-left” due to them believing HBD.
This person had concluded that HBD is pseudoscientific, by virtue of right-wingers being nasty to trans people and vegans.
Pardon me, what? Is this your evidence base?
№1-2 might as well be considered arguments for lesser demonization of HBD. There is nothing inherently political about thinking one way or another about the sources of cognitive differences; the political valence is imposed on such hypotheses by external forces. If smart people independently arrive at HBD as a morally neutral explanation for generally available observations, then it's not very prudent on the part of "the left or center-left" to baselessly label them racists, supporters of genocidal far-right ideologies, or insane cranks, leaving them no choice except to break their own minds into an Orwellian mold, learn to live in falsehood, or drift rightward. When they say, like Bostrom, that they are motivated by humanitarian impulses, they can be taken at their word.
You, however, seem to conclude that the only problem is insufficient intensity of vilification of HBD, now as a “cause area” unto itself; that these people can be intimidated into not believing what they see, through pure peer pressure and pushing the topic to the fringe instead of rational persuasion.
№3 is honestly horrifying in terms of epistemic integrity. You seem to be dismissive of truth as a terminal value, so let's put it like this: a person who sees nothing wrong with such pseudoreasoning (and, given the score, that's normal on the EA Forum) can delude themselves into excusing arbitrary atrocities, or, less dangerously, into draining resources into arbitrarily ineffective causes just to feel good about themselves.
We don't have a good theory, in part, because there's no meaningful way to lump together "far-right ideas" over "the past few centuries", or indeed to seriously analyze anything prior to the 20th century through this lens. Do you mean Jacobites or Bourbons by far-right? Why not address la Terreur as an archetypal case of the idea of egalitarianism causing mass death and suffering in the characteristic manner of an infohazard? Should this make us suspicious of egalitarian ideation in general?
Here's an honest thought: the notion of "memetics" or "infohazards" is an infohazard in its own right. It's bad philosophy, and it offers zero explanatory power over traditional terms like "undeservedly popular idea", "misleading idea", or "dangerous idea", but it gives the false impression that such adjectives have been substantiated. It's just a way of whitewashing a classical illiberal and, indeed, totalitarian belief that some ideas must be kept away from the plebeians because they are akin to a plague. In illiberal societies those ideas are "democracy" and "independent thought"; we have a consensus that theories justifying restricted access to those are vacuous and evil, but those theories at least had some substance, unlike the equivocation here about suffering caused by the "far right" and, by an entirely frivolous extension, HBD.
In sum, analogizing ideas and their bearers to infectious agents invading and spreading within the body politic is a staple of far-right sociology that exploits deep-seated reactions of disease-associated disgust, fear and distrust of outsiders, and that’s all there is to “memetics” in such colloquial use. Perhaps you could do without resorting to such tools for thought.
Perhaps there is, and it’s called “law” and “democracy”, and you need to argue in a principled way for a cost-benefit analysis that finds extant legal and political checks against far-right threats insufficient, and that ends by embracing some of the worst totalitarian legacies to ostracize an apparent scientific truth.
I upvoted this post and think it’s a good contribution. The EA community as a whole has done damage to itself the past few days. But I’m worried about what it would mean to support having less epistemic integrity as a community.
This post says both:
and
The first quote says believing X (that there exists a racial IQ gap) is harmful and will result in nobody trusting you. The second says X is, in fact, true.[1]
For my own part, I will trust someone less if they endorse statements they think are false. I would also trust someone less if they seemed weirdly keen on having discussions that kinda seem racist. Unfortunately, it seems we’re basically having to decide between these two options.
My preferred solution is to—while being as clear as possible about the context, and taking great care not to cause undue harm—maintain epistemic integrity. I think “compromising your ability to say true, relevant things in order to be trusted more” is the kind of galaxy-brain PR move that probably doesn’t work. You incur the cost of decreased epistemic integrity, and then don’t fool anyone else anyway. If I can lose someone’s trust by saying something true in a relevant context,[2] then keeping their trust was a fabricated option.
I’m left not knowing what this post wants me to do differently. When I’m in a relevant conversation, I’m not going to lie or dissemble about my beliefs, although I will do my best to present them empathetically and in a way that minimizes harm. But if the main thrust here is “focus somewhat less on epistemic integrity,” I’m not sure what a good version of that looks like in practice, and I’m quite worried about it being taken as an invitation to be less trustworthy in the interest of appearing more trustworthy.
I’ve seen other discussions where someone seems to both claim “the racial IQ gap is shrinking / has no genetic component / is environmentally caused” and “believing there is a racial IQ gap is, in itself, racist.”
I think another point of disagreement might be whether this has been a relevant context to discuss race and IQ. My position is that if you’re in a discussion about how to respond to a person saying X, you’re by necessity also in a discussion about whether X is true. You can’t have the first conversation and completely bracket the second, as the truth or falsity of X is relevant to whether believing X is worthy of criticism.
I am grateful for this post and think it demonstrates bravery that Rohit didn’t need to show. He’s a thoughtful, accomplished professional who has approximately no personal incentive to write this out.
I hope readers who wish for a healthy community around the ideas of effective altruism, and who want good-faith, thoughtful engagement from people exploring the questions effective altruists consider important, reflect on the damage that the discourse of the past few days (and the incident that kicked it off, Bostrom’s poor statement on his historical failings) has caused to the mission of making the world a better and safer place.
I gave this post a strong downvote, but I’m happy to explain if you would like. I’ve added this comment because I’ve seen some people complain in the past about receiving downvotes without receiving comments. I considered adding details directly in this comment, but I didn’t want to provide unsolicited criticism in this particular case.
You should either explain the reasoning for your strong downvote or not make this comment at all; it’s pretty noisy and doesn’t add useful information, unless you, as the specific individual behind the strong downvote, are meaningful in some way. (Imagine if everyone simply made a comment describing what their vote was.)
Edit: I have retracted my downvote given your edit. I agree with Julian that I would also be interested in the reasoning behind the strong downvote.
That’s a good point, but I’ve seen people upset before about receiving downvotes without comments. At the same time, I didn’t want to make unsolicited criticism in this particular instance.
I think that’s reasonable. What really bothers me is a dogpile of downvotes without comments. If you come upon something with +60 points and downvote it, the person downvoted is unlikely to even notice. On the other hand, I recently commented on something with over 100 downvotes where not a single person had answered the guy’s questions. Yes, ze was rude and unreasonable, but it still bothered me how people handled it.
Humans evolved strong punishment norms for a reason.
While I normally value EA members’ willingness to break social norms, I’m finding myself wishing for the chilling effect of taboos and punishments right now. The forum has done incredible damage to itself over the last few days.
I am not proud of human history. Why do you want unkindness so much in this case?
Because I’m a consequentialist, and it seems like I need the aforementioned norms to get good consequences (EA not doing irreparable damage to its reputation) in this case.
Social organizations like EA face their own form of natural selection. EA competes for members and funding with other good-doing and intellectual communities, in an environment where prospective members and funders almost universally believe that saying “different races have different average IQs” is irredeemably racist. A large portion of EAs rallying in support of someone who said that is therefore a surefire way for EA to lose members and funds.
It would be adaptive* for EA to have norms in favor of downvoting strongly taboo-breaking comments that have little to no utility-creating potential.
*butchering the definition of that word a bit
Is that true? I am skeptical. Notably, this seems to be controversial among the current membership (which is exactly what OP is complaining about!)
I am strongly confident that this is true. My prior is something like 99%. I can’t think of a single person I’ve met in real life (and I’ve been offline involved with political organizations, nonprofits, and a wide variety of intellectual communities) who wouldn’t consider “different races have different average IQs” to be prima facie racist. The number goes up only slightly for people I’ve encountered online (and more than half of them were encountered over the past few days).
Edit:
I think this demonstrates just how disconnected some EAs are from mainstream social norms (I don’t mean that as an insult; being disconnected from mainstream norms has its benefits, though I think it’s bad in this specific case). Claiming a difference in intelligence between races is one of the worst things you can say in polite society in 2023. It’s pretty much on par with rape apologia or advocating for pedophilia. It’s worse than, say, advocating in support of torture.
99% is really too high. It’s more than 1% likely that you’re just in a very strong ideological filter bubble (which are surprisingly common; for example, I know very few Republican voters even though that’s roughly half the US). The fact that this is a strong social norm makes that more likely.
I already said this, but I don’t really understand how you can be so confident in this given the current controversy. It seems pretty clear that a sizeable fraction of current members don’t agree with “saying “different races have different average IQs” is irredeemably racist”. Doesn’t that disprove your claim? (current members are at least somewhat representative of prospective members)
I think a historical strength of EA has been its ability to draw from people disconnected from mainstream social norms, especially because there is less competition for such people.
I might be wrong! But I stand by it. I don’t believe myself to be in an ideological bubble. I grew up in the south, went to college in a highly rural area, and have friends across the political spectrum. Most of my friends from college are actually Republican, a few are even Trump supporters (honestly, I think they have some racial bias, but if you asked them “is saying white people have higher IQs than black people racist?” I’m highly confident they would say yes).
The current controversy is pretty easily explainable to me without updating my priors: the EA community has attracted a lot of high decoupler rationalists who don’t much care about mainstream norms (which again, is a virtue in many cases—but not this one).
Yeah, that explanation seems right. But—the high-decoupler rationalists are the counterexample to your claim! That group is valuable to EA, and EA should make sure it remains appealing to them (including the ones not currently in EA—the world will continue to produce high-decoupler rationalists). Which is not really consistent with the strong norm enforcement you’re advocating.
I think this is a decent argument, but I probably disagree. I think most high decouplers aren’t utilitarian or utilitarian-adjacent, and aren’t inclined to optimize for social welfare in the way I think it’s important for EA to do. I have another comment arguing, somewhat provocatively, that rationalist transplants may harm the EA movement more than help it, by being motivated by norm-violative truth-seeking over social welfare.
But as I say in the other post, I wouldn’t point out any individual rationalists/high-decouplers as bad for the movement; my argument is just about group averages ;)
FWIW, I’m highly longtermism-skeptical for epistemic reasons, so I value the influx of people who care a lot about AGI alignment and whatnot much less than most people on here do.
High-decoupling truth seekers who wanted to do the most good were necessary for founding the movement. Now that it exists, they aren’t. As time goes on it will more closely approach the normal charitable movement, where fashion and status seeking are as important as, if not more important than, doing the most good. This was inevitable from the founding of the movement. Autists and true believers always lose, eventually, to people who are primarily interested in power and social status, in every social movement. Getting ten or twenty years of extra good, over and above the normal charitable complex, before the movement degenerates into the same thing is good. https://meaningness.com/geeks-mops-sociopaths
Decoupling by definition ignores context. Context frequently has implications for social welfare. Utilitarian goals therefore cannot be served without contextualizing.
I also dispute the idea that the movement’s founders were high decoupler rationalists to the degree that we’re talking about here. While people like Singer and MacAskill aren’t afraid to break from norms when useful, and both (particularly Singer) have said some things I’ve winced at, I can’t imagine either saying anything remotely like Bostrom’s statement, nor thinking that defending it would be a good idea.
I can’t imagine Singer having a commitment to public truth telling like Bostrom’s either, because he’s published on the moral necessity of the Noble Lie[1]. If he believed something that he thought publicising would be net negative for utility, he would definitely lie about it. I’m less sure that MacAskill would lie if he thought it expedient for his goals, but the idea that he’s not a high decoupler beggars belief. He’s weird enough to have come up with a highly fit new idea that attracts a bunch of attention, and even real action, from intellectually inclined philosophy lovers. Normal people don’t do that. Normal people don’t try to apply their moral principles universally.
[1] https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9329.2009.00449.x
https://betonit.substack.com/p/singer-and-the-noble-lie
Lying to meet goals != contextualizing
It’s hard for me to follow what you’re trying to communicate. Are you saying that high contextualizers don’t/can’t apply their morals universally while high decouplers can? I don’t see any reason to believe that. Are you saying that decouplers are more honest? I also don’t see any reason to believe that.
I’m saying decouplers are more likely to be honest with themselves, and if they value truth terminally, to be honest with others. Having multiple goals that trade off against each other in ways that are unclear to people who hold them, never mind anyone else, generally goes along with this kind of muddled thinking. You don’t get to be any good as an academic philosopher without being a high decoupler. It’s a necessity for being expert, never mind world class.
Peter Singer seems pretty clearly to be a utilitarian above and beyond any other moral considerations, so if he thought lying would increase utility, he’d do it. Nick Bostrom clearly values truth seeking over short-term considerations. If you don’t value the longtermist element of EA, that’s fine: have fewer people die, have more people live long, healthy, happy lives, perhaps care about animal welfare on some level. That’s plenty for the next 30 years, and truth as a terminal value is not really relevant to it. Being right is, however, very important if you’re trying to get things right on the scale of 100 years, never mind anything longer than that. Not lying, even when it’s convenient, is important. The purge-Bostrom faction almost certainly mostly believes that what they profess is true. But purging people for having different beliefs is not truth-seeking behaviour. If you’re a longtermist you can’t do that. You can’t lie for political convenience either, unless you plan to do everything important in secret, because every area where your model and the world differ is an opportunity to fuck up, and you’re very likely to do so anyway.
If all “high context” means is that Bostrom could have said the same thing but less bluntly, fine. It seems extremely unlikely that those calling for his ouster would have found a more delicate statement carrying the same true message any less offensive.
Not sure what it is about this specific instance that makes you more reluctant to share your views. I’d still be interested though, so feel free to DM me if you think it’s an explanation that requires buy-in from the OP before you feel comfortable sharing it publicly, if you want!
I would like to know why. I found the post insightful.
Seems kind of obvious.
It’s neither sincere nor well-argued. Added: It strikes me that it’s unlikely to be sincere, because of the way that it builds the whole post around a hugely unfavourable and unjustified interpretation of Bostrom’s views.
The author states that Bostrom is wrong and (edit: that it is silly that he maintains a view that he doesn’t seem to hold), without stating any arguments for why he is wrong, while attributing to him beliefs that he hasn’t expressed and implicitly agreeing with his observations about intelligence (edit: assuming, not over-generously, that he’s thinking about the IQ/attainment gap across the global population, rather than something more innate).
The only truly controversial question in the race and intelligence debate is whether there is a genetic component to the real IQ differences we see between nations/ethnic groups, and Bostrom barely mentions this.
I think saying “seems kind of obvious” in this context is not kind, and implying that a post is not sincere — without very strong evidence — is also not good and certainly doesn’t assume good faith. I also think that “the author states that Bostrom is … silly” somewhat misrepresents the post.
Please be kind and keep to the norms.
I don’t like it when people insist that their favourite interpretation of a vague sentence is the correct one and accuse others of misrepresenting when people complain about other interpretations.
There is a huge difference between these two sentences:
1: “x people are stupid”
2: “The people that we socially ascribe to the race X had lower scores in IQ tests on average the last 70 years”
A lot of people in this forum conflate the first sentence with the much weaker second one. This is classic motte and bailey.
Sentence 1 is vague, and it can be interpreted in a way similar to “copper is conductive”. That interpretation (race pseudoscience) would imply the following:
a. There is a scientifically valid category of the race x.
b. There is a causal relationship (or a law of nature) between the race x and intelligence.
c. Boo x people! (because stupid is a loaded word)
There is also another controversial inference from IQ test scores to intelligence but I’m setting this aside for now.
You can’t blame people for interpreting Bostrom’s original statement in that way. Powerful people advocated for pseudoscientific theories making exactly these claims in the past, and many are still making these claims. Bostrom doesn’t explicitly disavow these claims in his apology either.
Some people here seem to be very concerned about deception by omission. Some say that if Bostrom had excluded the paragraph starting with “Are there any genetic contributors to differences between groups in cognitive abilities?”, that would have been deception. I don’t think that’s true. But more importantly, if we are going to be concerned about omissions, a more misleading omission is him not disavowing race pseudoscience in his apology. I think his apology is currently misleading people into thinking that the “race pseudoscience” interpretation of his original statements is the correct interpretation, and that he is merely apologising for the slur and for using a loaded word like “stupid”. Because of this, his apology provides unwarranted and harmful credibility to a pseudoscientific theory.
“I don’t like it when people insist that their favourite interpretation of a vague sentence is the correct one and accuse others of misrepresenting when people complain about other interpretations”
I definitely acknowledge that ‘X people are stupid’ can have lots of interpretations, and mine was more favourable. But to write a whole blog post assuming a very specific, negative and distinctive explanation seems a lot worse than my response in this respect.
I think rejecting “race pseudoscience” is difficult because it’s surely meaningless to just say: I reject pseudoscience.
He would have to go into all of the messy details about what he considers pseudoscience and doesn’t, which, to be honest, would probably make people even more angry, wherever he chose to draw the line.
I find it extremely hard to believe this isn’t the case.
I think the author is sincere, though not arguing well. I don’t understand what is going on with the downvote brigades here and it bothers me tremendously. But are they sincere? I think so.
Consider my now-76-year-old father, for instance [edit: basically he’s an example where, when I read what he writes, he sounds like some kind of internet troll, but the explanation of “insincerity” seems wrong]. His brother died of Covid. His favorite televangelist died of Covid. I argued passionately for over a year that he should get a vaccine. He refused not only to get a vaccine, but also to answer the more-than-100 questions I asked him about his beliefs [edit: and about various topics surrounding this]. As a result we are estranged and there are all kinds of bad things I could say about his behavior, but he did risk his life for his beliefs, and that’s a very strong mark of sincerity. And I think that his level of sincerity is very common, and that people usually underestimate how sincere other people are.
I think starting the post with “Do Better” is a kind of rhetorical flourish that probably erodes goodwill between you and the people you want to convince, while giving no reasons for people to agree with what you are saying. It’s a common turn of phrase, and when I see it I often think it’s an effort to shame people into agreeing with you, to assert moral superiority without actually providing an argument for it. In your post you do make a number of arguments which I think are pretty good. I don’t think they need to be embellished with some low-key shaming in your post’s title.
Edit: I ought to explain more clearly why I claimed the above. The exhortation “Do Better” carries an unambiguous implication that the recipient isn’t putting in as much effort as they ought to. This is probably true in many cases, but there are probably people with many different starting positions in this discussion who are doing as well as they can to understand how they should respond. So “do better”, as an exhortation to everyone who disagrees with the particular claims one is making, seems quite blunt and carries inaccurate implications. People who are already doing the best they can might conclude that the only way they can “do better” is to change their attitude or position without really understanding why they should do so.
I don’t know—there are probably some issues where that’s fine, given the stakes, but it is poor epistemic practice because it has rhetorical persuasive power independent of the truth value or clarity of the claims being made, but possibly very dependent on social norm adherence.
There is a distinction between IQ gaps existing, “intelligence” gaps existing, and intelligence gaps existing that are attributable to genetic differences between populations. This is also distinct from “race X is inferior to race Y.”
Different races have different average scores on IQ tests, which it appears you acknowledge. Intelligence tests are created by assembling a wide range of cognitively demanding test items. Their scores happen to align well with what people generally mean when they say “intelligent,” although perhaps not perfectly.
Believing in the existence of gaps attributable to genetic differences is not a dumb prior. It would be astounding if all people at all times in all places, no matter how you divided them, happened to have the exact same average in this polygenic trait. This is especially so considering that cognitive ability influences behavior with regard to immigration, fertility, assortative mating, etc., in ways that would create deviations from perfect equality.
Believing in differences does not mean that we should stop treating others with dignity and respect. Despite believing the above, I treat others well because my treatment of a person is not contingent on the statistical average of a particular group they are a part of. I think people can share my belief and still be dedicated to doing good in the future.
I would suspect a sizeable portion of EA would agree.
FYI, just as you complained, and as the screenshot you put in the end says, this very post is categorised as a “personal blog” post, and most people aren’t going to see it. Try also linking to it from the megathread?
(I think this is why the barrage of downvotes hasn’t come for it yet.)
Okay, first of all, you, like other people complaining about the apology, haven’t actually said what it is you object to. So let’s see...
Is it that part you don’t like?
Is it that part you don’t like?
Is it that part you don’t like?
Is it that part you don’t like?
And so on. You get the idea. Edit: you said it was “to put it mildly, mealy mouthed”, which Bing tells me is “avoiding the use of direct and plain language”. So what leads you to think Bostrom avoided plain language “to put it mildly”? You also say it was “without much substance”. So, how would you have said it in such a way that it does have substance and plain language, if you were Nick? I suspect that there is nothing he could have reasonably said to pacify you, but if in fact there was an apology that would have suited you, you can prove it by writing it. (Edit 3: I apologize for not having noticed that the post suggested that Bostrom should have said “hey I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn’t make sense. It’s closing, but not fast enough. We should work harder on fixing this.” But why is this statement better than what Bostrom said? Is it because this new version vaguely suggests that it is not possible for IQ differences to be genetic? But of course, Bostrom decided he could not make that judgement without expertise or data. Is it this humility that bothers you? Do you believe it’s not “real” humility? Also, do you not see how confusing it is that you imply it’s okay to acknowledge an IQ gap after making it clear that it’s unacceptable to do that?)
I can’t imagine what reasoning process you are using here. If I do a survey and ask people whether people in poverty could possibly grow up with a lower IQ than people in wealth, what result do you think it will have? Do you really think environment and upbringing can’t affect intelligence?
And now, what if the poor people happen to have a different skin color?
Do you not admit, then, ________? (not even going to ask because the question itself angers people)
I think the main issues with Bostrom’s original posting (the one he apologized for) were (1) that he said something extremely easy to misinterpret, (2) said a word he shouldn’t say and (3) didn’t qualify his words.
But did he even so much as say, 26 years ago, that there are genetic differences in IQ? No, he didn’t.
Not only did he not “stick with that”. He didn’t even say it in the first place.
And I saw at least one person on Twitter saying ze was disappointed with EAs (especially leadership) for how they’re roasting Bostrom for this, and that it makes zim want to affiliate less with EA in the future.
There’s only so much SJWing people can take.
Edit 2: and by the way, have you noticed that the above is full of questions? Immediately after I posted this, I got two strong downvotes or high-karma downvotes as well as disagreement votes. But an hour later, my questions have not been answered. I literally don’t know why people do this, but in my opinion it is mean-spirited to downvote as one’s sole response, just as I think OP is mean-spirited toward Bostrom. Okay, so you think I’m wrong, fine. Reasonable people can disagree. But tell me why.
I think everything after the words “Christian Blind Mission” was unnecessary and if he had ended the statement there, it would have been less provocative. I don’t think that would have been the best possible choice but it would have been better than what we got.
That’s something I couldn’t have predicted before he posted the thing, but I’m sure a more skilled communicator could have.
The reason the words afterwards were harmful is because irrespective of their truth value, they raise an issue that is potentially harmful just to get into. By talking about it in a particular context when it isn’t necessary, you give it airtime you don’t need to. I think that’s the dimension that is missed here.
Words aren’t just truth claims; they are also locutionary acts that draw readers’ attention to certain topics. The words “smoking is bad for your lungs” are true and even helpful in many contexts, but they may also remind a smoker about smoking which could have an unintended consequence of causing a smoker to smoke. There may also be unintended negative consequences from jumping into a discussion about what you do and don’t believe about eugenics or genetics and IQ.
That’s a fair point. But Rohit’s complaint goes way beyond the statement being harmful or badly constructed. Ze is beating around the bush of a much stronger and unsubstantiated claim that is left unstated for some reason: “Bostrom was and is a racist who thinks that race directly affects intelligence level (and also, his epistemics are shit)”.
What ze does say: “his apology, was, to put it mildly, mealy mouthed and without much substance”; “I’m not here to litigate race science”; “someone who is so clearly in a position of authority... maintaining this kind of view”; “If you believe there are racial differences in intelligence”; “a third of the community seems to support him” [implied to be a bad thing]; “applauding someone for not lying is great but not if the belief they’re holding is bad”; “Do not mistake ‘sticking with your beliefs’ to be an overriding good, above believing what’s true”; “sticking with the theory that ‘race X is inferior in Y’”; “leaders of your movement is saying these things that are just wrong”.
Reading recently about the razing of an entire Black community in Palm Springs, 60 years ago. https://www.theguardian.com/us-news/2023/jan/15/california-palm-springs-section-14-homes-burned-survivors-justice “Their houses had burned, sometimes with their belongings inside – no time to evacuate or no place to go.” This is one of many examples of the destruction of Black communities, of lives, generational wealth, culture. Throughout American history: Elaine, Tulsa, Rosewood, Wilmington, … many more.
I’ll bet those victims didn’t do so well on an IQ test the day after their lives were destroyed.
This race-determines-IQ-determines-worthiness thread is one in a long line of sophisticated arguments that intellectuals have comfortably made for centuries and more (cf. Buckminster Fuller on the great pirates and the invention of the university; cf. General Robert E. Lee weeping for the damage that slavery was doing to the slave owners).
The goal is to make the elite comfortable and justified in profitable repression.
I honestly welcome someone to explain why EA, a great concept in principle, is not in part a smokescreen for the same old racket. Thanks, everyone, for the chance to participate in the discussion.
This is a re-post. Someone “canceled” the previous attempt. Not such an open forum, it turns out.
I’d argue that at least until the advent of Longtermism, the actions taken in practice by EAs were mostly about transferring wealth and welfare from rich Westerners to the poor in Africa and Asia. This is still the case for a majority of EA funding (although the gap is narrowing).
You could counter-argue that all of philanthropy is a smokescreen to protect the rich while giving a semblance of improving the problems of the poor. I agree in principle, but in practice I think there’s a trade-off between applying this reasoning to intra-country vs. inter-country inequality: money that rich Western individuals give as taxes almost entirely remains within rich Western countries. You can use funds to lobby Western governments to increase foreign aid (and it’s been done by EAs in Zürich and in the UK, for example), but that would still: (a) require funds, and (b) take agency away from the already-powerless recipients. So again, it’s a trade-off in terms of justice.