CEA statement on Nick Bostrom’s email
Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.
— The Centre for Effective Altruism
A short note as a moderator:[1] People (understandably) have strong feelings about discussions that focus on race, and many of us found the content that the post is referencing difficult to read. This means that it’s both harder to keep to Forum norms when responding to this, and (I think) especially important.
Please keep this in mind if you decide to engage in a discussion about this, and try to remember that most people on the Forum are here for collaborative discussions about doing good.
If you have any specific concerns, you can also always reach out to the moderation team at forum-moderation@effectivealtruism.org.
Mostly copying this comment from one I made on another post.
I feel really quite bad about this post. Despite it being only a single paragraph it succeeds at confidently making a wrong claim, pretending to speak on behalf of both an organization and community that it is not accurately representing, communicating ambiguously (probably intentionally in order to avoid being able to be pinned on any specific position), and for some reason omitting crucial context.
Contrary to the OP it is easy to come up with examples where within the Effective Altruism framework two people do not count equally. Indeed most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.
Saying “all people count equally” is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it doesn’t really hold any water after even just a tiny bit of poking, and your only link for this assertion is a random article written by CEA, which doesn’t argue for this claim at all and just blindly asserts it). It is still the case that most EAs believe that the variance in the importance of different people’s experience is relatively small, that this variance almost certainly does not align with historical conceptions of racism, and that there are at least some decent game-theoretic arguments for ignoring a good chunk of this variance. But this does not make “all people count equally” a “core belief”, a label that should be reserved for an extremely small number of values and claims. It might be a good enough approximation in almost all practical situations, but it is really not a deep philosophical assumption of any of the things that I am working on, and I am confident that if I were to bring it up at an EA meetup, someone would quite convincingly argue against it.
This might seem like a technicality, but in this context the statement is specifically made to claim that EA has a deep philosophical commitment to valuing all people equally, independently of the details of how their minds work (whether because of genetics, or development environment, or education). This reassurance does not work. I (and my guess is also almost all extrapolations of the EA philosophy) value people approximately equally in impact estimates because it looks like the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality. If it were the case that different human populations did differ a lot on the relevant dimensions, this would spell a real moral dilemma for the EA community, with no deep philosophical commitments to guard us from coming to uncomfortable conclusions (luckily, as far as I can tell, in this case almost all analyses from an EA perspective lead to the conclusion that it’s probably reasonable to weigh people equally in impact estimates, which doesn’t conflict with society’s taboos, so this is not de facto a problem).
Moving on, I do not believe that this statement is speaking on behalf of the employees of CEA, many of whom I am confident also feel quite badly represented by this statement, and it is also not speaking on behalf of Effective Altruism. I don’t know what process produced it, but I don’t think it is speaking for me or almost anyone else I know within the EA community. Organizations themselves don’t have beliefs, and EA has generally successfully avoided descending into meaningless marketing and PR speech where organizations take positions despite nobody at those organizations actually believing those positions. If you want to make a statement on this matter, speak as an individual. Individuals can meaningfully have beliefs. Organizations pretending to have beliefs is usually primarily a tactic to avoid taking responsibility and to create a diffuse target.
Additionally, it is completely unclear from your statement whether you are referring to Bostrom’s original email or to Bostrom’s apology. I don’t know why you are being ambiguous, but it seems quite plausible that you are doing so in order to avoid being pinned on either repudiating the statements in Bostrom’s apology, which seem quite reasonable to me and many other EAs (and would therefore attract ire from the community), or failing to repudiate those same statements, which are attracting a lot of ire publicly for not being explicitly anti-racist enough. If this is indeed what you are doing, then please stop. This ambiguity is toxic to clear communication. If this is not what you are doing, then please clarify, and also please get better at writing; it seems really extremely obvious that this was going to be a problem with this statement.
Lastly, you are also not linking to either Bostrom’s original statement, or his apology. I don’t know why. It would both clear up the ambiguity discussed above, and it would provide crucial context for anyone trying to understand what is going on, and who might have not seen Bostrom’s apology. My guess is you are doing this with some other PR reason in mind. Maybe so that when people Google this topic later on this doesn’t show up in search? Maybe so that the lack of context makes it less likely that other people outside of the community will understand what this statement is about? In any case, either please get better at communicating, or stop the weird PR games that you are seemingly trying to play here.
Overall, despite this being only a single paragraph, I think little produced by CEA has made me feel as badly represented, or as alienated from the EA community, as this statement. Please abandon whatever course you are setting out on where this is how you communicate with both the public and the community.
I think I do see “all people count equally” as a foundational EA belief. This might be partly because I understand “count” differently to you, partly because I have actually-different beliefs (and assumed that these beliefs were “core” to EA, rather than idiosyncratic to me).
What I understand by “people count equally” is something like “1 person’s wellbeing is not more important than another’s”.
E.g. a British nationalist might not think that all people count equally, because they think their copatriots’ wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people.
“most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus”
In all of these situations, I think we can still say people “count” equally. QALY frameworks don’t say that young people’s wellbeing matters more—just that if they die or get sick, they stand to lose more wellbeing than older people, so it might make sense to prioritize them. This seems similar to how I prioritize donating to poor people over rich people—it’s not that rich people’s wellbeing matters less, it’s just that poor people are generally further from optimal wellbeing in the first place. And I think this reasoning can be applied to hypothetical people/beings with greater capacity for suffering. I think greater capacity for happiness is trickier and possibly an object-level disagreement—I wouldn’t be inclined to prioritize Happiness Georg’s happiness above all else, because his happiness outweighs the suffering of many others, but maybe you would bite that bullet.
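The distinction can be made concrete with a toy calculation (the numbers, the assumed life expectancy, and the `qalys_at_stake` helper are purely illustrative, not anyone’s actual moral weights): under a QALY framing, prioritizing the younger person falls out of how many life-years are at stake, not out of weighting their wellbeing more per year.

```python
# Hypothetical sketch: equal moral weight per life-year, unequal stakes.
LIFE_EXPECTANCY = 80  # assumed for illustration only

def qalys_at_stake(age, quality=1.0):
    """Quality-adjusted life-years lost if this person dies now."""
    return max(LIFE_EXPECTANCY - age, 0) * quality

young = qalys_at_stake(20)  # 60 QALYs at stake
old = qalys_at_stake(70)    # 10 QALYs at stake

# Each QALY counts the same for both people; prioritizing the
# 20-year-old follows from the larger number of QALYs at stake,
# not from valuing their wellbeing more per year.
assert young > old
```

The same move handles the rich/poor case in the comment above: equal weight per unit of wellbeing, but different distances from optimal wellbeing.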
Thanks for writing out a reaction very similar to my own. As I wrote in a comment on a different topic, “it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time.”
I agree that “all people count equally” is an imprecise way to express that value (and I would probably choose to frame it through the lens of “value” rather than “belief”), but I read this as an imprecise expression of a common value in the movement rather than a deep philosophical commitment to valuing all minds exactly the same.
But there is a huge difference in this case between something being a common belief and a philosophical commitment, and there is also a huge difference between saying that space/time does not matter and that all people count equally.
I agree that most EAs believe that people roughly count equally, but if someone was to argue against that, I would in no way think they are violating any core tenets of the EA community. And that makes the sentence in this PR statement fall flat, since I don’t think we can give any reassurance that empirical details will not change our mind on this point.
And yeah, I think time/space not mattering is a much stronger core belief, but as far as I can tell that doesn’t seem to have anything to do with the concerns this statement is trying to preempt. I don’t think racism and similar stuff is usually motivated by people being far away in time and space (and indeed, my guess is something closer to the opposite is true, where racist individuals are more likely to feel hate towards the immigrants in their country, and more sympathy for people in third world countries).
One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads EA to emphasize poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy, which tends to favor donations local to wherever the donor is.
This is narrower than the broad philosophical commitment Habryka is talking about, though. Taken as a broad philosophical commitment, “all people count equally” would force some strange conclusions when translated into a QALY framework, and when applied to AI, and would also imply that you shouldn’t favor people close to you over people in distant poor countries at all, even if the QALYs-per-dollar were similar. I think most EAs are in a position where they’re willing to pay $X/QALY to extend the lives of distant strangers, $5X/QALY to extend the lives of acquaintances, and $100X/QALY to extend the lives of close friends and family. And I think this is philosophically coherent and consistent with being an effective altruist.
I don’t think this goes through. Let’s just talk about the hypothetical of humanity’s evolutionary ancestors still being around.
Unless you assign the same moral weight to an ape as you do to a human, this means that you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn’t even any clean line to draw between humans and our evolutionary ancestors.
Similarly, I don’t see how you can be confident that your moral concern in the present day is independent of exactly that genetic variation in the population. That genetic variation is exactly the same variation that over time made you care more about humans than other animals, amplified by many rounds of selection, and as such, it would be very surprising if there was absolutely no difference in moral patienthood among the present human population.
Again, I expect that variance to be quite small, since genetic variance in the human population is much smaller than the variance between different species, and also for that variance to really not align very well with classical racist tropes, but the nature of the variance is ultimately the same.
And the last part of the sentence that I quoted seems also not very compatible with this. Digital people might have hugely varying levels of capacity for suffering and happiness and other things we care about, including different EMs. I indeed hope we create beings with much greater capacity for happiness than us, and would consider that among one of the moral priorities of our time.
For information, CEA’s OP links to an explanation of impartiality:
That paragraph does feel kind of confused to me, though it’s hard to be precise in lists of principles like this.
As jimrandomh says above, it is widely accepted in EA that time and location do not matter morally (well, more so location; I think it’s actually pretty common for EAs to think that far-future lives are worth less than present lives, though I don’t agree with this reasoning). But that clearly does not imply that all people count equally, given that there are many possible reasons for differing moral weights.
EMs?
“Emulated Minds” aka “Mind uploads”.
Brain Emulations—basically taking a person and running a simulation of them on a computer, where they could potentially be copied, run faster or slower, etc.
Thanks for writing this up Amber — this is the sense that we intended in our statement and in the intro essay that it refers to (though I didn’t write the intro essay). We have edited the intro essay to make clearer that this is what we mean, and also to make clear that these principles are more like “core hypotheses, but subject to revision” than “set in stone”.
Sorry for the slow response.
I wanted to clarify and apologise for some things here (not all of these are criticisms you’ve specifically made, but this is the best place I can think of to respond to various criticisms that have been made):
This statement was drafted and originally intended to be a short quote that we could send to journalists if asked for comment. On reflection, I think that posting something written for that purpose on the Forum was the wrong way to communicate with the community and a mistake. I am glad that we posted something, because I think that it’s important for community members to hear that CEA cares about inclusion, and (along with legitimate criticism like yours) I’ve heard from many community members who are glad we said something. But I wish that I had said something on the Forum with more precision and nuance, and will try to be better at this in future.
The first sentence was not meant to imply that we think that Bostrom disagrees with this view, but we can see why people would draw this implication. It’s included because we thought lots of people might get the impression from Bostrom’s email that EA is racist and I don’t want anyone — within or outside the community — to think that. Nevertheless this was sloppy, and is something that we should have caught when drafting it. Sorry.
We also intended the first sentence to have a meaning like Amber’s interpretation above, rather than the interpretation you had, but we agree that this is unclear. We’ve just edited the intro essay to make clearer that this is what we mean, and also to make clear that these principles are mostly more like “core hypotheses, but subject to revision” than “set in stone”.
This statement was intended as a reaction to Bostrom’s initial email (CW that this link includes a racial slur). I agree that if we had linked to that email it would have been clearer, and at the time I posted it I didn’t even consider that this might be ambiguous. Sorry.
More generally, we’re thinking about how we can improve our responses to situations like this in the future. I’m also planning to write up more about our overall approach to comms (TL;DR is that I agree with various concerns that have been raised about CEA and others in the community caring too much about PR concerns; I think truthfully saying what you believe — carefully and with compassion — is almost always more important than anything else), but it might be a little while before I get round to that.
CEA’s current media policy forbids employees from commenting on controversial issues without permission from leaders (including you). Does the view you express here mean you disagree with this policy? At present it seems that you have had the right to shoot from the hip with your personal opinions but ordinary CEA employees do not.
I appreciate this
At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of “equality” in this piece is the one very typical of, even to an extent uniquely radical about, EA: when Bentham says “each to count for one and none for more than one”, when Sidgwick talks about the point of view of the universe, or when Singer discusses equal consideration of equal interests. I would chalk this up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top-voted comment can be accounted for by mass reading comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values that said something about how people count equally regardless of where in space and time they are, or how sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god just take the L; this behavior is very uncharming.
I think what Habryka is saying is that while EA does have some notion of equality, the reason it sticks so close to mainstream egalitarianism is that humans don’t differ much. If there were multi-species civilizations like those in Orion’s Arm, for example, where differences in abilities span multiple orders of magnitude, then a lot of stratification and non-egalitarianism would happen solely through the value of freedom/empowerment.
And this poses a real moral dilemma for EA, primarily because of impossibility results around fairness/egalitarianism.
Who supports this? This is an extremely radical proposal, that I also haven’t seen defended anywhere. Of course sentient beings don’t count equally regardless of their species, that would imply that if fish turn out to be sentient (which they might) their moral weight would completely outweigh all of humanity right now. Maybe you buy that, but it’s definitely extremely far from consensus in EA.
In-general I feel like you just listed 6 different principles, some of which are much more sensible than others. I still agree that indifference to location and time is a pretty core principle, but I don’t see the relevance of it to the Bostrom discussion at hand, and so I assumed that it was not the one CEA was referring to. This might be a misunderstanding, but I feel like I don’t really have any story where stating that principle is relevant to Bostrom’s original statement or apology, given that racism concerns are present in the current day and affect people in the same places as we are. If that is the statement CEA was referring to, then I do withdraw that part of the criticism and replace it with “why are you bringing up a principle that doesn’t seem to have much to do with the situation?”.
And then beyond that, I do indeed think that asserting there is no difference whatsoever in moral consideration between people is pretty crazy, and I haven’t seen it defended. I am not that familiar with Bentham’s exact arguments here, and I don’t think he is particularly frequently cited (or at least I haven’t seen it). I also haven’t seen most of the other philosophers cited here except Singer, and I would be happy to have my first object-level discussion now about whether you think a principle of perfectly equal moral consideration should hold. Singer has gone on record thinking that different people indeed have different moral weight, and this is one of his most controversial beliefs (i.e. his disability stuff is a consequence of that and has in the past gotten him cancelled at various universities), so I don’t know what you are referring to here as the principle, though I also feel pretty confused about Singer’s reasoning here.
In-general I think we discuss the differing moral weight of different animals all the time, and I don’t see us following a principle that puts sentient/conscious beings into one large uniform bucket.
Equality is always “equality with respect to what”. In one sense, giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing), the two are very unequal. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal in some way (either you treat the two unequally with respect to money, or with respect to welfare, for instance).
The most radical view of equality of this sort is that for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it. This is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.
Even if you disagree with some of the extreme applications of the principle, race is easy mode for this. Virtually everyone today agrees with equality in this case, so given what a unique cornerstone of EA philosophy this type of equality is in general, in cases where it seems that people are being treated with callousness and disrespect based on their race, it makes sense to reiterate it, since such cases are an especially worrying sign for us. Again, you might disagree that Bostrom is failing to apply equal respect of this sort, or feel that this use of the word equality is not how you usually think of it, but I find it suspicious that so many people are boosting your comment given how common, even mundane, statements like this are in EA philosophy, and that the statement links directly to a page explaining it on the main EA website.
This feels to me like it is begging the question, so I am not sure I understand this principle. This framing leaves open the whole question of “what determines how much capacity for things mattering to them someone has?”. Clearly we agree that different animals have different capacities here. Even if a fish managed to somehow communicate “the only thing I want is fish food”, I am going to spend much less money on fulfilling that desire of theirs than I am going to spend on fulfilling an equivalent desire from another human.
Given that you didn’t explain that difference, I don’t currently understand how to apply this principle that you are talking about practically, since its definition seems to have a hole exactly the shape of the question you purported it would answer.
That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
Yeah, I think there are a bunch of different ways to answer this question, and active research on it, but I feel like the answer here does indeed depend on empirical details and there is no central guiding principle that we are confident in that gives us one specific answer.
Like, I think the correct defense is to just be straightforward and say “look, I think different people are basically worth the same, since cognitive variance just isn’t that high”. I just don’t think there is a core principle of EA that would prevent someone from believing that people who have a substantially different cognitive makeup would also deserve less or more moral consideration (though the game-theory here also often makes it so that you should still trade with them in a way that evens stuff out, though it’s not guaranteed).
I personally don’t find hedonic utilitarianism very compelling (and I think this is true for a lot of EA), so am not super interested in valence-based approaches to answering this question, though I am still glad about the work Rethink is doing since I still think it helps me think about how to answer this question in-general.
Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why it is that people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant still, despite this, also describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.
Just to clarify, I am a utilitarian, approximately, just not a hedonic utilitarian.
Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on their own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstream view within EA. If it means (as MacAskill suggests it might, in his latest book) that the total well-being of fish outweighs the total well-being of humanity, then this is not an objectionable conclusion (and to think otherwise would be speciesist, on this view).
It’s interesting to read this critique of an EVF/CEA press statement through the lens of EVF/CEA’s own fidelity model, which emphasizes the problems/challenges with communicating EA ideas in low-bandwidth channels.
I don’t agree with the specific critique here, but would be curious as to how the decision to publish a near-tweet-level public statement fits into the fidelity model.
in addition to all of this, the statement compounds the already existent trust problem EA has. It was already extremely bad in the aftermath of FTX that people were running to journos to leak them screenshots from private EA governance channels (vide that New Yorker piece). You can’t trust people in an organization or culture who all start briefing the press against each other the minute the chips are down! Now we have CEA publicly knifing a long-term colleague and movement founder figure with this unbelievably short and brutal statement, more or less a complete disowning, when really they needed to say nothing at all, or at least nothing right now.
When your whole movement is founded on the idea of utility maximizing, trust is already impaired because you forever feel that you’re only going to be backed for as long as you’re perceived useful: virtues such as loyalty and friendship are not really important in the mainstream EA ethical framework. It’s already discomfiting enough to feel that EAs might slit your throat in exchange for the lives of a million chickens, but when they appear to metaphorically be quite prepared to slit each other’s throats for much less, it’s even worse!
Sabs—I agree. EAs need to learn much better PR crisis management skills, and apply them carefully, soberly, and expertly.
Putting out very short, reactive, panicked statements that publicly disavow key founders of our movement is not a constructive strategy for defending a movement against hostile outsiders, or promoting trust within the movement, or encouraging ethical self-reflection among movement members.
I’ve seen this error again, and again, and again, in academia—when administrators panic about some public blowback about something someone has allegedly done. We should be better than that.
Agree. At a meta-level, I was disappointed by the seemingly panicked and reactive nature of the statement. The statement is bad, and so, it seems, is the process that produced it.
Hm, I don’t much agree with this because I think the statement is basically consistent with Bostrom’s own apology. (Though it can still be rough to have other people agree with your criticisms of yourself).
Trust does not mean circling the wagons and remaining silent about seriously bad behavior. That kind of “trust” would be toxic to community health because it would privilege the comfort of the leader who made a racist comment over maintaining a safe, healthy community for everyone else.
Being a leader means accepting more scrutiny and criticism of your actions, not getting a pass because you’re a “long-term colleague and movement founder figure.”
Sounds like you feel pretty strongly about this and feel like this was very poorly communicated. What would you have preferred the statement to be instead?
I would also like to add to the other comments that EA Intro Fellowship has included a book section titled “All Animals Are Equal” for quite some time.
Another statement that “people are equal” from GWWC.
Here’s Bostrom’s letter about it (along with the email) for context: https://nickbostrom.com/oldemail.pdf
I have to be honest that I’m disappointed in this message. I’m not so much disappointed that you wrote a message along these lines, but in the adoption of perfect PR speak when communicating with the community. I would prefer a much more authentic message that reads like it was written by an actual human (not the PR-speak formula), even if that risks subjecting the EA movement to additional criticism; I suspect this would also be more impactful long term. It is much more important to maintain trust with your community than to worry about what outsiders think, especially since many of our critics will be opposed to us no matter what we do.
I don’t understand the importance of CEA saying anything to the community about this particular matter. We can all read Bostrom’s statement and draw our own conclusions; CEA has—to my knowledge—no special knowledge about or insight into this situation. The “PR speak” seems designed to ensure that each potentially quotable sentence includes a clear rejection of the racist language in question.
I would be fine if CEA hadn’t put out a message at all, but this sets a bad precedent. Robotic PR messaging has never been the kind of relationship that CEA has had with the community up until now.
I think Jason’s point is more that CEA’s statement isn’t really an attempt to ‘communicate with the EA community’, so your criticisms don’t apply in this case. E.g. this statement could be something for EAs to link to when talking about it with people looking in, who are trying to make an informed judgement (i.e. busy, neutral people lacking information, not committed critics).
If the message was written for outsiders, then I would encourage them not to post it on the EA forum.
I don’t see the value in CEA not posting its press statements to the forum. That just means that people have to regularly check another website if they want to see whether a statement has been issued. On the other hand, if you do not want to engage with press statements, it only takes two seconds to read the post title and decide not to engage with content you think is inappropriate for the forum. Given the historical frequency of such comments, that’s… thirty seconds a year?
The forum seems as good a place as any?
We are not the target audience here. If the PR-speak is interfering with something CEA needs to say to the community, that’s one thing. But if there’s no need for a community message at all I don’t see how the PR-speak message is interfering with community communication.
In what way do you feel like CEA’s statement is counterproductive to maintaining trust?
Because PR messages are so standardised they effectively just follow a formula. They aren’t authentic at all and it raises the question of to what extent other messages are representative of CEA’s true beliefs.
Some context:
Bostrom’s problematic email was written in 1996.
Bostrom claims to have apologised for the email back in 1996, within 24 hours after sending it. If that’s right, then the 2023 message is his second apology.
I am disappointed that the CEA statement does not include these details.
As far as I can tell this is his “apology” from back then.
It’s embarrassing for Bostrom to claim this as an apology.
Did I link to an incorrect e-mail, or why else does this comment have −6 agreement karma? In general it would be helpful if people explained their downvotes.
Bostrom’s email was horrible, but I think it’s unreasonable on CEA’s part to make this short statement without mentioning that the email was written 26 years ago, as part of a discussion about offending people.
Bostrom’s 2023 letter spends more time defending his 1996 beliefs than anything else.
He chose to “get out in front” in a way that raises far more questions than it settles, & I think this statement rightly holds him accountable for that.
Bostrom today clearly disavows using the N-word in ’96. Does he still believe in some form of white superiority? I hope not! But right now, the ambivalence & vagueness of his 2023 letter is working like a dog whistle to western chauvinists, & I hope he figures that out & denounces them as strongly as this statement does.
I wonder why CEA feels the need to comment on what seems to be a personal matter not relating to CEA programming. While I understand how seductive it can be to criticize someone who has said something reprehensible, especially when brought to light with a clumsily worded apology, I wonder if this really relates to CEA, or whether this would have been a good time to practice the Virtue of Silence.
Hello Peter, I will offer my perspective as a relative outsider who is not formally aligned with EA in any way but finds the general principle of “attempting to do good well” compelling and (e.g.) donates to Give Directly. I found Bostrom’s explanation very offputting and am relieved that an EA institution has commented to confirm that racism is not welcome within EA. Given Bostrom’s stature within the movement, I would have taken a lack of institutional comment as a tacit condonation and/or determination that it is more valuable to avoid controversy than to ensure that people of colour feel welcome within EA.
While AI safety has sucked up a lot of attention recently, EA’s most famous and most well-funded efforts have been focused in Africa- malaria bednets, deworming, vitamin supplementation, etc etc. There’s a post at least monthly, maybe weekly, about how EA isn’t diverse enough, that it’s a tragedy, and how they can and should improve that.
I find it difficult to consider that the majority of EA actions could possibly be outweighed by one person’s terribly stupid statement almost three decades ago, no matter how high-status that person is within the community. I find it difficult to think that a movement that has spent hundreds of millions of dollars improving the lives of the less-fortunate (mostly in Africa, but there was also that $300M experiment in criminal justice reform that would mostly help black people if it worked) has a racism problem, and that their hundreds of millions of dollars of actions don’t speak louder than one goofus and his poor apology.
But if I try to put myself in that headspace, where this movement does have a serious racism problem despite all the evidence suggesting the contrary, one paragraph of PR-speak is not going to be the least bit comforting.
Could you, or any readers, help me understand that mindset better?
Hello Robert, I am stepping back from this forum but as you’ve replied to me directly I will endeavour to help you understand my viewpoint. I will use italics as you seem to have a high level of belief in their ability to improve written communication.
If the only form that racism took was hatred of black people, then the evidence you present would be persuasive that EA as a movement as a whole does not condone racism.
However: racism also encompasses the belief that certain races are inferior. Belief that black people are stupider than white people, for example, is not incompatible with sending aid to Africa.
Therefore, I was relieved to see an EA institution explicitly confirm that it does not condone racism.
Hope this helps.
EDIT: Did you mean to write “not compatible”? I didn’t notice this until after I typed my reply. I thought you were claiming that sending aid to Africa was incompatible. If you could clarify, I’ll add my wall of text back.
Hello, I did mean to type “not incompatible”- I think we are largely in agreement.
Ah okay, sorry, I thought you meant the opposite. Thank you!
The community needs to split. Basically, high cognitive decouplers and low decouplers can’t live together online anymore. And if the EA brand is going to attack the high-decoupler way of thinking for the sake of making people like britomart happy (which might be the right choice), there needs to be a new community for altruists who are oriented towards working through any argument themselves, no matter what it implies.
Mainly, the EA brand and community are tools for doing good, but currently the way they are functioning no longer works quite right.
Probably because CEA is problematic, and because the recent recruitment drives brought in a lot of people who weren’t coming from the rationalist meme space, and this naturally leads to culture clashes.
Also maybe things are still okay off the forums.
This seems like a very emotionally-driven response. If you look at the situation rationally, setting aside your instinctive defensiveness, I think you’ll realise what a wild overreaction proposing a schism is. I know putting emotion to one side can be really challenging, especially when you feel threatened, but I really suggest making the effort so that you don’t embarrass yourself with clearly overblown statements like “high cognitive decouplers and low decouplers can’t live together online anymore.”
I mean the post was emotionally driven response to the current situation. The idea that EA should split up is something that I’ve been thinking about for a while, and nearly wrote a top level post on, before I didn’t write it because I decided that I wasn’t actually confident enough or making a sufficiently useful point in a sufficiently useful way.
The idea that anything about this should make you think less of Bostrom as a thinker is nonsense—though his ‘apology’ makes me doubt his practical judgement, since it was written in a way that was not in the slightest optimized for ending the controversy.
People who see this incident as saying something about how interested we should be reading Bostrom’s next paper, or about how much we should praise him for the important work he has done are coming from a culture that doesn’t fit very well with the way my mind works.
It’s an overreaction with a kernel of truth: there’s too much difference for everyone to feel fairly “represented” by a single PR source.
I don’t see proposing a schism as necessarily a “wild overreaction”, but their certainty does indicate a likely emotionally-driven response.
I place a decent probability on the current state of drama being the new normal, in which case we need to do something. And there are other things we can try first, but I wouldn’t be surprised if Tim turned out to be right in the end.
IMO, I actually agree with the point that high decouplers and low decouplers can’t live together very well. Pessimistically speaking, EA probably has to split because of incompatible cultures, judging from another comment I read:
Moderator here. The comment you quote describes the personal experience of someone who transitioned genders, but the selection you quoted doesn’t make that clear (or other nuance like that the commenter has Asperger’s and isn’t a native English speaker), which makes your quote unnecessarily inflammatory. Could you remove the quote and instead link to the comment? (Or quote the comment in its entirety.)
Thank you, this is very illuminating. You argue that:
Culture has become “feminised” (I assume this means it has started doing more of the housework)
This feminisation means that EAs are discouraged from engaging with DIFFICULT but IMPORTANT questions, such as “Are white men the smartest of them all.”
One potential solution—this, to me, was the apotheosis of your comment—is “a Scott Alexander megapost”
Spending time on this forum has clarified for me that although I support in principle many of the stated aims of the EA movement, I don’t wish to participate in the culture, which is hostile to anyone who refuses to make a fetish of rationality, while refusing to consider the ways in which such fetishisation is itself irrational. So: so long, and thanks for all the fish ✌️
While I disagree with a big chunk of the comment you’re responding to (and don’t really want to engage with the claims stated there), I think your comment misinterprets the parent comment in a way that is uncharitable (e.g. “‘feminised’ (I assume this means it has started doing more of the housework)” — this is not what the parent comment is talking about).
Please don’t do that, folks.
The parenthetical was a joke. I won’t do it again.
Ok, thank you!
I think this is very related to CEA.
Influential EA philosophers having used racial slurs and saying they’re unsure about IQ and race is hurtful to black EAs, hurtful to black people outside EA and bad for future diversity in EA.
Although this shouldn’t be the primary concern, it is additionally also very harmful to the reputation of other individuals, organisations and initiatives associated with EA, potentially reducing their impact.
It’s also pseudoscience.
My gods, I don’t understand why people are downvoting this, actually.
A pretty large fraction of engaged EAs believe in HBD. It’s quite common the deeper you go into the community.
Okay, if there’s anyone here who actually believes in HBD, here’s a couple reasons why you shouldn’t:
Human biodiversity is actually pretty low. Homo sapiens has been through a number of bottlenecks.
Human migrations over the last thousand years have been such that literally everyone on Earth is a descendant of literally everyone that lived 7000 years ago whose offspring didn’t die out. This is known as the Identical Ancestors Point.
Africans have more genetic diversity than literally every other ethnicity on earth taken together, so any classification that separates “Africans” from other groups is going to be suspect.
Race isn’t a valid construct, genetically speaking. It’s not well defined. Most of the definitions are based on self reports or continents of origin, when we know what is considered “black” in the US may not be so in, say, Brazil, or that many people from Africa can very well be considered “white”.
Intelligence is not well defined. There’s no single definition of intelligence on which people from different fields can agree.
IQ has a number of flaws. It is by definition Gaussian without having appeared empirically first and the g construct itself has almost certainly no neurological basis and is purely an artifact of factor analysis.
Twin studies are flawed in methodology. Twins, even identical twins, simply do not have exactly the same DNA.
Evolution isn’t just mutations and natural selection. Not every trait is an adaptation.
Heritability does not imply genetic determinism. Many things are heritable and do not involve genes. These include epigenetic mechanisms, microbiota, or even environmental stress on germinal cells.
We don’t mate randomly, which is an assumption in many genetics studies.
HBD is not generally accepted in academia.
Many public HBD figures have been found guilty of fraud. Cyril Burt would literally forge results, while Lynn would take the average of two neighboring countries’ IQs in order to derive “data” for a country whose national IQ was unknown.
Special thanks to these threads for compiling most of the information
Now you might want to attack one of these (and feel free to send me a message), but even if you’re right, that would still leave more than enough reasons to stay away from HBD.
This list is a good example of the sort of arguments that look persuasive to those already opposed to HBD, but can push people on the fence towards accepting it, so it may be net-negative from your perspective. This is what has happened to me, and I’ll elaborate on why – so that you may rethink your approach, if nothing else.
Disclaimer: I am a non-Western person with few traits worth mentioning. I identify with the rationalist tradition as established on LW, feel sympathy for the ideal of effective altruism, respect Bostrom despite some disagreements, have donated to GiveWell charities on EA advice, but I have not participated more directly. Seeing the drama, people expressing disappointment and threatening to leave the community, and the volume of meta-discussion, I feel like clarifying a few details that may be hard to notice from within your current culture, and hopefully helping you mend the fracture that is currently getting filled with the race-iq stuff.
All else being equal, people who hang around such communities prefer consistent models (indeed, utilitarianism itself is a radical solution to inconsistencies in other ethical theories). This discourse is suffused with intellectual inconsistency, on many levels of varying contentiousness.
On the faint level of moral intuitions, there’s the strange beeline from the poorly supported prior that normalization of beliefs like Bostrom’s will lead to bad effects like discrimination, to the consequentialist decision against entertaining them. It is not clear that Bostrom’s beliefs are harmful in this way, or more likely to encourage a net increase in discrimination than their negation. Arguments from historical precedent have big problems with them: they do not address the direction of causality, or the fact that different cultures can have different reactions to the same information. As it is not considered normal in the modern culture to equate moral worth and ability for individuals of any group, it can be expected that the same will hold should the difference in ability between groups be acknowledged. Arguments from personal distress of users are valid points with regard to community health, but obviously (I hope) incommensurate with the question of global utility, and do not directly weigh on it. So the consequentialist case for not taking Bostrom’s belief in good faith is already suspect.
Perhaps the most obvious level is that specific failings Bostrom is credibly accused of (racist attitude, belief in the racial IQ difference, belief in the validity of IQ measurement) do not depend on HBD. (He has done himself no favors by bringing up eugenics). So it’s bizarre to see many people denounce his beliefs in toto, but support this denunciation with environmentalist explanations of the IQ gap – in effect, conceding the specific factual claim in Bostrom’s old email, or at least demonstrating that it is not beyond the pale by their own standard. To be clear: it is not in doubt that the IQ gap between Black and White Americans exists; and that it is as predictive of outcomes associated with cognitive capacity as IQ measurement is (which is to say, highly predictive – and this, too, is mainstream consensus). People who act indignant about such statements send a huge red flag, demonstrating either general unwillingness to educate themselves or irrational ideological bias on this specific matter. People who bring up irrelevant anti-HBD talking points demonstrate confused reasoning.
Less obviously, the problem is portraying this as an open-and-shut case – a portrayal which doesn’t really survive scrutiny. I don’t know how to put this nicely, but what your list most reminds me of is polemics of sophisticated Creationists in the heyday of New Atheism. It’s a mix of true but irrelevant, misleadingly phrased, blatantly misinterpreting and patently false claims. Instrumentally they are gotchas; structurally, opening moves aimed at people who are not familiar with the debate and are not aware that all those issues had long been answered, and the debate is incredibly mature. Of course, in all such debates both sides can assert that they’ve solved every vulnerability, and this forum isn’t some HBD Central. So I won’t compete in citations, and will just address things a total layman, provided he’s minimally erudite, napkin-numerate, capable of critical thought and aware of basic logical fallacies, could spot, if he were so inclined. You say “Now you might want to attack one of these (and feel free to send me a message), but even if you’re right, that would still leave more than enough reasons to stay away from HBD.” What if we go through every one of these?
Human biodiversity is actually pretty low. Homo sapiens has been through a number of bottlenecks. – maybe true but vacuous. “Pretty” low relative to what baseline? How would we even tell – do we have anything like IQ for other species? Does this genetic fact establish some prior for the magnitude in differences in measurable phenotypic traits between groups? What about individuals? What we do know that people with a priori negligible “biodiversity” – as in, children in ethnically endogamous marriages, even in isolated villages – routinely have large differences in all traits of interest. So how much diversity is needed, really, to introduce some measurable population-level divergence? Likewise for the point about bottlenecks, what of it? Should our layman just conclude that this is an authoritative-sounding technical term?
Human migrations over the last thousand years have been such that literally everyone on Earth is a descendant of literally everyone that lived 7000 years ago whose offspring didn’t die out. This is known as the Identical Ancestors Point. – grossly misleading/false, and doesn’t pass basic sanity check. Is every single unadmixed Indigenous Australian really a descendant of “literally everyone” 7000 years ago, same as every single Han Chinese? But, looking it up, National Geographic says that “Aboriginal Australians are all related to a common ancestor who was a member of a distinct population that emerged on the mainland about 50,000 years ago”, which implies people of other populations are not all related to him. Aha, here’s where your figure comes from: “Rohde, Olson and Chang showed through simulations that, given the false assumption of random mate choice without geographic barriers, the Identical Ancestors Point for all humans would be surprisingly recent, on the order of 5,000-15,000 years ago.” But it is indeed false, there were barriers for the entire history of our species, such as oceans; and how do migrations of the last millennium negate it? More importantly, it’s a quantitative issue. Your link goes on to say: “Thus, even though the Norwegian and Japanese person share the same set of ancestors, these ancestors appear in their family tree in dramatically different proportions. A Japanese person in 5000 BC with present-day descendants will likely appear trillions of times in a modern-day Japanese person’s family tree, but might appear only one time in a Norwegian person’s family tree.” Seeing as every specimen can have novel genetic variants, this should allow for arbitrary magnitude of divergence, no?
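For what it’s worth, the mechanism behind the “surprisingly recent” estimate is simple pedigree collapse: ancestor slots double every generation, so within a few dozen generations they exceed any plausible historical population, forcing massive overlap. A toy sketch of that counting argument (the population figure is an illustrative round number, not data from the Rohde, Olson and Chang paper, and this is not their actual simulation):

```python
# Pedigree collapse: each person has 2**g ancestor slots g generations
# back, which must be filled from a finite population, so slots soon
# outnumber people and distinct ancestors must repeat.
pop = 500_000_000  # rough order-of-magnitude historical world population

gen = 0
slots = 1
while slots < pop:
    gen += 1
    slots = 2 ** gen  # ancestor slots double each generation back

# At ~30 years per generation this is on the order of a millennium,
# which is why overlap alone cannot settle the quantitative question
# of *proportions* raised in the comment above.
print(gen)  # prints 29
```

Note that this only shows slots must overlap, which is compatible with the quoted point that a given ancestor can appear trillions of times in one family tree and once in another.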
Africans have more genetic diversity than literally every other ethnicity on earth taken together, so any classification that separates “Africans” from other groups is going to be suspect. – misleading. The money quote is: “Tishkoff and her colleagues studied DNA markers from around the planet, identifying 14 “ancestral clusters” for all of humanity. Nine of those clusters are in Africa. “You’re seeing more diversity in one continent than across the globe,” Tishkoff said.” Okay, let’s assume that those 9 clusters are meaningfully different (as are the other 5). When people talk of “Africans”, in practice, whom do they refer to? Looking it up,
4. Race isn’t a valid construct, genetically speaking. It’s not well defined. – but aren’t we already talking of genomic ancestry? So this is a true but irrelevant objection. Now, people are of course free to believe that conventional self-reported “races”, which are, as is often correctly said, social constructs, do not correspond to continental-level ancestry – although noisily in many cases. I think this is pretty absurd on its face, but anyway, Googling tells us “In mothers self-identified as Black and White, the imputed ancestry proportions were 77.6% African and 75.1% European respectively” in a “diverse” NYC sample, and I’d expect less cosmopolitan groups to show higher figures. However unfit race is for purposes of cutting-edge research, in the aggregate data it is robustly aligned with ancestry, which is well-defined.
5. Intelligence is not well defined. There’s no single definition of intelligence on which people from different fields can agree. – blatantly misinterpreting. The cited paper states: ”...Nevertheless, some definitions are clearly more concise, precise and general than others. Furthermore, it is clear that many of the definitions listed above are strongly related to each other and share many common features” and goes on to propose a unified definition: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” Features such as the ability to learn and adapt, or to understand, are implicit in the above definition as these capacities enable an agent to succeed in a wide range of environments. Also it is not clear why we’d even need people from different fields (in this case, psychology and AI research!) to agree on a definition of intelligence to have a useful measurement of human smarts. And this is what has happened with IQ:
6. IQ has a number of flaws. It is by definition Gaussian without having appeared empirically first and the g construct itself has almost certainly no neurological basis and is purely an artifact of factor analysis. – this is just some Gish Gallop. To begin with, I don’t see your link supporting your summarization – except the vague “number of flaws”. If I may, where have you taken this list from? In any case, anything but God has a number of flaws; your link says that “According to Weiten, ‘IQ tests are valid measures of the kind of intelligence necessary to do well in academic work.’” and “clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes”. It doesn’t seem like there’s any scientific objection as to the validity of IQ as a measurement of what’s casually called smarts and understood to be smarts in the context of this discussion – even though there are weird attempts to drown this fact in caveats. Why is the part about assumed Gaussian even relevant? What would it mean for g to have a neurological basis, and why would that matter in the discussion of HBD? …And the part about g being “purely an artifact of factor analysis” is plain false, far as I can tell. It comes from Cosma Shalizi’s essay that misstates the reason for the existence of positive manifold. “If I take any group of variables which are positively correlated, there will, as a matter of algebraic necessity, be a single dominant general factor… Since intelligence tests are made to correlate with each other, it follows trivially that there must appear to be a general factor of intelligence.” This is just a lie: a great deal of effort has been devoted to making cognitive tests comprehensive and diverse assessments of ability, but positive correlations pop out on their own, even in research informed by Shalizi’s assumptions, e.g.
“The WJ-R was developed based on the idea that the g factor is a statistical artifact with no psychological relevance. Nevertheless, all of its subtests are intercorrelated and, when factor analyzed, it reveals a general factor that is no less prominent than those of more traditional IQ tests”. And ”...All 861 correlations are positive. Subtests of each IQ battery correlate positively not only with each other but also with the subtests of the other IQ batteries. This is, of course, something that the developers of the three different batteries could not have planned – and even if they could have, they would not have had any reason to do so, given their different theoretical presuppositions.”
7. Twin studies are flawed in methodology. Twins, even identical twins, simply do not have exactly the same DNA. – again, misinterpreting; there are flaws but the method is not summarily “flawed” just because a section about flaws exists. The first link is a list of objections but in no way does it show or argue that they are decisive, or even apply at all to current methods (there are “responses to critiques” subsections). The second is apparently irrelevant, and was already addressed by another user.
8. Evolution isn’t just mutations and natural selection. Not every trait is an adaptation. (a link to Wiki on “Evolution – Evolutionary processes”) – …okay but how does this even support your case? I’m honestly unsure what the idea here is. Taken literally, your summary suggests that evolution can produce maladaptive changes, so we cannot assume that all (or any) populations will be maximally fit (for their environment). This is a pro-eugenicist take, if anything. Whereas the page itself discusses mechanisms of change in allele frequency and does not have any clear impact on the validity of HBD one way or another.
9. Heritability does not imply genetic determinism. Many things are heritable and do not involve genes. These include epigenetic mechanisms, microbiota, or even environmental stress on germinal cells. – irrelevant/false. The link is to “Heritability – Controversies” with some nitpicks of unclear truth value. The second is a general overview of possible issues with heritability estimates. It does not weigh in on HBD and accepts the premise of variable genetic contributions to human intelligence: As a case in point, consider that both genes and environment have the potential to influence intelligence. Heritability could increase if genetic variation increases, causing individuals to show more phenotypic variation, like showing different levels of intelligence. On the other hand, heritability might also increase if the environmental variation decreases, causing individuals to show less phenotypic variation. This says, concretely, that in more equal environments we will observe more true genetic effects on variation in intelligence, so whatever differences in genetic effects on this trait there are between groups, they will become more pronounced. By the way this is terrible for the anti-HBD position because it means that the state of perfect environmental equality – one could say equality of opportunity – will collapse into genetic determinism (modulo random noise). Your own idea seems to be that non-genetic mechanisms of apparent heritability can be interrupted by a positive environmental intervention. What share of “heritable” variance can it explain, at a maximum? Like, concretely, to what extent do you think the racial IQ gap is explained by microbiota, epigenetic mechanisms and environmental stress on germinal cells? Those are all quantifiable and falsifiable claims, but you just gesture at them. At this point, a dedicated layman looks it up and sees that they can explain very little indeed.
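The quoted point that heritability can rise simply because environments become more uniform follows directly from heritability being a variance ratio. A minimal sketch under the simplest possible model (no gene-environment covariance or interaction, which real estimates must worry about):

```python
def heritability(v_g: float, v_e: float) -> float:
    """Heritability as the genetic share of total phenotypic variance,
    in the simplified model V_P = V_G + V_E (no GxE, no covariance)."""
    return v_g / (v_g + v_e)

# Hold genetic variance fixed and shrink environmental variance:
print(heritability(1.0, 1.0))   # prints 0.5
print(heritability(1.0, 0.25))  # prints 0.8
```

Nothing about the genes changed between the two lines; only the environmental spread did, which is exactly the “equal environments collapse into genetic determinism (modulo noise)” point made above.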
10. We don’t mate randomly, which is an assumption in many genetics studies. – irrelevant applause lights, “genetics bad”. Which studies, and does this matter for HBD? I’ve watched the video; it discusses interactions between psychiatric disorders and such, and states that genetic correlations between traits may be inflated by assortative mating (i.e. people high in trait X marry people high in trait Y). Genetic correlation “is defined as the proportion of the heritability that is shared between two traits divided by the square root of the product of the heritability for each trait”. What is meant here, concretely? Ancestry is not really a “heritable trait”, is it? And race is just a category, plus a bad proxy for ancestry, as far as HBD is concerned.
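The verbal definition quoted above corresponds to the standard formula for genetic correlation (my transcription of the usual form; the video’s notation may differ):

```latex
r_g \;=\; \frac{\operatorname{cov}_g(X, Y)}{\sqrt{\sigma^2_{g_X}\,\sigma^2_{g_Y}}}
```

Dividing numerator and denominator by the phenotypic variances recovers the quoted phrasing: shared heritability over the square root of the product of the two traits’ heritabilities.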
11. HBD is not generally accepted in academia. – this is just an appeal to authority, plus misleading. It’s a single highly technical paper by some Kevin Bird, “Department of Horticulture Michigan State University”, can it be considered an authoritative source on what academia thinks? And from the abstract, it attacks a very strong form of HBD reasoning, using data that cannot plausibly be conclusive: Evidence for selection was evaluated using an excess variance test. Education associated variants were further evaluated for signals of selection by testing for excess genetic differentiation (Fst). Does it strike you as plausible that we know enough about “education associated variants” to impute effects of prehistoric selection on intelligence? This ought to mean that the science of genetics of intelligence is vastly more mature than people think, than you suggest, too, and that within-group intelligence heritability is understood really well! Why hasn’t this made the news yet? (And how does this address obvious low-tech HBD arguments, such as admixture studies and adoption studies?)
12. Many public HBD figures have been found guilty of fraud. Cyril Burt would literally forge results, while Lynn would take the average of two neighboring countries’ IQs in order to derive “data” for a country’s unknown national IQ. – That’s an isolated demand for rigor. What field doesn’t commit fraud? Were public anti-HBD figures never found guilty of fraud? Is the fraud rate different enough to affect our priors? And your link does not show that Burt’s forgery was positively proven; rather, it admits that figures of heritability arrived at by independent researchers do not differ from Burt’s, so why should we care? Assuming that a layman could track it down from here, I’ll allow myself to quote Richard Haier (The Neuroscience of Intelligence, Cambridge University Press, 2017): “Subsequent twin studies done by different investigators around the world with large samples arrive at an average value for the correlation of intelligence scores among identical twins raised apart of .75 (Plomin & Petrill, 1997). Burt’s value was .77. For comparison, based on 19 studies ranging in sample sizes between 26 and 1,300 identical twin pairs, the average value for identical twins raised together is about .86 (see Loehlin & Nichols, 1976, table 4.10, p. 39)… Thus, the .77 ‘fraud’ ends with recognition of overwhelming data from independent researchers that are fully consistent with Burt’s analyses, flawed as they may have been. Any single study, or any one researcher, can be flawed, but the basic conclusion that genes play an important role in intelligence is consistently supported by data from numerous studies of twins, adoptees, and adopted twins. This is an excellent example of looking at the weight of evidence (recall my three laws from the Preface: no story is simple; no one study is definitive; it takes many years to sort out conflicting and inconsistent findings and establish a weight of evidence). … The weight of evidence summarized in this chapter leaves no reasonable doubt. Only extreme ideologues are still in denial.”
As for Lynn’s country data – well, the same logic applies. Do we have any more trustworthy data? Does it refute Lynn’s? Then why not just refer to it instead? Please don’t say that it’s not very interesting and nobody has bothered to collect proper measurements: the relationship between IQ and race (or rather, ethnicity) is literally the most painful question in modern science, and it’s evident from such dramas that a great many researchers are emotionally invested in proving it wrong.
Ultimately, exposure to this sort of content has done for me what it has done for this person:
I want to make it perfectly clear: those question marks in my point-by-point do not actually indicate uncertainty. They could just as well have been references to papers. The field really is about as advanced as Bird’s study suggests – only in the direction he disapproves of. But this isn’t the place for it; surely people can go to some edgier venue and ask for receipts. The point I’m trying to make is: you say “Okay, if there’s anyone here who actually believes in HBD, here’s a couple reasons why you shouldn’t.” On an ignorant but moderately skeptical person, your little list can, and likely will, have an effect that’s the opposite of what you intend to achieve. To anyone “who actually believes in HBD”, it’s utterly unconvincing. If I may be so blunt, it’s almost as sad as the quasi-scientific gotchas of flat-earthers.
And this is how all anti-HBD rhetoric is, in my experience. It crumbles under basic scrutiny; links do not show what they are purported to show; there are simple misunderstandings of what terms mean; there is no coherent epistemology or a single model; there is suppression of inconvenient evidence; there is substitution of evidence with confident op-eds in Vox from people who are supposed to be experts (but whose legitimate work doesn’t support their confident claims); there are cascades of internally inconsistent Gish gallops and other fallacies; worst of all, the reader is assumed to just not be all that bright. It’s a collection of purely reactive objections that might come across as persuasive to like-minded people, but are not battle-tested – and they indicate a general unwillingness to test one’s beliefs.
I expect very little payoff from this labor. But it would be nice if EAs were to become a little more reserved on this topic, and at least stopped turning off potential recruits with irrefutable displays of irrationality.
“84% of surveyed intelligence researchers believe the gaps are at least partially genetic.”[1] This statement is not just an appeal to authority, it is also inaccurate.
https://www.sciencedirect.com/science/article/abs/pii/S0160289619301886
Why did you reply to MissionCriticalBit when it was I who made that claim? I almost didn’t see it.
Also, pointing out that the academics who study this stuff for a living don’t believe in it is not fallacious, but rather a very useful piece of information.
Anyway, I wanted to give the HBDers another shot, so I downloaded the survey (can we all agree that paywalls for publicly funded research are bullshit?) and I have two important things to note: genetic gaps are not equivalent to racial gaps, and the survey itself admits it is unrepresentative.
It was an internet survey, it had a high nonresponse rate, and its respondents differ from the field as a whole – which heavily biases the results in favor of your position.
EDIT: To respond to missioncriticalbit below. My comment was about the sentence “HBD is not generally accepted in academia”. The reason I can’t show you a survey demonstrating that is the same reason I can’t show you a survey showing that zoologists don’t believe in unicorns: they don’t engage with it, so there is no survey available (even the bad survey by the anonymous rationalist is not about HBD). But I don’t want to make an assertion without citing anything, so what is the best available option? How about an example of a professional biologist with no conflict of interest using publicly available data to create a well-received paper, viewed more than 12,000 times, that clearly rejects HBD.
Missioncriticalbit just makes assertions without citing anything. The reason I don’t respond and refused to continue reading his reply is not that I am afraid, but that he hadn’t cited anything, didn’t engage with my writing, and outright insulted me.
The reason I respond in an edit instead of a reply is because the HBDers have removed half a dozen of my latest comments from the frontpage while taking away a big chunk of my voting-power on this forum. I’m not inclined to give them another way to take away my voting-power, but I don’t want to silence myself, so using the edit button is my workaround.
First, that depends on what you mean by “this stuff”; Bird does not study intelligence or behavioral genetics for a living – he’s a plant geneticist. Skewed though the survey may be, it’s probably more representative than a single non-expert.
Second, why do you suppose the non-response rate is so high and so skewed? And might it have something in common with your own refusal to continue our conversation on the merits of your list?
I suspect that professionals who prefer not to respond, rather than respond in the negative about genetic contributions to the IQ gap, are driven by contradictory impulses: they believe that the evidence doesn’t allow for a confident “100% environmental” response and, being scientists, have a problem with outright lying; but they also don’t want to give the impression of supporting socially unapproved beliefs, or of “validating” the very inquiry into this topic. So they’d rather wash their hands of the whole issue, and allow their less squeamish colleagues to give the impression of a moderate consensus in favor of genetic contribution.
Differential response within the survey is just as bad.
The response rate for the survey as a whole was about 20% (265 of 1345), and below 8% (102) for every individual question on which data was published across three papers (on international differences, the Flynn effect, and controversial issues).
On average, respondents attributed 47% of the U.S. black-white difference in IQ to genetic factors. On similar questions about cross-national differences, respondents on average attributed 20% of cognitive differences to genes. There were 86 responses on the U.S. question, and between 46 and 64 on the others.
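The headline rates here can be sanity-checked with quick arithmetic (a minimal sketch; the counts are the ones cited above from the survey papers):

```python
# Sanity-check the survey's response rates from the counts cited above.
total_invited = 1345
total_responded = 265       # overall responses
max_per_question = 102      # largest N for any published question
us_question_n = 86          # responses on the U.S. black-white question

overall_rate = total_responded / total_invited
per_question_rate = max_per_question / total_invited
us_share = us_question_n / total_invited

print(f"overall response rate:  {overall_rate:.1%}")      # ~19.7%
print(f"best per-question rate: {per_question_rate:.1%}") # ~7.6%
print(f"share of invitees behind the 47% figure: {us_share:.1%}")
```

In other words, the widely quoted 47% average rests on answers from well under a tenth of those invited.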
Steve Sailer’s blog was rated highest for accuracy in reporting on intelligence research—by far, not even in the ballpark of sources that got more ratings (those sources being exactly every mainstream English-language publication that was asked about). It was rated by 26 respondents.
The underlying data isn’t available, but this is all consistent with the (known) existence of a contingent of ISIR conference attendees who are likely to follow Sailer’s blog and share strong, idiosyncratic views on specifically U.S. racial differences in intelligence. The survey is not a credible indicator of expert consensus.
(More cynically, this contingent has a history of going to lengths to make their work appear more mainstream than it is. Overrepresenting them was a predictable outcome of distributing this survey. Heiner Rindermann, the first author on these papers, can hardly have failed to consider that. Of course, what you make of that may hinge on how legitimate you think their work is to begin with. Presumably they would argue that the mainstream goes to lengths to make their work seem fringe.)
Even if you think my reasons failed, why would that push you towards accepting it? HBD is a hypothesis about how the world works, so the burden of proof is on HBD, and giving a bad reason not to believe in HBD is not evidence for HBD. To give a very clear example: if someone says “I believe in unicorns”, and I say “no, unicorns do not exist, because 1+1=3”, that would fail to be evidence for unicorns not existing – but that does not mean it counts towards evidence for unicorns existing.
Thank you for donating to GiveWell! Unimportant nitpick that has always bothered me: LW has an empiricist tradition, the term ‘rationalist’ is a misnomer.
I wouldn’t say other ethical theories are internally inconsistent. They might have other attributes or conclusions that you think are bad, but the major ethical theories don’t have any inconsistencies as far as I can tell. Do you have an example? On the other hand, I do think Eliezer has some inconsistencies in his philosophy, although it’s hard to tell because he’s quite vague, doesn’t always use philosophical terminology (in fact he is very dismissive of the field as a whole) and has a tendency to reinvent the wheel instead (e.g. his “Requiredism” is what philosophers would call compatibilism). Now usually I wouldn’t mind that much, but since philosophy requires such precision of language if you don’t want to talk past each other, I do think this doesn’t work in his favor.
I would like to point out that my comment was not about Bostrom.
I mean even if you don’t know which way the arrow of causality points, that’s still an unnecessarily big risk. It’s not particularly altruistic to make statements that have that big a chance of helping racists. You could also spend your time… not doing that. Also even if you reject arguments from historical precedent there is still the entire field of linguistic racism.
Just because people won’t publicly state it doesn’t mean it doesn’t influence their thinking. Take for example the stereotype of the welfare queen. While not everyone will explicitly state ‘this person has a lower moral worth’ (although some will) the racist stereotyping does lead to black people being harmed both socially and economically. The myth of meritocracy is strong, and people who are seen as unable to ‘pull themselves up by their bootstraps’ are looked down upon.
What global utility? Racists want us to talk about this stuff; there are other correlations that are on firmer ground, have more global utility, and don’t fulfill the desires of racists.
If you had read my comments you would’ve seen that I didn’t respond to Bostrom, did respond to HBD, and did support the environmental explanation of the IQ gap.
My comment didn’t deny the existence of an IQ gap and my comment was responding to sapphire who was talking about HBD specifically and so it wasn’t “irrelevant anti-HBD talking point”. If you’re not engaging with what I actually write I’m starting to think that spending hours on this comment wasn’t the best use of my time.
Very civil. It will not surprise you to learn that this does not motivate me to keep reading.
Yeah I’m out.
*I’m going to spend my time on something else now.
This logic is only applicable to contrived scenarios where there is no prior knowledge at all – but you need some worldly knowledge to understand what both these hypotheses are about.
Crucially, there is the zero-sum nature of public debate. People deliberately publicizing reasons to not believe some politically laden hypothesis are not random sources of data found via unbiased search: they are expected to cherrypick damning weaknesses. They are also communicating standards of the intellectual tradition that stands by the opposing hypothesis. A rational layman starts with equal uncertainty about truth values of competing hypotheses, but learning that one side makes use of arguments that are blatantly unconvincing on grounds of mundane common sense can be taken as provisional evidence against their thesis even before increasing object-level certainty: poor epistemology is evidence against ability to discover truth, and low-quality cherrypicked arguments point to a comprehensively weak case. Again, consider beliefs generally known to be kooky, and what they bring to bear on the opposition. Their standard of rigor alone is discrediting to what they believe in.
Moreover, I’ve established that, upon checking, some of your links positively provide evidence in favor of HBD rather than against it – at least by the standard of evidence implicit in the phrasing of the list. Returning again to the Identical Ancestors Point: it is presented as an anti-HBD finding in the first place because it implies a very low prior for genetic divergence of populations, migrations somehow averaging it all out: “Human migrations over the last thousand years have been such that literally everyone on Earth is a descendant of literally everyone that lived 7000 years ago whose offspring didn’t die out.” (Is this the wrong takeaway? What, then, did you mean to say by adding it?) Looking into the actual paper, we see: “…For example, a present-day Norwegian generally owes the majority of his or her ancestry to people living in northern Europe at the IA point, and a very small portion to people living throughout the rest of the world. Furthermore, because DNA is inherited in relatively large segments from ancestors, an individual will receive little or no actual genetic inheritance from the vast majority of the ancestors living at the IA point.” Not only does this make the original argument invalid (even in a strong absolute sense – there can be zero common inheritance!) – it directly reinforces the HBD conjecture that long-term (i.e. pre-IAP) divergent local adaptation is relevant to current genetic (and trait) differences.
I agree that this is improper and irritating terminology, because doctrinally LW asserts its allegiance to empiricism, with all the talk about Bayes-updating on evidence and how rationalists must “win”. But in practice this isn’t so clear-cut: LW is fascinated with armchair thought experiments (which routinely count as evidence to update on), and all the attention devoted to infohazards, Pascal’s mugging, one-boxing, AI scenarios etc. suggests that they, as a living tradition, are not resilient to speculation the way pure empiricists – say, regular natural scientists – would be. So, not necessarily a misnomer.
They are internally consistent, but I think the point of ethical theory is to clarify the intuitively knowable essence of moral action for purposes of nontrivial decisionmaking, not to assert what morality is and derive an arbitrary decision rule from there. Utilitarianism is often criticized for things like the repugnant conclusion, yet non-utilitarian ethical theories routinely produce more grating outputs, because they fail to capture the most significant part of intuitive ethics, which is mostly about harm reduction under conditions of resource scarcity. They are less consistent with ethics given to us in lived experience, so to speak.
No, the extent of the purported risk matters. You are just falling back on the unsupported prior about cost-benefit ratio because you have preemptively excluded all factors that may change the qualitative conclusion of “not doing that”. To give a specific example: under the assumption that HBD is wrong, we must consider disparate outcomes to be a result of some discrimination and devote resources to alleviate it; but if HBD is actually right, this’d necessarily mean that our costly attempts to help low-performing groups are suboptimal or futile (as in, not effective), and that we will have unfairly attributed blame, harming other groups psychologically and materially. Then there are knock-on effects of harming science: for starters, fears of enabling racists can hold back genomic medicine (and population-specific treatment) by increasing hurdles to data collection and access. We do not have a priori knowledge as to which costs are negligible. On a more meta level, Scott Alexander’s parable comes to mind.
IMO it’s a weak argument, because for all the racism, black Americans still report the highest self-esteem of all racial groups, and theories of stereotype threat are apparently unsupported by high-quality data; so it isn’t clear what the odds are that some HBD research or whatever would harm people substantially. But even before that – there are laws against hate speech and discrimination, and they can be strengthened if needed; it seems very suboptimal to focus on not developing neutral knowledge only to deny hateful ideologies rhetorical ammo, instead of dealing with them directly. By the way, cannot racists point to censorship as sufficient evidence of their correctness, if their intent is to spin available facts to their benefit? Actually, doesn’t this enable them to – convincingly – claim that the facts are much worse than they are, that the genomic gap in cognitive ability is bigger than the non-zero gap we’d have found (and, I believe, have partially found) with proper research (which is currently prohibited)? And in any case, you have to put racism-driven harms in the context of the costs of pretending that HBD is certainly false – that is, under the assumption that we are “just not doing that” and have no clue whether it is or isn’t true.
As an aside, I am personally puzzled by the strong conviction of many that HBD becoming common knowledge could lead to the normalization of racial discrimination. This is a normative, not a scientific question. Societies with Social Darwinist values do not need HBD to embrace and exacerbate the status quo of disparate power. Societies with ethnocentric values opportunistically oppose and exploit ethnic outgroups regardless of relative merit. The mainstream modern value system depends on the premise of human rights, not equality of capability. We do not hold that it is normal to oppress individuals who are known to be below average in some morally neutral trait (except maybe for an expansive definition of “oppress” and clinical issues having to do with lack of legal capacity), we have a strong revulsion to identity-based discrimination, and we understand the unreasonableness of treating individuals on the basis of average values.
Uncharitably, in the case of EA, this concern may have to do with the strain (common to EA and LW) of conspiratorial elitism and distrust in the democratic process, and with the unconscious belief that intelligence does define moral worth. That’s… not a very popular belief. I would deeply hate it if my cognitive betters acted like they had greater moral worth than myself, and therefore, to be fair, I cannot deny equal moral worth to people of lower ability. Most people correctly believe that they aren’t brilliant, but they’re not so dull as to fail to arrive at this logic. There are some contingent factors that complicate the picture, but not fundamentally.
Without getting into the weeds of stereotype scholarship, the extent of claimed harms, and the irrational denial of the role of merit in achievement (the reasoning on that wiki page doesn’t even begin to address what would happen in a “proper” meritocratic society after a few generations, because it is premised on genes not contributing to achievement; this is a typical case of an unexamined anti-HBD prior leading to policy errors)… I’ll just say that in my opinion both of those issues, insofar as they harm anyone, have to do with beliefs about moral qualities. If the “Protestant ethic” is alive and prescribes vilification of people of lower ability in morally neutral traits, then that is a problem in its own right and beyond the scope of this conversation. Luckily, the Protestant ethic also encourages treating people on a case-by-case basis.
Crucially, the search for interventions that actually close the IQ gap. As it stands, we have picked the low-hanging fruit like lead exposure, malnutrition, iodine deficiency, parasites and such (in developed nations; I expect EA efforts in Africa to keep delivering on this front), and are left with pursuing dead ends of addressing iniquity, like the “food deserts” nonsense, or doubling down on stuff like school spending that has long since run into diminishing or zero returns and remains popular only because pointing out its inefficacy means risking being labeled racist. As Nathan Cofnas argues,
Ironically, Cofnas got in trouble for this. If the suppression starts this far upstream from the object level, how can our priors be trusted?
I sincerely doubt you can prove 1 or 2 (given that your critiques of relevant methodology weren’t persuasive), and it looks like assigning any value to 3, on its own, is pure spite that is best left out of effective altruism. Making racists mad is not, in fact, a positive good, fun as it may be.
Have read some. I explicitly say I’m addressing the state of discourse here, more than just your comment. I respond to you in particular when I quote specific passages. Sorry if that was unclear.
Again, this is not HBD Central, and it is sufficient to establish that there is legitimate uncertainty, so we cannot fall back on the comfortable prior that costs of repudiating HBD are negligible.
Well, I believe that misleading people, and even wasting people’s time on true-but-irrelevant, misleadingly phrased, blatantly misinterpreting and patently false claims, is a form of rudeness that’s extra obnoxious, because it craftily avoids the opprobrium one could earn with a trivial show of disrespect. It’s not fair to act indignant about an unflattering comparison after doing that. Even so, I’ve made peace with Brandolini’s law, and kept addressing those claims on the object level, to substantiate my “very civil” summary and so that “EAs were to become a little more reserved on this topic, and at least stopped turning off potential recruits with irrefutable displays of irrationality.” To be honest, your reaction isn’t wholly unexpected, but I did hope that I’ve been polite enough to merit some tolerance.
OK, but please recall that your stated desire is to persuade those who happen to believe in HBD to disbelieve it. Obviously you’ve failed in my case, but I maintain that flaming out like this is detrimental even as far as fence-sitters are concerned. I believe I’ve provided sufficient receipts for the purpose of showing how your list is inadequate.
It is really not hard to showboat on this topic, by citing from very clearly argued stuff like this or “authoritative” sources like that review or very technical recent papers or just by gesturing in the general direction of environmentalist rhetoric that is… the way I’ve described, and evident in, e.g., this condemnation of Cofnas, mired in (what I hope is obvious after my initial comment) logical fallacies and half-truths and raw indignation. Or one can just say that if this guy is challenged not by rational and empirical arguments but by being repeatedly called a pseudoscientist and getting a page full of personal attacks on him to the top of search results for his name (a page he responds to with an even pettier page), then he may get a lot of uncomfortable stuff right.
My point is not to showboat but to argue that people who pursue this anti-HBD rhetorical strategy, including you, are probably not succeeding, and are doing the community no favors.
I don’t want to engage with your arguments. I strongly think you’re wrong, but it seems much less relevant to what I can contribute (or generally want to engage with) than the fact that you’ve posted that comment and people have upvoted it.
I don’t understand how this can happen on the EA Forum. Why would anyone believing in this and wanting to do good promote this?
If anyone here does believe in ideas that have caused a great amount of harm and will cause more if spread, they should not spread them. If what you’re arguing about is not the specific arguments (which you think might be better, and should be improved in such-and-such a way) but the views themselves – don’t! If you want to do good, why would you ever, in our world, spread these views? If the impact of spreading these views is more tragedies happening, more suffering, and more people dying early, please consider these views an infohazard and don’t even talk about them unless you’re absolutely sure your views are not going to spread to people who’ll become more intolerant or more violent.
If you, as a rationalist, came up with a Basilisk that you thought actually works, thinking that it’s the truth that it works should be a really strong reason not to post it or talk about it, ever.
The feeling of successfully persuading people (or even just engaging in interesting arguments), as good as it might be, isn’t worth a single tragedy that will result from spreading this kind of ideas. Please think about the impact of your words. If people persuaded by what you say might do harm, don’t.
One day, if the kindest of rationalists do solve alignment and enough time passes for humanity to become educated and caring, the AI will tell us what the truth is without a chance of it doing any harm. If you’re right, you’ll be able to say, “I was right all along, and all these woke people were not, and my epistemology was awesome”. Before then, please, if anyone might believe you, don’t tell them what you consider to be the truth.
But can you be trusted to actually think that, given what you say about utility of public admission of opinions in question? For an external observer, it’s a coin toss. And the same for the entirety of your reasoning. As an aside, I’d be terrified of a person who can willfully come to believe – or go through the motions of believing – what he or she believes to be morally prudent but epistemically wrong. Who knows what else can get embedded in one’s mind in this manner.
Well, consider that, as it tends to happen in debates, people on the other side may be as perfectly sure about you being misguided and promoting harmful beliefs as you are about them; and that your proud obliviousness with regard to their rationale doesn’t do your attempt at persuasion any more good than your unwillingness to debate the object level does.
Consider, further, that your entire model of this problem space really could be wrong and founded on entirely dishonest indoctrination, both about the scholarly object level and about social dynamics and relative effects of different beliefs.
Finally, consider that some people just have a very strong aversion to the idea that a third party can have the moral and intellectual authority to tell them which thoughts are infohazards. If nothing else, that could help you understand how this can happen.
Personally – because I do, in fact, believe that you are profoundly wrong, that even historically these views did not contribute to much harm (despite much misinformation concocted by partisans: policies we know to be harmful are attributable to different systems of views); that, in general, any thesis about systematic relation in the pattern {views I don’t like}=>{atrocities} is highly suspect and should be scrutinized (e.g. with theists who attribute Stalin’s brutality to atheism, or derive all of morality from their particular religion); and that my views offer a reliable way to reduce the amount of suffering humans are subjected to, in many ways from optimizing allocation of funds to unlocking advances in medical and educational research to mitigating slander and gaslighting heaped upon hundreds of millions of innocent people.
Crucially, because I believe that, all that medium-term cost-benefit analysis aside, the process of maintaining views you assume are beneficial constitutes an X-risk (actually a family of different X-risks, in Bostrom’s own classification), by comprehensively corrupting the institution of science and many other institutions. In other words: I think there is no plausible scenario where we achieve substantially more human flourishing in a hundred years – or ever – while deluding ourselves about the blank slate; that it’s you who is infecting others with the “Basilisk” thought virus. And that, say, arguments about the terrible history of some tens of thousands of people whom Americans have tortured under the banner of eugenics – after abusing and murdering millions of people whilst being first ignorant, then in denial about natural selection – miss the point entirely, both the point of effective altruism and of rational debate.
This is an impossible standard and you probably know it. Risks of a given strategy must be assessed in the context of the full universe of its alternatives; else the party that gets to cherrypick which risks are worth bringing up can insist on arbitrary measures. By the way, I could provide nontrivial evidence that your views have contributed to making a great number of people more intolerant and more violent, and have caused thousands of excess deaths over the last three years; but, unlike your wholly hypothetical fearmongering, it’s likely to get me banned.
Indeed, I could ask in the same spirit: what makes people upvote you? If your logic of cherrypicking risks and demonizing comparative debate is sound, then why don’t they just disregard GiveWell and donate all of their savings to the first local pet shelter that gets to pester them with heart-rending imagery of suffering puppies? Maybe they like puppies to suffer?! This is not just manipulation: rising above such manipulation is the whole conceit of this movement, yet you commit it freely and to popular applause.
To make me or anyone like me change my mind, strong and honest empirical and consequentialist arguments addressing these points are required. But that’s exactly what you say is “much less relevant” than just demanding compliance. Well. I beg to differ.
For my part, I do not particularly hope to persuade you or anyone here, and the guidelines say we should strive to limit ourselves to explaining the issue. Honestly, it’s just interesting at this point: can you contemplate the idea of being wrong, not just about “HBD” but about its consequences, or are you the definition of a mindkilled fanatic who can’t take a detached view of his own sermon and see that it’s heavy on affirmation and light on evidence?
Adding on to this with regard to IQ in particular, I recommend this article and its follow-up by academic intelligence researchers debunking misconceptions about their field. To sum up some of their points:
IQ test scores are significantly affected by socio-economic and other environmental factors, to the point where one study found that adoption from a poor family into a rich one causes a 12-18 point jump in IQ score.
The average IQ of the whole populace jumped 18 points in 50 years due to the Flynn effect.
The gap in test scores between races has been dropping for decades, including a 5 point drop in the IQ test score gap over 30 years.
With the above points in mind, the remaining IQ test score gap of 9.5 points does not seem particularly large, and does not seem to require any genetic explanation.
I don’t think one of the claims, that “Twin studies are flawed in methodology. Twins, even identical twins, simply do not have exactly the same DNA”, is true. As far as I can see, it is not supported by the link or the study.
An average difference of 5.2 letters out of 6 billion does not make identical twins’ DNA distinct enough to automatically invalidate the correlation between being identical twins and having traits in common more often.
One of the researchers involved in the study is quoted: “Such genomic differences between identical twins are still very rare. I doubt these differences will have appreciable contribution to phenotypic [or observable] differences in twin studies.”
The reliability of twin studies seems to be part of the current scientific consensus, and some EA decisions might take such studies into consideration.
I think it’s important not to compromise our intellectual integrity, even when we are debunking the foundations of awful and obviously wrong beliefs that are responsible for so much of the unfairness, suffering, and death in our world.
I think if the community uses words that are persuasive but don’t contain actually good evidence, then even if we’re arguing for a truth that’s important and impactful to spread, in the long term this might lead to people putting less trust in any of our words arguing for the truth, and to more people believing something harmful and untrue. And on the internet there are a lot of words containing bad arguments for the truth, because it’s easy for people to slip into the mode of finding persuasive arguments, which don’t necessarily have to be actually good evidence.
I think it’s really important for the EA community to be epistemically honest and to talk about the actual reasons we have for believing something, instead of finding the most persuasive list of reasons for what we believe and copying it without verifying that all the reasons are good and should update people in the claimed direction.
These are two separate links for two separate claims. ‘Twin studies are flawed in methodology.’ and ‘Twins, even identical twins, simply do not have exactly the same DNA.’, both of which are true. The confidence in the proposed HBD conclusions is simply not warranted by the evidence.
Many twin studies rest on the assumptions that twins share 100% of their DNA (which is false) and that they share the exact same environment (which is also false). This leads to underestimating environmental factors and underestimating non-genetic biological factors.
Furthermore, separated twin pairs, identical or fraternal, are generally separated by adoption. This makes them unrepresentative of twins as a whole, and there can be issues of undetected behaviors in the case of behaviors that many people keep secret, presently or earlier in their lives.
Oops! Sorry, I only noticed the second link; but before writing my comment, I had looked up the first myself.
I’m not a biologist and will probably defer to any biologist entering this thread and commenting on the twin studies.
Twins (mostly, as the linked study shows) do not have exactly the same DNA. But it doesn’t seem to be relevant. The relevant assumption is that there’s almost no difference between the DNAs of “identical” twins and a large difference between the DNAs of non-identical reared-together twins, which is true despite a couple of random mutations per 6 billion letters.
The next two linked articles are paywalled. Is there somewhere to read them?
The third is a review of a short book, available after a sign-up. It says that “some studies on twins are good, some bad”, and the author feels, but “doesn’t actually know”, that the reviewed one is good. The reviewed book performed a study on twins, noticed there isn’t much difference between how strongly many personality traits correlate with whether people are identical twins, and concluded that, since you’d expect to see a difference if the traits had different degrees of heritability, many personality traits are products of the environment.
How is this evidence that twin studies are flawed and shouldn’t be used? If that’s a correct study, it’s just evidence that personality traits are mostly formed by the environment (which is something I already believe and have believed for most of my life). But, e.g., why would this be relevant to a discussion of whether some disease has a genetic component, when a twin study shows that it does?
It’s important to carefully compare the numbers; but obviously there are things that identical twins have in common more often than non-identical twins, because these things are heritable to a larger or lesser degree, like hair color or height.
Of course, any study underrepresents some part of humanity. But if your study is about the degree of heritability of something, and not about twins, why would this matter? If there’s a difference between adopted identical and non-identical twins that’s better explained by genetics (e.g., non-identical twins would have different heights more often), why does it matter how well they represent twins in general? Unless you’re studying how likely people are to be adopted, I don’t understand the claim.
The last link is paywalled, but again, why would this affect the difference between identical twins and non-identical twins? Until a year ago, I kept secret that I’m bi and would’ve kept it secret from scientists; but I don’t think this kind of thing affects the conclusions you’d make if identical twins answered some question identically more often than non-identical twins. (E.g., imagine a society where people with green eyes are persecuted and a lot of them use contact lenses. Some would still tell the truth, in confidence, to scientists; and the number of identical twins giving the same answer would be greater than the number of non-identical twins giving the same answer, and the scientists would correctly infer this to be evidence for the heritability of eye color, even though a lot of twins would lie about their eye color.)
So while it’s possible to just compare full DNA sequences and account for lots of different factors (all sorts of environmental conditions that might differ between the subjects of the study) to find out whether DNA correlates with eye color, it’s much easier to do a twin study, and a strong correlation there will be strong evidence.
It’s fine.
Studies don’t just use identical twins but twins in general. You are conflating my two claims and attacking claims that I haven’t even made; I never talked about “whether or not some disease has a genetic component to it, when a twin study shows that there is”. I made one claim that twins, even identical twins, don’t share exactly the same DNA, and provided a link to an article with more information; and I made a second claim that twin studies are flawed, and provided a link to an article with more information about that. All this stuff about how it can’t help us find diseases, or that twin studies “shouldn’t be used”, are claims I never made.
EDIT:
For the record, my studies included some biostatistics, but it isn’t my strongest field and I’m mostly leaning on what my professors have explained:
I will also probably defer to a biologist/biostatistician.
As a different perspective to your list, I’d like to reference this thread of 25 threads, which provides extensive research in the opposite direction. Like you, I do not claim that this is all correct (I’m not an expert on this topic), but the evidence is certainly much less clear-cut than one might think from just reading the pieces you provided.
Given my priors and my respect for my leisure time, I’m not going to read those giant threads. I won’t downvote you since I haven’t actually read them, but let me ask you a related question:
Do you think that out of the billions of possible correlations in the social sciences, the best use of our finite time on earth is to study this one?
The incredibly flawed measure of ‘low iq’ is correlated with the arbitrary socially-contingent western category of ‘black people’ (almost certainly because of environmental factors). But there are millions of things correlated with the western category of ‘black people’ and there are millions of things correlated with ‘iq’.
Furthermore, there are so many more variables to study that are less flawed and less arbitrary. Why should we focus on the one correlation out of billions that racist people – who want to make the world worse for our fellow human beings – want us to talk about?
I agree with basically everything you say here, but I also think it’s a bit unfair to point this out in the context of Kaspar Brandner sharing a lot of links after you did the same thing first (sharing a lot of links). :)
In any case, I think
not discussing the issue >> discussing the issue >> discussing the issue with flawed claims.
(And I think we’re all in trouble as a society because, unfortunately, people disagree about what the flawed claims are and we get sucked into the discussion kind of against our will because flawed claims can feel triggering.)
Yeah that’s fair. I mean I did give summaries, but it’s still fair. If I could go back in time I would’ve posted that comment first and I would’ve tried to explain my emotions/reasoning process to the HBDers on this forum more.
I would have said: I get the allure of taboo studies. I want to be a moral philosopher, but moral philosophers are very smart and they don’t get a lot of funding. So even if I work very very hard, I probably won’t get to be one. I need a way to stand out, to make people notice I would be a good researcher… oh what’s this? Everyone is avoiding these taboo studies. So researching them makes me both stand out and makes me a comparatively high quality researcher since almost no one is competing with me.
It’s competition/capitalism/Moloch that is driving me towards these subjects, and even knowing this doesn’t make the allure go away. But ultimately I care about people more, and there are often good reasons these taboos were put in place.
Perhaps the readers of this comment feel the same way. I empathize. The smart thing to have done was unpack these mental prioritizations immediately instead of grabbing my research. But I didn’t think of it, probably because I wanted to show that I could be a good researcher. Moloch got me. Please don’t spend your limited time on this subject, please don’t let moloch get you too.
Writing on such topics does the opposite of favoring your academic career. It is rather a form of career suicide, since you will likely get cancelled and ostracized. The topic is extremely taboo, as we can see with the reaction to Bostrom’s old email. He didn’t even support hereditarianism about IQ gaps, he just said they exist, which even environmentalists accept!
And with good reason, out of the billions of possible correlations to talk about this is one of the very few that will help racists.
True, but most people can’t cut it in academia, and if one fancies oneself a researcher, this path allows you to keep doing that without a lot of intellectual competition. Plus you can still get funding from shady organizations like the Pioneer Fund (I call them shady because they funded the distribution of ‘Erbkrank’, a Nazi propaganda film about eugenics, as one of their first projects, and because they have ties to white supremacist groups, so their impartiality is suspect).
Strong disagree here. See the quote of the paper I posted below.
I don’t fault you for not reading it all, but it is a good resource for looking up specific topics. (I have summarized a few of the points here.) And I don’t think IQ is a flawed measure, since it is an important predictor of many measures of life success. Average national IQ is also fairly strongly correlated with measures of national welfare such as per-capita GDP.
To be clear, I’m not saying studying this question is more important than anything else, just that research on it should not be suppressed, whatever the truth may be. This point was perhaps best put in the conclusion of this great paper on the topic:
IMO, I agree with the idea that EA shouldn’t invest anything in studying this, though I took a different path.
I think IQ differences are real and they matter.
However, I think the conclusion that HBD and far-righters/neo-nazis wants us to reach is pretty incorrect, given massive issues with both evidence bases and motivated reasoning/privileging the hypothesis.
Comment erased due to formatting error; apologies. The correct version is here.
Could the people who are heavily downvoting this chain explain why? Is it because people disagree with the claims Mohammad/Sharmake/sapphire are making, or because they think it is violating EA forum norms?
I downvoted it (weakly) because my impression is that “it’s pseudoscience” is not a nuanced statement on a topic where there’s bad science all over the place on both sides. Apart from the awfully racially-biased beliefs of many early scientists/geneticists, there has been a lot of pseudoscience from far-right sources on this more recently – that’s important to mention – but there has also been pseudoscience in Soviet Russia (Lysenkoism) that goes in the other ideological direction, and we’re currently undergoing a wave of science denial where it’s controversial in some circles to believe that there are any psychological differences whatsoever between men and women. Inheritance stuff also seems notoriously difficult to pin down, because there’s a sense in which everything is “partly environmental” (if you put babies on the moon, they all end up dead) and you cannot learn much from simple correlation studies (there could still be environmental influences in there). I think a lot of the argument against genetic influences is about pointing out these limitations of the research and then concluding that, because of the limitations, it must be environmental only. But that’s only half-right: if the research has all these limitations, it makes more sense to be uncertain about the causes.
When I closely followed the controversy around Sam Harris and his interview of Charles Murray, and later the conflict and subsequent discussion between Sam Harris and Ezra Klein, I noticed that the side accusing Sam Harris of pandering to pseudoscience was lying about a bunch of easily-verifiable things. I’m not sure I would understand the science well enough to say that they’re wrong about their scientific claims (and I didn’t bother to read their work in detail), but I think it’s good practice not to trust liars. (Ezra Klein was more of a weasel in that discussion than a liar – the people I think were lying were the authors of the hit piece against Harris and Murray that Ezra Klein allowed to be published on Vox.)
Given the above, it seems possible to me that genetic influences also play a role. It seems plausible on priors (it would be a coincidence if all groups were the same in all regards), we have some precedent for group differences (I think the research on Ashkenazi Jews having higher average IQ is less controversial?), and it can’t fill you with confidence in the other position when we can observe that some people are so morally confused that they think the topic is politically dangerous enough to justify lying about things (e.g., in the Sam Harris context, but also in recent EA twitter threads I’ve seen go in that direction).
It seems clear that some group differences are environmental-only. However, note that, even if they weren’t, it wouldn’t have any political implications. The benefits of access to good things that underprivileged groups often have less of, like access to education, health care, infrastructure, both parents involved in upbringing (though of course many single parents do an excellent job raising their kids), etc., these benefits don’t have much to do with IQ increases! Instead, access to these things is beneficial in all kinds of ways for anyone. So, politically, nothing would change and it would remain morally important to work towards more equality.
As I said before, it’s totally counterproductive for the goal of fighting racism to stake your case on scientific claims that could turn out to be false. (Imagine how much of a convenient weapon you’d be handing over to racists if they can point out how the anti-racists are staking their claims on potentially flawed science and how they’re punishing anyone who expresses uncertainty.) There’s no reason to consider group averages morally relevant. It’s a huge confusion to act as though there’s a lot that morally depends on it.
I also downvoted sapphire’s comments in some places (though not this thread) because they make it seem like there’s some conspiracy in EA around this stuff, and because I don’t like their use of the term “Scientific Racism.” (I think the term is very appropriate for many scientists in the early 20th century or before, but very unfair to use towards people like Charles Murray or ones who say things like Bostrom said in his apology.) Regarding the alleged conspiracy, I had to look up what “HBD” exactly means. It might be true that some contrarian types are drawn to these topics in the Bay Area and via that spinoff from Slatestarcodex where people get kicks from discussing controversial topics. But that seems not particularly representative to me (and more rationalists than EAs)? In any case, I mostly talk to EAs in London and Oxford, where I’ve never seen anyone express any interest in these topics whatsoever, “EA leadership” least of all. I agree that the voting pattern maybe suggests something about EA being unusual, but to me that mostly implies stuff like “EAs/rationalists are skeptical of making confident claims where the evidence is unlikely to support such claims.”
You and I have a very opposite reflection of the Sam Harris vs Ezra Klein fiasco.
I’d like to hear what you think about Klein’s point that environmental factors may explain >100% of the black-white IQ gap, and yet this idea is alien in the race realism discourse. https://forum.effectivealtruism.org/posts/ALzE9JixLLEexTKSq/cea-statement-on-nick-bostrom-s-email?commentId=YN85c93DD3EiNLFfo
There is so much evidence at this point against race realism/ HBD. There is no possibility of it “could be false” without evoking some grand conspiracy. Can we never call it pseudoscience? My goal is to fight for scientific truth, not some anti-racist agenda. Check out Ben Jacob’s great resources.
That’s a cool point by Klein.
If the consensus is strong enough then yes, we should call it pseudoscience.
I read the Wikipedia article you linked on the topic, and my feeling was that there’s some remaining disagreement in many places, but overall it does read as though the science supports environmental factors much more than genetic ones. I’m not 100% on how much I should trust it given political pressure and some yellow flags in the article, like its uncritical mention of the Southern Poverty Law Center, which has behaved awfully and at times tried to cancel people like Sam Harris or Maajid Nawaz, who are “clearly good people” in my book. (And they still have Charles Murray on their list of extremists, putting him in the same category as neo-Nazis, which is awful and immoral.)
I already looked at the resources by Bob Jacobs and thought some of them seemed a bit condescending in the sense that I’d expect people who feel confident enough to downvote or upvote claims on this topic would already be familiar with them. (E.g., some of the points he makes would also speak against studying whether mammals are smarter than fish given that fish have more genetic diversity than all mammals together and are a bit of an “unnaturally drawn group in biology”.) That said, it’s good to highlight the point about African diversity and, e.g., Nigerians having higher education scores in some areas than Europeans (and high conscientiousness – whether it’s cultural or genetic).
Other points seem overstated to me (e.g., criticism of validity of IQ). I think the Wikipedia article you linked to is a better source to convince people that genetic influences may not play much of role.
On the topic of the discussion as a whole, the current situation is clearly very unfortunate. It seems like there are many people who only get interested in the topic because they have the impression there’s censorship and they’re against such censorship. If we slightly relaxed about what inferences are defensible to draw from the science, then most people would lose interest, which would lower social polarization? Maybe the best message to promote is something like “If there are genetic influences, they’re likely no larger than environmental ones, and there may not be any, and overall the question doesn’t seem to have any moral or political/practical relevance.”
I did not downvote any comments, but I am confused by some of the claims.
How is it pseudoscience to say that one is unsure about a topic? How is it hurtful to black people to say this? I do not mean any offense with these questions.
I do understand how it is hurtful to use slurs and I think Bostrom was wrong to do so in the original email, even in context.
Whether or to which extent it is hurtful is indeed unclear.
Where is the evidence for this claim? The extensive research on this topic suggests it is not pseudoscience at all.
I say it is pseudoscience on the grounds that there is a scientific consensus that genetic explanations for racial IQ gaps are deemed pseudoscientific.
https://en.wikipedia.org/wiki/Scientific_racism
https://en.wikipedia.org/wiki/Race_and_intelligence
To anyone who sincerely wonders if there’s anything to the “race-IQ realism” theory, I ask you to consider this great point made by Ezra Klein to Sam Harris when Harris embraced The Bell Curve: very rarely will you see any serious consideration of the possibility that environmental factors explain >100% of the black-white IQ gap. In other words, the idea that, on a purely genetic basis, black people may be more intelligent than white people is alien to the discourse.
And of course there’s all sorts of other issues that cloud this discussion about the arbitrariness of race, the poor correlation between “race” and genetics, black americans having substantial european ancestry owing to slavery etc etc.
https://www.vox.com/the-big-idea/2017/6/15/15797120/race-black-white-iq-response-critics
The science is clear, it’s pseudoscience folks.
Thank you Bob Jacobs for your resources.
I just say that if you read the references I cited in the linked post you see they contradict that conclusion.
And why do you presume your sources are better?
Your source is a fringe twitter account, followed by alt-right accounts, cherry-picking bits and pieces from journals. It doesn’t even properly link to the primary sources, so I can’t examine the weaknesses or context.
Worry more about Bob Jacobs’s sources contradicting your sources.
He links to a large number of research articles. It could be cherry picking, but the same thing could be said e.g. about linking to Vox articles, a source which is known to have a strong leftist bias.
There is no such consensus, though. Your links do not support your very strong claim.
E.g. Vox:
This does not allow one to claim consensus, and the way it’s worded is obviously motivated by the desire to downplay experts’ belief in a causal role for genetics. We also have a newer survey they do not mention: Survey of Expert Opinion on Intelligence: Causes of International Differences in Cognitive Ability Tests, Rindermann, 2016:
If anything, the consensus seems to be that genes play some role here.
I do not see why this hypothetical is impressive. The best that could be said for it is that it is logically sound and novel. But heritability and norms of reaction impose limits on such explanations. If X percent of a trait’s variance can be explained by a factor, then there’s only so much you can get by changing the sum of non-X factors. Adult intelligence has roughly 80% heritability (equally within white and black populations; actually this alone invalidates the idea). For 1 d of difference in intelligence to be explained away by the environment, the gap in environmental quality must be 2.24 d. This is implausibly large for intra-national racial differences, is contradicted by direct measures of environmental quality and indirect proxies of deprivation (such as stress and self-esteem), and is made suspect by the fact that there’s been a great deal of improvement in race relations and equalization of living standards since the 60s, yet no large narrowing of the IQ gap; and for the case where black people have higher “genotypic IQ”, the environmental deprivation must be even greater than 2.24 d.
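For what it’s worth, the arithmetic behind the “2.24 d” figure can be checked directly. This is a minimal sketch that takes the commenter’s assumed heritability of 0.8 at face value and treats environment as the only remaining variance source (both assumptions are exactly what the rest of the thread disputes):

```python
import math

h2 = 0.8               # assumed heritability of adult IQ, per the comment above
env_share = 1 - h2     # variance share left to the environment: 0.2
# Under this simple model, a purely environmental explanation of a 1 d
# phenotypic gap requires an environmental-quality gap of 1 / sqrt(1 - h2)
# standard deviations:
required_env_gap = 1 / math.sqrt(env_share)
print(round(required_env_gap, 2))  # 2.24
```

So the number itself follows from the stated inputs; the debate is over whether those inputs, and this single-factor model, are right.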
Laying aside whether CEA commenting on this was a virtuous action (I think it was virtuous here): People draw adverse inferences when there is a matter of significant public interest involving a leading figure in a social movement, and no appropriate person or entity from that movement issues a statement. Whether or not you think people should do that, they do, and the harm to public reputation is the same whether or not the inference is justified.
On the other side of the balance, it’s not clear what the harm of speaking here is.
I’d suggest clarifying what you refer to with ‘his words’ here, as I’ve seen people criticize both his writing from 26 years ago and his apology letter for being racist, while I assume you only refer to his writing from 26 years ago?
Ah, your title says that your statement is about Bostrom’s mail, and Bostrom’s apology is not a mail but a letter apologizing for his mail from 26 years ago. Might still be worth clarifying, I might not be the only one who’s initially confused.
The statement is almost certainly intentionally ambiguous. That’s kind of how a lot of PR works: say things directionally and let people read in their preferred details.
I’m not CEA, but in my opinion the same applies to his so-called apology.
I really don’t like this post.
Factually, I think it removes critical context and is sorely lacking in nuance.
Crucial context that was missing:
It was sent 25+ years ago when Bostrom was a student
It was sent as part of a conversation about offensive communication styles
Bostrom apologised for it at the time within 24 hours
Bostrom apologised again for the email now
Beyond the lack of nuance, this feels like it’s optimised for PR management and not honest communication or representation of your fully considered beliefs. I find that disappointing. I greatly preferred Habiba’s statement on this issue despite it largely expressing similar sentiments because it did feel like honest communication/representation of her beliefs (I’ve strongly downvoted this post and strongly upvoted that one, despite largely disagreeing with the sentiment expressed).
And I don’t really like the obsession with PR management in the community. I think it’s bad for epistemic integrity, and it’s bad for expected impact of the effective altruism community on a brighter world.
Emotionally, this made me feel disappointed and a bit bitter.
This might be less than perfectly charitable, but my subjective impression of the past year or so of EA work is something like:
~Neartermists focusing on global poverty: “Look at our efforts towards eradicating tuberculosis! While you’re here, don’t forget to take a look at what the Lead Exposure Elimination Project has been doing.”
~Neartermists focusing on animal welfare: “Here are the specific policy changes we’ve advocated for that will vastly reduce the amount of suffering necessary for eggs. In terms of more speculative things, we think shrimp might have moral value? Huge implications if true.”
~Longtermists focusing on existential risk: “so incidentally here’s some racist emails of ours”
“also we stole billions of dollars”
“actually there were two separate theft incidents”
“also we haven’t actually done anything about existential risk. you can’t hold that against us though because our plans that didn’t work still had positive EV”
I recognize that there are many longtermists and existential-risk-oriented people who are making genuine efforts to solve important problems, and I don’t want to discount that. But I also think that it’s important to make sure that as effective altruists we are actually doing things that make the world better, and separately, it (uncharitably) feels like some longtermists are doing unethical things and then dragging the rest of the movement down with them.
Here’s a VERY uncharitable idea (that I hope will not be removed, because it could be true, and if so might be useful for EAs to think about):
Others have pointed to the rationalist transplant versus EA native divide. I can’t help but feel that this is a big part of the issue we’re seeing here.
I would guess that the average “EA native” is motivated primarily by their desire to do good. They might have strong emotions regarding human happiness and suffering, which might bias them against a letter using prima facie hurtful language. They are also probably a high decoupler and value stuff like epistemic integrity—after all, EA breaks from intuitive morality a lot—but their first impulses are to consider consequences and goodness.
I would guess that the average “rationalist transplant” is motivated primarily by their love of epistemic integrity and the like. They might have a bias in favor of violating social norms, which might bias them in favor of a letter using hurtful language. They probably also value social welfare (they wouldn’t be here if they didn’t), but their first impulses favor finding a norm-breaking truth. It may even be a somewhat deontological impulse: it’s good to challenge social norms in search of truth, independent of whether it creates good consequences.
I believe the EA native impulse seems more helpful to the EA cause than the rationalist impulse.
And I worry the rationalist impulse may even be actively harmful if it dilutes EA’s core values. For example, in this post a rationalist transplant describes themself as motivated by status instead of morality. This seems very bad to me.
Again, I recognize that this is a VERY uncharitable view. I’d like to hasten to say that there are probably a great many rationalist-transplants whose commitment to advancing social welfare are equal to or greater than mine, as an EA native. My argument is about group averages, not individual characteristics.
...
Okay, yes, I found that last sentence really enjoyable to write, guilty as charged
This looks like retconning of history. EA and rationalism go way back, and the entire premise of EA is that determining what makes more good through “rationalist”, or more precisely, consequentialist lens is moral. There is no conflict of principles.
The quality of discussion on the value of tolerating Bostrom’s (or anyone else’s) opinions on race & IQ is incredibly low, and the discussion is informed by emotion rather than even trivial consequentialist analysis. The failure to approach this issue analytically is a failure both by Rationalist and by old-school EA standards.
I’m arguing not for a “conflict of principles” but a conflict of impulses/biases. Anecdotally, I see a bias for believing that the truth is probably norm-violative in rationalist communities. I worry that this biases some people such that their analysis fails to be sufficiently consequentialist, as you describe.
I’m not aware of the two separate theft incidents (or forgot about one), can you tell me more about them?
SBF
Avraham Eisenberg (with the Mango Markets exploit, which he has now been arrested for)
Thanks; what has Avraham done that makes him longtermist? Did he / does he identify as longtermist?
I am very confused. Did someone dig this up and then he wrote that in a scramble, or did he proactively come out with this unilaterally? If it’s the latter, we should be applauding his courage and forthrightness in apologizing in his current letter and intentionally letting us know, while naturally condemning the words he wrote on the mailing list as a student 26 years ago. This post currently does not distinguish between these stances; I consider the apology to be a really important social technology if we want to be humans in a functioning community of other humans rather than subject to the vast impersonal forces of ostracism.
First sentence of the apology says “I have caught wind that somebody has been digging through the archives of the Extropians listserv with a view towards finding embarrassing materials to disseminate about people.” So it seems like he is trying to get ahead of a public disclosure by someone else.
My read is that Bostrom had reason to believe that the email would come out either way, and then he elected to get out in front of the probable blowback.
As evidence, here is Émile Torres indicating that they were planning to write something about the email.
That said, it’s not entirely clear whether Bostrom knew the email specifically was going to be written about or knew that someone was poking around in the extropian mailing list and then guessed that the email would come out as a result.
In any case, I think it’s unlikely that he posted his apology for the email unprovoked.
I think this would be true except that his apology is, in my opinion, not a good one. He gets some points for apologizing proactively, but I don’t give him many, because the apology comes across to me as defensive rather than sincere.
I initially strongly upvoted this post but have since retracted my vote. I think the statement is vague as to which “words” it “condemns”. It would be better for CEA to take a firm, concrete stance against scientific racism (“SR”) specifically. As other people on the forum have pointed out, the promotion of SR in the community is harmful for many reasons: SR ideas have directly harmed people of color, discussion of SR deters people of color from participating in the movement, it makes the movement look bad, and it distracts from the movement’s actual priorities.
As a step further, CEA should consider banning all promotion of scientific racism on the forum. At a minimum, CEA should make it clear that SR ideas have no place in the EA movement.
Clarification: is scientific racism something like “there is a scientific paper relating to race and IQ, [discussion on implication]”?
“Scientific racism” is admittedly a bit of a misnomer because “scientific” racism is not scientific.
I’d put over 60% on the scientific devaluing of people of color (whether true or false) deterring people of color from participating.
Not sure if it would be good overall, though. The Clearer Thinking podcast with Magnus Carlson suggests that allowing misinformation to be voiced may be effective at reducing misinformation. For example, one can point out where the view falls short.
Yeah, I think that’s a good point. An interesting perspective is that freedom of speech includes the right to express controversial ideas as well as the right to listen to them. Members of the EA community have the right to learn about ideas that may be classified as scientific racism and decide for themselves whether they are true or false. (And of course, my use of the term “scientific racism” presumes that these ideas are pseudoscience, which other people on the forum have disputed.) However, I really think that the EA Forum is not the right place for these discussions, for the reasons I gave above. At the least, they should be limited to the “Personal Blog” section.
I appreciate this quick and clear statement from CEA.
Someone did the right thing today. Thank you.
You should make public the details of your early involvement with Alameda and stop trying to cancel other people until you’ve addressed your own past mistakes and wrongdoings.
I’m troubled by this statement. It completely fails to take Bostrom’s apology into account in any form. Moreover, accusing Bostrom of racism in this manner could legitimately be viewed as borderline slanderous. An accusation of racism can destroy a person’s career, career prospects, and reputation; in effect it can be a social death sentence. An organisation which wants to uphold the values of consequentialism should be much more careful in assessing the consequences of its public actions for the affected individual.
That’s not my reading of the statement (it says “unacceptably racist language” and then condemns the manner of discussion rather than beliefs held).
Yeah, but that can be okay if you think it’s higher priority to make a public statement about the contents of the email.
I initially didn’t think such a statement was necessary because disagreeing with the email seemed like a no-brainer, so I didn’t think anyone would have any uncertainty about the views of an organization like CEA. But apparently some (very few) people are not only defending the apology – which I’ve done myself – but arguing that the original email was ~fine(?). I don’t agree with such reactions (and Bostrom doesn’t agree either; I see him as a sincere person who wouldn’t apologize like that if he didn’t think he messed up), but they show that the public statement serves a purpose beyond just virtue-signalling: making sure there are no misunderstandings. (Note that it’s possible to condemn someone’s actions from long ago as “definitely not okay” without saying that the person is awful or evil!)
“To make sure there are no misunderstandings” it is arguably a fatal strategy not to acknowledge his apologies and not to mention that the “recklessly flawed and reprehensible words” stem from a very old email. As it is written, the statement simply sounds like it is calling him out for racism, which is an extremely serious accusation.
I think the natural move is to create a chapter within CEA that actively supports Black people. Honestly, I have been to EA conferences, and I can tell there is still work to be done on diversity, including women’s representation. Overall I love CEA and want to see how it can be more diverse. One place to start might be supporting emerging markets like Africa, not only through donations but through programs. For example, 80,000 Hours is tailored for someone in the Global North; we need to rethink what 80K would look like if we want to end unemployment in Southern Africa.
I thank you for responding quickly and mitigating PR damage. We already got a big PR hit, we don’t need another one so soon.
To the commenters who criticize it: I feel like people are underrating PR concerns right now.
So I get this mail is bad PR, but people seem to object to it beyond that and it isn’t clear to me why. If it is because he claims that Blacks have a lower average IQ than mankind in general I think that would be a terrible reason as I am not aware of a single intelligence researcher who would dispute this. Or is it because he uses the word [edited: see moderators comment]?
I checked which strong forum norms Bostrom’s mail would have violated.
“Unnecessary rudeness or offensiveness”
This would make sense to me, although I will point out that his point back then was precisely that his offensiveness is not unnecessary.
“Hate speech or content that promotes hate based on identity.”
I don’t think he did this:
“They would think that I were a “racist”: that I _disliked_ black people and thought that it is fair if blacks are treated badly. I don’t.”
Just fyi, there is an extremely strong taboo (esp. in the US) against saying “the n-word” and most people are not sympathetic to use-mention distinction arguments in this particular case, even if they would be in theory. I strongly suspect this is why your comment was downvoted.
Speaking as a moderator: the Forum currently doesn’t have a policy banning any specific words, although we might change that.
But we do have norms about kindness, avoiding unnecessary offense, and behaving with civility, and many people (reasonably) find use of the n-word extremely hurtful or upsetting, even if it’s used as an example or in a quote.
Overall, I think the word’s use here is not helpful and violates those norms, so I ask you to remove it.
I’ll continue discussing with the moderation team, both to develop an overall policy and to see if they disagree with my decision here.
The reason why I don’t say “the n-word” as you did is because it can be misleading. I have seen people using it to quote somebody else who just said “negro”. So to me saying “the n-word” would violate another (to my mind much more important) norm:
“Be honest.
Don’t mislead or manipulate.”
I try to follow rules I disagree with. However if I violated any rules here then at least they should have been clearer, so I appreciate the ongoing effort.
I agree that the current norms are probably not clear enough to cover this situation, we are thinking about adding more specific ones.
As a non-american, I also initially found the lack of use-mention distinction to be very counterintuitive. But it’s culturally very important for a very large fraction of forum readers, so I would also ask you to edit your initial comment. If you don’t do that, we may just edit the comment for you.
Using “n-word” would definitely not count as “misleading or manipulating”. If you wanted to be absolutely precise I would recommend linking to the original source.
My initial comment seemed completely innocent to me. I didn’t expect the backlash and don’t want to clog this thread further with a discussion that isn’t really on topic, so this will be my last reply here. (DM me if you want)
thx @RAB for the reply
To all the people who downvoted: my comment was asking for clarification. If you then downvote it, presumably for the same reason you took offense to the Bostrom comment, without explaining why, this isn’t very productive.
@moderators I edited my comment.
The effect of the rules won’t be that I (and I assume this is true for others as well) act the way that you think is proper; instead I will just not engage at all. This isn’t just true for this instance (which would be trivial), but in general.
Thank you for editing the comment and really thank you for the feedback.
I agree, we will need to be very careful about striking a proper balance here, but I think we can find something that’s better than the status quo (which results in downvotes and off-topic discussions, and detracts from discussions on how to do the most good)
There was a time when being a nonbeliever put you at serious risk of losing everything (in some Muslim countries they’ll still kill you for apostasy). So elite groups which understood that the forcefully imposed religious consensus was obviously wrong would instead set up a fake version of Christianity for new recruits, and then once these rose in the ranks tell them, “yeah, it’s obviously bullshit, congrats for suspecting it… but you are not allowed to say it!”
Of course every once in a while, someone high up goes rogue or forgets the vow of silence, and gets caned. Everyone else at the higher ranks submits the formulaic denunciation, and the lie continues until the next person screws up (or their heresy gets revealed when they are investigated for entirely unrelated things).
https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/
It’s not a moral fault to not find these things out on your own, but to join the mob instantly when a titan falls, without wondering whether he was in fact expressing a well-understood fact (one whose taboo is enforced by complete personal destruction)… God, save us from these dim-witted inquisitors, for they know not what they do.
Well, at least now Bostrom will have his name on the same list as James Watson, Francis Kirk and Galileo.
Consider that perhaps the reason most of us on the forum aren’t agreeing with you- and the reason Bostrom himself repudiated his words- isn’t the taboo around the belief, but rather that most of us think the evidence is unconvincing at best.
But most of us are not willing to have the object-level debate here for reasons such as politics being mindkiller, not wanting this to become a forum for debating taboo positions, and yes, internal and public perception of the community.
(I don’t have survey data or anything, but I’d bet this is the case.)
If so, to the extent that the majority of EAs tend to be right about things, you should update in that direction, even in the absence of thoughtful critiques of your position.
As a BLACK Birmingham City University Computer Science graduate who studied under Dr Carlo ‘Secret Hitler’ Harvey, and filed a few graphene patents, these satanic-Nazi enemies of Allah the ALL Conqueror do no surprize me.
Never heard of this particular racist moron and as a believer in Allah the most wise and just and a follower of the Shia faith (NOT the satanic Khomeini-Khamenei CIA-Satanism) idiots and racists such as this Nick weirdo are doomed to get hypersonic NUKED by Xi and Putin as we enter the age of the Global Majority.
Nick did us ALL a favour with his silly writing and satanic anti human whatever mutterings.
Silly horrible, wicked racist, waste of a human being. #getoffmyneck #VictoryisintheGRAVE
I am banning this user indefinitely for violating Forum norms.
What even is this comment?
As a moderator, I think this comment is unnecessarily rude and breaks Forum norms. I’m bringing this up to the rest of the moderation team, and we’ll discuss whether to take any further action. Please don’t post any more comments like this.
So CEA’s gone off the deep end into left wing political partisanship over object level discussion too. At least it is out now.