This was a distinctively wholesome read. I restarted my (mostly focus) meditation practice late last year, and I have been meaning to leverage that foundation for a loving-kindness practice as well. The details of your post have substantially motivated that intention. Thank you for sharing!
CG
I could be wrong but I didn’t see political conflict mentioned specifically in that article, at least not explicitly. Not saying it can’t reasonably be inferred but given the political centrist majority within EA, I just wanted to clarify this observation as it could be misleading (?).
From what I briefly read (and gleaned from asking Ghostreader [GPT-3] in Readwise Reader), the studies found that when there is a lot of different knowledge and experience, increased task conflict (e.g. viewpoint diversity over content of a task) can override other forms of conflict, and actually lead to improved performance. More work here is needed, of course, but thanks for sharing this.
Note: If I had more than 10 minutes to make an extended comment or post that was less likely to be tone-police bait and properly formatted, I would have. That was my first EA forum comment and it came after a few emotionally exhausting days reviewing this discourse. I frankly just needed to get my thoughts off my chest.
But! Now for my second EA forum comment ever:
Was that Chris finds it difficult to justify devoting effort/time/money to EA causes (and convincing others to do so) instead of focusing “on direct threats to our lives, voting rights or civil liberties” (presumably in the context of black Americans?) because of EA’s lack of diversity and willingness to discuss this topic.
Thank you for the clarification, Anon Rationalist. That is, in large part, what I meant. But the willingness to discuss this topic is not my main issue, and neither is how tactfully people can make statements that I believe are still at odds with the basic empirics of how clinal traits like skin color work.
My point is that by subjecting oneself to conversations (particularly like this one) with people who a) strongly align with HBD (and the HBD Institute) and b) may be more concerned with being perceived as non-racist/EA value-aligned than with updating priors on possible externalities, one faces an increased risk of epistemic exploitation (rather than of directly losing one's life).
My point is not to be combative or inflammatory, but the direction longtermism appears to be taking suggests that this occupational hazard will be less likely in other social movements. And as others have noted, longtermism has “brought a shift of funding away from causes such as global health and poverty which greatly benefitted the residents of nonwestern nations, including many women and people of color, towards funding research in North America and Western Europe, to the benefit of a small number of highly-educated and highly-paid researchers, often white men.” I agree with him that this is likely unintentional, but it is notable regardless if you/we want to do the most good.
While I believe that this is a nonsensical argument against a social movement with nearly all of its attention to global health being dedicated to saving (mostly black) lives as efficiently as possible, I want to try to understand the argument as best as possible, and think you may have misinterpreted.
This reaction pattern-matches with some of my individual impressions of push-back I’ve received from a variety of people about EA’s messaging, or when I say we should help them help the African diaspora. I’ve often defended EA, Game B, and other movements associated with x-risk as something more than immediately dismissible, sci-fi-laced navel-gazing by jargon-spewing crypto-bros and doomers who care more about good epistemics than base reality. Natheless, I still applaud the work EA has accomplished in promoting the importance of long-term thinking (because important it is), and its members’ commitment to combating biases.
However, I also have an increasingly hard time picturing how I might sustainably, in good conscience, decouple the links the Human Biodiversity Institute has to the alt-right from the psychosocial externalities of promoting and normalizing HBD, which include making those who peddle pseudoscience more acceptable. It’s quite hard when seeing the tangible harms these consequences can cause, e.g. in the NFL, or from authoritarian jerks in office who don’t mince their bigotry, or who express it covertly and perhaps unintentionally, as Bostrom’s case suggests.
I understand that this is not the intention of many folks here. I strongly want to believe that EA is different and will be better, but I had to take a break from reading the forums after seeing the posts.
Now back to Ivan’s comment:
If you think that genetic differences in IQ immediately imply inferiority, then unless you deny that individuals have different levels of cognitive ability because of genetic differences, you must be committed to thinking you are “intellectually superior” than a bunch of people. But you probably don’t talk like that because it makes you seem like a jerk. (which I don’t think you are).
I don’t think (genetic) differences in IQ imply inferiority. My point is that my anticipated experience suggests that people have made, and do immediately make, that jump to justify illiberal policies in the name of reason, science and evidence—even when it is completely wrong.
I’m not trying to make people look worse than they are. I’m simply holding up a mirror to what EA looks like to the very people it says it/we are advocating for, and wondering who EA wants to be.
I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.
James Watson’s denial of having made racist statements is a social fact worth noting. Most ‘alt-center,’ etc., researchers in HBD, versed in the latest euphemisms intended to reappropriate racism for metapolitical and game-theoretic purposes under scientific cover, will, perforce, never say this outright.
To be clear, I don’t think many EAs are formally working in race science, and surely skeptical and morally astute EAs can have the integrity to admit to having made racist comments or reasonably disagree. (And no: as an African American EA on the left, I don’t think we should unsubscribe every HBD-EA, Bostrom, etc., from social life. Instead, we should model a safer environment for us all to be wrong categorically. Effective means getting all x-risks and compound x-risks, etc. right the first time.)
But after mulling over most of the HBD-affirming defenses of Bostrom’s email/apology that I’ve read or engaged with on the EA Forum—those that weren’t obvious (yet also highly upvoted) red pills by bad actors—I think there are other reasons many of those EAs won’t say their comments were racist, even if they themselves are not actually certain they are non-racist.
My hunch is that whether those EAs see HBD as part of the hard core or the protective belt of longtermism/EA’s program may be a good predictor of whether they believe, and therefore would be willing to say, that their comments were racist.[1]
For these reasons, among others, I think this instance of Hirschman’s rhetoric of reaction above is mistaken. It is not disvaluable that community builders in a demographically, socially and epistemically isolated, elitist, technocratic movement like EA don’t allow the best provisional statement clearly stating their stance on these issues to become the enemy of the good.
I was relieved to see this, as well as the fact that Guy made the pushback I wish I had had time to make 3 days ago. If there’s any way I can support your efforts, please let me know!
[1]:
1.1. For want of an intensional definition of value-alignment.
1.2. I take little pleasure in suggesting that HBD-relevant beliefs, strongly coupled with, e.g., Beckstead et al.’s (frankly narrow and imaginatively lacking) stance on the most likely sources of future economic innovation—sources which may therefore be granted greater instrumental value to a longtermist utopia—may be one contributing factor to this problem within EA. And even anti-eugenics has its missteps.
This is a good and, I think, likely useful distinction—at least to the extent that being philosophically EA meets the satisfaction conditions for value-alignment (however defined) in order to be heeded by the relevant card-carrying EA stakeholders.
Excalidraw + Obsidian’s infinite canvas core plugin is truly a delight that I’m excited to see develop further. Lots of possibilities for better epistemics/PKM, and even more incredibly underrated for public sense-making/social epistemics in Obsidian.
Agreed. I’ve also seen other studies that suggest that the rate and quality of knowledge production increases from that kind of good faith dialectical feedback. Makes a lot of sense that some forms of conflict could be quite synergistic.
I will definitely give the piece a more thorough review when I get a chance.
I just realized that I forgot to respond earlier, but your consideration and transparent explanation are appreciated.
Rohit could have referred to the Wikipedia summary of the scientific consensus on race and IQ, as the GCRI did in their statement here along with other scholarship, and as Torres also does here. I can understand how some would prefer not to further dignify racialism as a legitimate topic of scientific debate, as doing so can invite a bothsidesist dynamic that serves neo-Nazis, etc. But I can also understand why some would, in good faith, find the omission questionable.
However, the problem with Bostrom’s statements that I haven’t always seen clarified in the limited EA discourse I’ve personally observed isn’t acknowledging mere differences between groups as measured by IQ, or that IQ reflects the psychometric construct “g” more than it does a measure or estimate of general intellectual potential. Instead, the issue is that he did not (seem bothered to know, or to workshop his apology draft with others to know, and therefore) recognize that the case for a biological or genetic basis for that differential is not supported by scientific consensus. Nor did he acknowledge that race, as a supposed biological construct, does not map onto genetic population structures, making the evidentiary case for the inherent intellectual inferiority of people with darker or “black” skin (like myself) empirically highly questionable.
But since we have feelings and thoughts, I find it extremely (as well as credibly and existentially) threatening, and quite frightening, that an exalted leader in a movement that I believe (believed?) in, commanding billions of dollars in funding to shape human and posthuman futures, holds these historically dangerous (read: not safe) and empirically unsupported beliefs about my intrinsic inferiority and subsequent negligible value to futures that matter—while being supported by a non-trivial subset of the EA/rationalist community who also hold these beliefs, or believe in “HBD” (which is itself an attempt to push white supremacy into the mainstream anti/woke culture war under the veneer of scientific objectivity, but that’s another discussion).
This is also particularly disturbing as I try to convince myself and others, including and especially humans who look like me, that we might want to ignore EA’s glaring diversity problem and parts of EA’s unwillingness to change to build a better world for future generations rather than focus on direct threats to our lives, voting rights or civil liberties.
Geoffrey, or anyone really, can you please define wokeness?
I fail to see how EA’s vague anti-woke stance in partisan culture wars is anything more than an internecine, credible threat to open society. I say this as a neurodivergent and self-identified Black American EA who was moved by and still respects your article on viewpoint and neurodiversity, but who pragmatically votes on the left as a transpartisan because I don’t see another middle way that isn’t omnicidal. With genuine respect, I find the blanket dismissals of wokeness to be extremely inflammatory and ineffective at eliciting the calm and respectful pushback—from people who want to break new ground—that you/EA/we(?) are looking for.
Also, thank you, Lauren, Nick and others for bringing attention to this.