How can we “find out using evidence and reason how to do as much good as possible, and to apply that knowledge in practice” if some avenues to well-being are forbidden? The idea that no potential area is off limits is inherent in the mission. We must be open to doing whatever does the most good possible regardless of how it interacts with our pre-existing biases or taboos.
Anon Rationalist
I think you have misread and misused the quote. It does not suggest following HBD to its “logical destination” (which in my mind evokes things like genocide and forced sterilization), it suggests that if EA were to accept different biological bases for observed phenomena, such as men disproportionately occupying positions of leadership, EA would then naturally progress to the idea that other observed phenomena (Asian and Jewish overrepresentation in cognitively demanding fields, for example) could be reflections of biological reality.
I think Jgray’s comment addresses my other point: your position that a topic is too toxic to even discuss in good faith fits Hanania’s framework of the taboo perfectly.
Considering an EA group leader said that they would permanently ban anyone who brought up some of these topics, I am content in my choice of anonymity.
I generally agree that being palatable and well-funded are beneficial to effective altruism, and that palatability and effectiveness exist on a utility curve. But I do not know how we can accurately assess which cause areas should be of principal concern if certain avenues are closed out of respect for others’ sacred cows. I think the quote from Scott Alexander addresses this nicely: if you could replicate Jewish achievement, whether culturally or genetically, doing so would be the single most significant development for human welfare in history. Regardless of taboos, that should be a cause area of principal concern, and it would be if EA subjected such ideas to cost-benefit analysis instead of treating them as sacred beliefs. And as the list of sacred beliefs grows, it further hampers other cause areas that would benefit from a rationalist mindset.
I treat posts based on their content rather than their author. I have no idea who different posters are, nor do I care much, except on issues where specific experience or expertise is relevant.
You are correct that my question was uncharitable and posted in frustration at a comment that I found detrimental to discussion. I should have said, and pose this as an open question:
I agree that applying cost-benefit analysis, in a manner consistent with EA principles, to areas like zoning, drug approval, and nuclear energy is good. I do not agree that increased credentialism or additional taboos are beneficial to the stated goals of EA, for the reasons outlined in the article.
I would ask that you state specifically what you find horrible, which cause areas should be exempt from cost-benefit analysis, and why. The current comment, as posted, does not contribute to meaningful discussion by way of its vagueness.
Yeah, where it was immediately downvoted without generating any meaningful discussion on what people disliked or disagreed with, only proving the point that EA has some sacred cows that must not be touched!
Could you expand on this? What do you find horrible about the ability to recreate the success of Ashkenazi Jews among different populations, for example?
Why EA Will Be Anti-Woke or Die
Because disaster giving (in a fairly developed country, especially) is the antithesis of effective giving. These organizations, especially now, are not neglected, and are almost certainly not as cost effective as something like AMF. Disaster giving is the textbook example of philanthropy based on bias instead of cost effectiveness.
I want to adhere to forum norms and maintain a high quality in my posts, but this is tempting me to throw all that out the window. Of course, I will read a summary if one is provided, but going over these chapter titles, this book could just as well be a caricature of wokeness. Prioritizing Black Vegans? Queer Eye on the EA Guys? The celebratory quote complaining about white males getting it all wrong? Not to mention chapter 11 sounds seriously reminiscent of degrowthers. “Sure, alternative proteins ended factory farming, but they didn’t overthrow capitalism.”
My priors on this having any value to the goal of doing the most good are incredibly low.
I have been using a burner account recently as opposed to my account with my real name following the Bostrom controversy. That decision is not motivated by any fear of reprisal within EA communities; at my local EA group, I am perfectly happy to espouse the beliefs that I’d want anonymity for on here.
The reasons for doing so are as follows:
Potential Costs: EA seems to be under a microscope in the current landscape (see Bostrom, FTX, the recent Time article on SA). This forum is not viewed only by people committed to charitable understanding and respect for evidence-driven conclusions. If I say something “controversial” to an EA and provide sufficient evidence, I have much more confidence that, even if they disagree, they will be understanding of my thought process. I have no fear of social costs among EAs; not so with journalists trawling for someone to quote as a “eugenicist” or other pejorative. Being quoted that way, or showing up as a Google result, could have severe costs to my reputation and life outcomes.
Lack of Potential Benefits: Because of EA’s high decoupling norms, I don’t think that attaching my real name to posts provides much marginal value. In my experience, and from observing others’, people judge the content of posts on its merits. I don’t think my posts would receive significantly different attention, provided the content remained the same.
Unresolved Area of Potentially High Cost: The experience of discourse with a group of high-decouplers who grant charitable understanding to beliefs, such as I experience in my local EA group, is amazing, and I want more of it. I know that if I used my real name, posted more often and earnestly, I could cultivate a larger group for this kind of activity. I may benefit from adopting a consistent pseudonym across online profiles that would allow this sort of connection. However, I am, in my current position, unwilling to risk potential reputational damage outside EA circles.
In summation, I don’t believe this is an EA problem. I believe it is a problem outside of EA from which the EA Forum cannot shelter its users.
My understanding of this section:
This is also particularly disturbing as I try to convince myself and others, including and especially humans who look like me, that we might want to ignore EA’s glaring diversity problem and parts of EA’s unwillingness to change to build a better world for future generations rather than focus on direct threats to our lives, voting rights or civil liberties.
Was that Chris finds it difficult to justify devoting effort/time/money to EA causes (and convincing others to do so) instead of focusing “on direct threats to our lives, voting rights or civil liberties” (presumably in the context of black Americans?) because of EA’s lack of diversity and willingness to discuss this topic.
While I believe that this is a nonsensical argument against a social movement with nearly all of its attention to global health being dedicated to saving (mostly black) lives as efficiently as possible, I want to try to understand the argument as best as possible, and think you may have misinterpreted.
If one truly believes in maximizing human welfare in a rigorous and evidence-based fashion, the suggestion that these two modes of intervention (ie EA Global Health vs. USA Domestic political activism) are comparable in the saving of black lives does not add up. One can always give to the actual effective causes without aligning or identifying with EA.
I am confused by this post. Bostrom never claimed a genetic basis for observed differences in IQ between races. He specifically did not address that and deferred to the experts in his apology. The Wikipedia page you reference supports his statement, charitably rephrased as “On average, white people score higher on IQ tests than black people.”
Is your displeasure that he did not specifically disavow potential genetic explanations, because the Wikipedia article on the topic says they are not empirically supported? (It should be noted here that all conducted surveys of intelligence researchers, though they have their problems, have found that a supermajority of experts believe at least some of the gap is genetic). Additionally, I am unaware of any transracial adoption studies or admixture studies (which, to my understanding, would be the most relevant experiments) that have not suggested at least a partial genetic explanation.
I think this is the issue that DPiepgrass highlighted. If one does not believe in rigorous empirical study of issues that could potentially affect human welfare, I don’t think EA is for them.
While on its face, increasing demographic diversity seems like it would result in an increase in political diversity, I don’t think that is actually true.
This rests on several assumptions:
I am looking through the lens of U.S. domestic politics, and identifying political diversity by having representation of America’s two largest political parties.
Increases in diversity will not be evenly distributed across the American population. (White Evangelicals are not being targeted in a diversity push, and we would expect the addition of college grad+ women and BIPOC.)
Of all demographic groups, white college grad+ men, “Sams,” are the most politically diverse, at 48 D, 46 R. By contrast, the groups typically understood to be represented by increased diversity:
College Grad+ Women: 65 D, 30 R
BIPOC breakdowns by education level are hard to find, but assuming the trend of greater education producing a greater Democratic lean holds, these are useful lower bounds:
Black: 83 D, 10 R
Hispanic: 63 D, 29 R
Asian American: 72 D, 17 R
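To make the composition argument concrete, the expected partisan lean of a movement can be sketched as a weighted average of its member groups’ leans. The (D, R) splits below are the figures quoted above; the membership weights are hypothetical assumptions chosen for illustration, not data about EA:

```python
# Hypothetical illustration of the composition argument above. The (D, R)
# splits are the figures quoted in the comment; the membership weights
# below are made-up assumptions, not data about any real movement.

lean = {
    "white_college_men":  (48, 46),   # "Sams"
    "college_grad_women": (65, 30),
    "black":              (83, 10),
    "hispanic":           (63, 29),
    "asian_american":     (72, 17),
}

def blended_lean(weights):
    """Weighted (D, R) lean for a membership mix; weights should sum to 1."""
    d = sum(w * lean[g][0] for g, w in weights.items())
    r = sum(w * lean[g][1] for g, w in weights.items())
    return d, r

# Baseline: membership drawn entirely from "Sams"
print(blended_lean({"white_college_men": 1.0}))  # → (48.0, 46.0)

# Hypothetical diversity push: half the membership drawn from the other
# listed groups in equal parts (weights chosen purely for illustration)
mix = {"white_college_men": 0.5, "college_grad_women": 0.125,
       "black": 0.125, "hispanic": 0.125, "asian_american": 0.125}
print(blended_lean(mix))  # → (59.375, 33.75)
```

Under these assumed weights, the blend moves from a near-even 48/46 split to roughly 59/34 Democratic, which is the point of the argument: the partisan lean of the resulting mix is driven by which groups a diversity push actually draws from.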
While I would caution against partisanship in the evaluation of ideas and programs, I don’t think there’s anything inherently wrong in a movement having a partisan lean to its membership. A climate change activist group can work in a non-partisan manner, but the logical consequence of their membership will be primarily Democratic voters, because that party appeals to their important issue.
if you encourage people from Ghana, you’ll get whole new political ideologies nobody at silicon valley has even heard of.
I think this aspect of diversity would offer real value in terms of political diversity, and could potentially add value to EA. I think clarification on what it means to “increase diversity” is required to assess the utility. I am biased by my experience, in which organizations become more “diverse” in skin color while becoming more culturally and politically homogenous.
Would you prefer Bostrom’s apology read:
I am sorry for saying that black people are stupider than whites. I no longer hold that view.
Even if he, with evidence, still believes it to be true? David Thorstad can write all he wants about changing his views, but the evidence of the existence of a racial IQ gap has not changed. It is as ironclad and widely accepted among researchers as it was in 1996 following the publication of the APA’s Intelligence: Knowns and Unknowns.
This may be a difference of opinion, but I don’t think that acknowledging observed differences in reality is a racist view. But I am interested to know if you would prefer he make the statement anyway.
I do not follow the relevance of this critique. If Nick Bostrom, or anyone else, denied the Holocaust, and there was ample evidence to support the position, people would be talking about the virtues of his epistemic integrity. If he denied the Holocaust without ample evidence, people would be critiquing the virtues of his epistemic integrity. The crux of the matter of epistemic integrity is whether or not the evidence supports the position.
Those defending him now are likely doing so because, on some level, they are at least willing to consider holding the same specific beliefs as him on race differences, are becoming increasingly aware that these beliefs are understood to be problematic and harmful but remain committed to those beliefs and to Bostrom regardless. Don’t try to sugar coat things and please be honest, with yourselves and others. Appealing to some notion of “epistemic integrity” here just seems deeply disingenuous.
This seems to rest on the false assumption that “defenders” do not hold these beliefs out of epistemic integrity, but out of some other sort of animus. I do not think that is true for most, and certainly not true for myself. I hold my understanding of Bostrom’s statement (that black people are on average less intelligent than white people) to be true because that’s what all available evidence suggests.[1]
Bostrom did not delve into the causal mechanism for this phenomenon, which is considerably murkier. His statement was a plain statement of observed facts, albeit an insensitive one. That is why people defend his epistemic integrity in this instance.
This is an excellent point and has meaningfully challenged my beliefs. From a policy and cause area standpoint, the rationalists seem ascendant.
EA, and this forum, “feels” less and less like LessWrong. As I mentioned, posts that have no place in a “rationalist EA” consistently garner upvotes (I do not want to link to these posts, but they probably aren’t hard to identify). That is not much empirical data, though; having looked at the funding of cause areas, the revealed preferences seem more rationalist than ever, even if stated preferences lean more “normie.”
I am not sure how to reconcile this, and would invite discussion.
Maybe. I am having a hard time imagining how this solution would actually manifest and be materially different from the current arrangement.
The external face of EA, in my experience, has had a focus on global poverty reduction; everyone I’ve introduced to EA has gotten my spiel about the inefficiencies of training American guide dogs compared to bednets, for example. Only the consequentialists ever learn more about AGI or shrimp welfare.
If the social capital/external face of EA turned around and endorsed or put funding towards rationalist causes, particularly taboo or unpopular ones, I don’t think there would be sufficient differentiation between the two in the eyes of the public. Further, the social capital branch wouldn’t want to endorse the rationalist causes: that’s what differentiates the two in the first place.
I think the two organizations or movements would have to be unaligned, and I think we are heading this way. When I see some of the upvoted posts lately, including critiques that EA is “too rational” or “doesn’t value emotional responses,” I am seeing the death knell of the movement.
Tyler Cowen recently spoke about demographics as destiny of a movement, and that EA is doomed to become the US Democratic Party. I think his critique is largely correct, and EA as I understand it, ie the application of reason to the question of how to do the most good, is likely going to end. EA was built as a rejection of social desirability in a dispassionate effort to improve wellbeing, yet as the tent gets bigger, the mission is changing.
Despite us being on seemingly opposite sides of this divide, I think we arrived at a similar conclusion. There is an equilibrium between social capital and epistemic integrity that achieves the most total good, and EA should seek that point out.
We may have different priors as to the location of that point, but it is a useful shared framing that works towards answering the question.
This is an excellent post, but if I may offer one critique, it would be that it doesn’t answer potential issues of change in population composition. Niklas’ comment mentions this to an extent, but, to be blunt, much of my motivation for worry about population decline and stagnation is a fear of current dysgenic trends, specifically in IQ. The negative correlation between measured IQ and fertility, both within countries and between them, underlies much natalist and transhumanist sentiment.