Do you disagree with this framing? For example, do you think that the core divide is something else?
I think this framing is accurate, and it touches on a divide that has repeatedly arisen in EA discussions. I have heard it described as “rationalists vs. normies,” “high decouplers vs. low decouplers,” and in the context of the “feminization” of the movement (in reference to traditionally masculine dispassionate reason being displaced by an emphasis on social harmony).
Additionally, I believe there are significant costs to a total embrace of either side of the “divide.”
There are cause areas with significant potential to improve effectiveness that are underexplored due to social stigma. A better understanding of heritability and genetic influences on all aspects of human behavior could change the trajectory of effective interventions in education, crime reduction, gene-editing, and more. A rationalist EA movement would likely do more good per dollar.
On the other hand, embracing rationalism would be toxic to the brand of EA and its funding sources. The main animus I have seen toward the Bostrom email is that he said black people have lower IQs than white people. This, though an empirical fact, is clearly beyond the pale to a large percentage of the population EA seeks to win over. Topics like longtermism and animal welfare are weird to most people, but HBD actively lowers public perception and the resultant funding. The good per dollar may go up, but without a degree of tact, the number of dollars would significantly decrease.
I have to imagine there is a utility function that could find the equilibrium between these two opposing factors: the point where good per dollar and amount of funding combine to achieve the most possible total good.
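To make that concrete, here is a minimal sketch of the trade-off. Everything in it is invented for illustration: the parameter x (how fully the movement embraces “rationalist” framing) and both curves are placeholders, not estimates; the only point is that when one factor rises while the other falls, the maximum of their product can sit strictly between the two extremes.

```python
# Toy model: x in [0, 1] is how fully the movement embraces "rationalist"
# framing (0 = pure social capital, 1 = pure epistemics).
# Both curves below are made-up placeholders, not empirical estimates.

def good_per_dollar(x):
    # Assumed to rise as taboo-but-effective cause areas get explored.
    return 1.0 + 2.0 * x

def total_funding(x):
    # Assumed to fall as controversial topics scare off donors and the public.
    return 100.0 * (1.0 - 0.8 * x)

def total_good(x):
    return good_per_dollar(x) * total_funding(x)

# Grid search for the mix that maximizes total good.
best_x = max((i / 1000 for i in range(1001)), key=total_good)
print(f"optimal mix: {best_x:.3f}, total good: {total_good(best_x):.1f}")
# With these made-up curves the optimum lands near x = 0.375, i.e. an
# equilibrium strictly between the two poles.
```

The exact numbers mean nothing; the shape of the answer, an interior optimum rather than either pole, is the equilibrium I have in mind.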
On those assumptions, would the utility function likely call for splitting the movement into two separate ones, so that the “toxic[ity]” from the “rationalist” branch doesn’t impede the other branch very much (and the social-desirability needs of the other branch don’t impede the “effectiveness” of the “rationalist” branch very much)?
I am in favour of Giving What We Can building its brand with some degree of (but not complete) separation from the rest of the EA community. Giving What We Can naturally wants to be as large as possible as fast as possible, whereas excessive growth could be damaging for the rest of EA.
Maybe. I am having a hard time imagining how this solution would actually manifest and be materially different from the current arrangement.
The external face of EA, in my experience, has had a focus on global poverty reduction; everyone I’ve introduced to EA has gotten my spiel about the inefficiency of training American guide dogs compared to buying bednets, for example. Only the consequentialists ever learn more about AGI or shrimp welfare.
If the social capital/external face of EA turned around and endorsed or put funding towards rationalist causes, particularly taboo or unpopular ones, I don’t think there would be sufficient differentiation between the two in the eyes of the public. Further, the social capital branch wouldn’t want to endorse the rationalist causes: that’s what differentiates the two in the first place.
I think the two organizations or movements would have to be unaligned, and I think we are heading this way. When I look at some of the recently upvoted posts, including critiques that EA is “too rational” or “doesn’t value emotional responses,” I hear the death knell of the movement.
Tyler Cowen recently spoke about demographics as the destiny of a movement, arguing that EA is doomed to become the US Democratic Party. I think his critique is largely correct, and that EA as I understand it, i.e. the application of reason to the question of how to do the most good, is likely going to end. EA was built as a rejection of social desirability in a dispassionate effort to improve wellbeing, yet as the tent gets bigger, the mission is changing.
The clearest way I have seen EA change over the last few years is a shift from working solely on global health and animal welfare to including existential risk, longtermism, and AI safety. By most demographic overlaps, this is more aligned with rationalist circles, not less. I don’t see a shift towards including longtermism and existential risk as the end of “the application of reason to the question of how to do the most good”.
This is an excellent point and has meaningfully challenged my beliefs. From a policy and cause area standpoint, the rationalists seem ascendant.
EA, and this forum, “feel” less and less like LessWrong. As I mentioned, posts that have no place in a “rationalist EA” consistently garner upvotes (I do not want to link to these posts, but they probably aren’t hard to identify). That is not much in the way of empirical data, though, and, having looked at the funding of cause areas, the revealed preference for rationalist causes seems stronger than ever, even if stated preferences lean more “normie.”
I am not sure how to reconcile this, and would invite discussion.
Maybe new arguments have been written for AI Safety which are less dependent on someone having been previously exposed to the rationalist memeplex?
I think it is that the people who actually donate money (and especially those with seven-figure sums to donate) might be far weirder than the average person who posts and votes on the forum.
On which topic, I really, really should go back to mostly being a lurker.
I think that the nature of EA’s funding (predominantly from young tech billionaires and near-billionaires) is to some extent a historical coincidence, but it risks becoming something like a self-fulfilling prophecy.
Yeah, this is why earn to give needs to come back as a central career recommendation.
I don’t see the two wings being only loosely connected as a necessarily bad thing. Both wings get what they feel they need to achieve impact (either pure epistemics or social capital) without having to compromise to meet the other wing’s needs. In particular, the social-capital wing needs lots of money to scale global health interventions, and most funders who are excited about that just aren’t going to want to be associated with a movement that is significantly about the taboo. I expect that the epistemic branch would, by nature, focus on things that are less funding-constrained.
If EA was “built as a rejection of social desirability,” then it seems that the pure-epistemics branch doesn’t need the social-capital branch (since social-desirability thinking was absent in the early days). And I don’t think it likely that social-capital-branch EAs will just start training guide dogs rather than continuing to do things at high multiples of GiveDirectly after the split. If the social-capital branch gets too big and starts to falter on epistemics as a result, it can always split again so that there will still be a social-capital branch with good epistemics.