On those assumptions, would the utility function likely call for splitting the movement into two separate ones, so that the “toxic[ity]” from the “rationalist” branch doesn’t impede the other branch very much (and the social-desirability needs of the other branch don’t impede the “effectiveness” of the “rationalist” branch very much)?
I am in favour of Giving What We Can building its brand with some degree of (but not complete) separation from the rest of the EA community. Giving What We Can naturally wants to be as large as possible as fast as possible, while excessive growth could potentially be damaging for the rest of EA.
Maybe. I am having a hard time imagining how this solution would actually manifest and be materially different from the current arrangement.
The external face of EA, in my experience, has had a focus on global poverty reduction; everyone I’ve introduced to it has gotten my spiel about the inefficiency of training American guide dogs compared to distributing bednets, for example. Only the consequentialists ever learn more about AGI or shrimp welfare.
If the social capital/external face of EA turned around and endorsed or put funding towards rationalist causes, particularly taboo or unpopular ones, I don’t think there would be sufficient differentiation between the two in the eyes of the public. Further, the social capital branch wouldn’t want to endorse the rationalist causes: that’s what differentiates the two in the first place.
I think the two organizations or movements would have to be unaligned, and I think we are heading this way. When I see some of the upvoted posts lately, including critiques that EA is “too rational” or “doesn’t value emotional responses,” I am seeing the death knell of the movement.
Tyler Cowen recently spoke about demographics as the destiny of a movement, arguing that EA is doomed to become the US Democratic Party. I think his critique is largely correct, and EA as I understand it, i.e. the application of reason to the question of how to do the most good, is likely going to end. EA was built as a rejection of social desirability in a dispassionate effort to improve wellbeing, yet as the tent gets bigger, the mission is changing.
The clearest way I have seen EA change over the last few years is a shift from working solely on global health and animal welfare to including existential risk, longtermism, and AI safety. By most demographic overlaps this is more aligned with rationalist circles, not less. I don’t see a shift towards including longtermism and existential risk as the end of “the application of reason to the question of how to do the most good”.
This is an excellent point and has meaningfully challenged my beliefs. From a policy and cause area standpoint, the rationalists seem ascendant.
EA, and this forum, “feels” less and less like LessWrong. As I mentioned, posts that have no place in a “rationalist EA” consistently garner upvotes (I do not want to link to these posts, but they probably aren’t hard to identify). That is not much empirical data, though, and, having looked at the funding of cause areas, the revealed preference for rationalist causes seems stronger than ever, even if stated preferences lean more “normie.”
I am not sure how to reconcile this, and would invite discussion.
Maybe new arguments have been written for AI Safety which are less dependent on someone having been previously exposed to the rationalist memeplex?
I think it is that the people who actually donate money (and especially the people who have seven-figure sums to donate) might be far weirder than the average person who posts and votes on the forum.
On which topic, I really, really should go back to mostly being a lurker.
I think that the nature of EA’s funding—predominantly from young tech billionaires and near-billionaires—is to some extent a historical coincidence but risks becoming something like a self-fulfilling prophecy.
Yeah, this is why earn to give needs to come back as a central career recommendation.
I don’t see the two wings being not-very-connected as necessarily a bad thing. Both wings get what they feel they need to achieve impact—either pure epistemics or social capital—without having to compromise with what the other wing needs. In particular, the social-capital wing needs lots of money to scale global health interventions, and most funders who are excited about that just aren’t going to want to be associated with a movement that is significantly about the taboo. I expect that the epistemic branch would, by its nature, focus on things that are less funding-constrained.
If EA was “built as a rejection of social desirability,” then it seems that the pure-epistemics branch doesn’t need the social-capital branch (since social-desirability thinking was absent in the early days). And I don’t think it likely that social-capital-branch EAs will just start training guide dogs rather than continuing to do things at high multiples of GiveDirectly after the split. If the social-capital branch gets too big and starts to falter on epistemics as a result, it can always split again so that there will still be a social-capital branch with good epistemics.