The clearest way I have seen EA change over the last few years is a shift from working solely on global health and animal welfare to including existential risk, longtermism, and AI safety. By most demographic overlaps, this is more aligned with rationalist circles, not less. I don’t see a shift towards including longtermism and existential risk as the end of “the application of reason to the question of how to do the most good”.
This is an excellent point and has meaningfully challenged my beliefs. From a policy and cause area standpoint, the rationalists seem ascendant.
EA, and this forum, “feels” less and less like LessWrong. As I mentioned, posts that have no place in a “rationalist EA” consistently garner upvotes (I do not want to link to these posts, but they probably aren’t hard to identify). That is not much empirical data, though, and having looked at the funding of cause areas, the movement’s revealed preferences seem more rationalist than ever, even if its stated preferences lean more “normie.”
I am not sure how to reconcile this, and would invite discussion.
Maybe new arguments for AI safety have been written that are less dependent on prior exposure to the rationalist memeplex?
I think it is that the people who actually donate money (and especially the people who have seven-figure sums to donate) might be far weirder than the average person who posts and votes on the forum.
On which topic, I really, really should go back to mostly being a lurker.
I think that the nature of EA’s funding (predominantly from young tech billionaires and near-billionaires) is to some extent a historical coincidence, but it risks becoming something like a self-fulfilling prophecy.
Yeah, this is why earn to give needs to come back as a central career recommendation.