Rationality as an EA Cause Area

The Rationality and Effective Altruism communities have experienced wildly different trajectories in recent years. While EA has meetups at most major universities, is backed by a multi-billion dollar foundation, has proliferated organisations to the point of confusion, and now even has its own media outlet, the rationality community had to struggle just to resurrect Less Wrong. LW finally seems to be on a positive trajectory, but the rationality community is still far short of what it could have been. Its ideas have barely penetrated academia, there isn’t a rationality conference, and there isn’t even an organisation dedicated to growing the movement.

A large part of the reason is that the kinds of people who would actually take the initiative and run these kinds of projects have been drawn into either EA or AI risk. While those communities benefit from this talent, I suspect the effect has gone so far as to kill the goose that lays the golden eggs (this is analogous to concerns about immigration to the Bay Area hollowing out local communities).

I find this concerning for the following reasons:

  • The Less Wrong community has traditionally been a fantastic recruiting ground for EA. Companies often utilise multiple brands to target different audience segments, and this principle still applies even though LW and EA are separate.

  • Many of the most prominent EAs consider AI safety the highest-priority cause area. LW has been an especially effective source of AI safety researchers, and many of the initial ideas about AI safety originated there.

  • EA has managed to hold unusually high epistemic standards and has been much more successful than most movements at updating on new evidence and avoiding ideological capture. LW has produced much of the common knowledge that has made this possible. The rationality community also provides a home for the development of advice on health, productivity, personal development and social dynamics.

The failure of LW to fulfil its potential has made these gains much smaller than they could have been. I suspect that, as per the Pareto Principle, a small organisation promoting rationality might be far better than no organisation trying to promote it (CFAR focuses on individuals, not broader groups within society or society as a whole). At the very least, a small-scale experiment seems worthwhile. Even though there is a high chance that the intervention would have no discernible effect, as per Owen’s Prospecting for Gold talk, the impacts in the tail could be extremely large, so the gamble seems worthwhile. I don’t know exactly what such an organisation should do, but I imagine there are a number of different approaches it could experiment with, at least some of which might plausibly be effective.

I do see a few potential risks with this project:

  • This project wouldn’t succeed without buy-in from the LW community. This requires people with sufficient credibility to pursue it at the expense of other opportunities, which incurs an opportunity cost in the case where they do.

  • Increasing the prominence of LW would mean that people less aligned with the community have access to more of its insights, so perhaps this would make it easier for someone unaligned to develop an AGI that turns out poorly.

Nonetheless, funding wouldn’t have to be committed until it could be confirmed that suitable parties were interested, and the potential gains seem like they could justify the opportunity cost. On the second point, I suspect that far more good actors would be created than bad actors, such that the net effect is positive.

This post was written with the support of the EA Hotel.