Rationality as an EA Cause Area

The Rationality and Effective Altruism communities have experienced wildly different trajectories in recent years. While EA has meetups at most major universities, is backed by a multi-billion dollar foundation, has proliferated organisations to the point of confusion and now even has its own media outlet, the rationality community had to struggle just to resurrect Less Wrong. LW finally seems to be on a positive trajectory, but the rationality community still falls far short of what it could have been. Its ideas have barely penetrated academia, there isn't a rationality conference, and there isn't even an organisation dedicated to growing the movement.

A large part of the reason is that the kinds of people who would actually step up and run these kinds of projects have been drawn into either EA or AI risk. While these communities benefit from this talent, I suspect that this effect has occurred to the point of killing the goose that lays the golden eggs (this is analogous to concerns about immigration to the Bay Area hollowing out local communities).

I find this concerning for the following reasons:

  • The Less Wrong community has traditionally been a fantastic recruiting ground for EA. Companies often utilise multiple brands to target different audience segments, and this principle still applies even though LW and EA are separate.

  • Many of the most prominent EAs consider AI safety the highest-priority cause area. LW has been especially effective as a source of AI safety researchers, and many of the initial ideas about AI safety were invented here.

  • EA has managed to hold unusually high epistemic standards and has been much more successful than most movements at updating based on new evidence and avoiding ideological capture. LW has produced much of the common knowledge that has allowed this to occur. The rationality community also provides a venue for developing advice related to health, productivity, personal development and social dynamics.

The failure of LW to fulfil its potential has made these gains much smaller than they could have been. I suspect that, as per the Pareto Principle, a small organisation promoting rationality might be far better than no organisation trying to promote it (CFAR focuses on individuals, not broader groups within society or society as a whole). At the very least, a small-scale experiment seems worthwhile. Even though there is a high chance that the intervention would have no discernible effect, as per Owen's Prospecting for Gold talk, the impacts in the tail could be extremely large, so the gamble seems worthwhile. I don't know exactly what such an organisation should do, but I imagine that there are a number of different approaches it could experiment with, at least some of which might plausibly be effective.

I do see a few potential risks with this project:

  • This project wouldn't succeed without buy-in from the LW community. This requires people with sufficient credibility pursuing it at the expense of other opportunities, which incurs an opportunity cost if they do.

  • Increasing the prominence of LW means that people less aligned with the community would have access to more of its insights, so perhaps this would make it easier for someone unaligned to develop an AGI which turns out poorly.

Nonetheless, funding wouldn't have to be committed until it could be confirmed that suitable parties were interested, and the potential gains seem large enough to justify the opportunity cost. On the second point, I suspect that far more good actors would be created than bad actors, such that the net effect is positive.

This post was written with the support of the EA Hotel.