Its ideas have barely penetrated academia; there isn’t a rationality conference, and there isn’t even an organisation dedicated to growing the movement.
I think you can think of the new LessWrong organization as doing roughly that (though I don’t think the top priority should be growth, but rather building infrastructure to make sure the community can grow productively). We are currently focusing on the online community, but we have also done some things to improve the meetup system, are starting to run more in-person events, and might run a conference in the next year (right now we have the Secular Solstice, which I actually think complements existing conferences like EA Global quite well, and does a good job at a lot of the things you would want a conference to achieve).
I agree that it’s sad that there hasn’t been an org focusing on this for the last few years.
On the note of whether the ideas of the rationality community have failed to penetrate academia, I think that’s mostly false. I think the ideas have probably penetrated academia more than the basics of Effective Altruism have. In terms of web traffic and general influence among the intellectual elite, the Sequences, as well as HPMOR and Scott Alexander’s writing, have attracted significant attention and readership, and mostly continue to do so (as a Fermi estimate, I expect about 10x more people have read the LW Sequences/Rationality: A-Z than have read Doing Good Better, and about 100x have read HPMOR). Obviously, I think we can do better, and I do think there is a lot of value in distilling and developing core ideas in rationality further and helping them penetrate academia and other intellectual hubs.
I do think that community-building has been somewhat neglected, though in terms of active meetups and local communities, the rationality community is still pretty healthy overall. I do agree that there has been a decline on some dimensions, and I would be excited about more people putting resources into building the rationality community, and would be glad to collaborate and coordinate with them.
To give a bit of background on funding, the new LessWrong org was initially funded by an EA Grant, and is currently being funded by a grant from BERI, Nick Beckstead, and Matt Wage. In general, EA funders have been supportive of the project, and I am glad for their support.
“In terms of web traffic and general influence among the intellectual elite, the Sequences, as well as HPMOR and Scott Alexander’s writing, have attracted significant attention and readership, and mostly continue to do so”—I was talking more about academia than the blogosphere. There, only AI safety has had reasonable penetration. EA has had several heavyweights in philosophy, plus FHI for a while, and now GPI as well.
Whether you count FHI as rationality or EA is pretty ambiguous. I think memetically FHI is closer to the transhumanist community, and a lot of the ideas FHI publishes are ideas that were discussed on SL4 and LessWrong before FHI wrote them up in a more formal format.
Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom’s book Against Empathy (sadly I don’t remember which article of his Bloom cites), and I get the impression a fair few academics read him.
Bostrom has also cited him in his papers.