“In terms of web-traffic and general intellectual influence among the intellectual elite, the sequences as well as HPMOR and Scott Alexander’s writing have attracted significant attention and readership, and mostly continue doing so”—I was talking more about academia than the blogosphere. There, only AI safety has had reasonable penetration. EA has had several heavyweights in philosophy, plus FHI for a while, and now GPI as well.
Whether you count FHI as rationality or EA is pretty ambiguous. I think memetically FHI is closer to the transhumanist community, and many of the ideas FHI publishes on were discussed on SL4 and LessWrong before FHI presented them in a more formal format.
Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom’s book Against Empathy (sadly I don’t remember which of his articles Bloom cites), and I get the impression a fair few academics read him.
Bostrom has also cited him in his papers.