Speaking for myself, I was interested in a lot of the same things in the LW cluster (Bayes, approaches to uncertainty, human biases, utilitarianism, philosophy, avoiding the news) before I came across LessWrong or EA. The feeling was much more like "I found people who can describe these ideas well" than "oh, these are interesting and novel ideas to me." (I had the same realization when I learned about utilitarianism: much more a feeling of "this is the articulation of clearly correct ideas; believing otherwise seems dumb.")
That said, some of the ideas on LW that seemed more original to me (AI risk, logical decision theory stuff, heroic responsibility in an inadequate world) do seem both substantively true and extremely important, though it took me a lot of time to be convinced of this.
(There are also other ideas that I'm less sure about, like cryonics and many-worlds.)