At some point, I had to face the fact that I’d wasted years of my life. EA and rationality, at their core (at least from a predictive perspective), were about getting money and living forever. Other values were always secondary. There are exceptions (Yudkowsky seems to have passed the Ring Temptation test), but they’re rare. I tried to salvage something. I gave it one last shot and went to LessOnline/Manifest. If you pressed people even a little, they mostly admitted that their motivations were money and power.
I’m sorry you feel this way, though I would still disagree with you. I think you mean to say that the part of EA focused on AI has a primary motivation of getting money and living forever. The majority of EAs are not focused on AI; they focus instead on nuclear risk, biorisk, global health and development, animal welfare, etc., and they are generally not motivated by living forever. Those doing direct work in these areas nearly all do so on low salaries.