I think many of these lessons have more merit to them than you assume. To speak specifically about the ‘earning to give’ one: yes, EA has pointed out that you should not do harm in your job just to give the money away. However, I also think it is a bit psychologically naïve to believe that what happened with FTX is the last time the advice of earning to give will lead people to do harm to make money.
Trade-offs between ethical principles and monetary gain are not rare, and once we have established making as much money as possible (to give it away) as a goal in itself and something that confers status, it can be hard to make these trade-offs the way you are supposed to. It is not easy to accept a setback in wealth, power and (moral) status, so it becomes easy to lie to yourself or others that what you are doing is ethical. It is also generally risky for individuals to become incredibly rich or powerful, especially if that rests on the misguided belief that some group membership (EA) makes you inherently ethical and therefore more trustworthy, since power tends to corrupt.
At a minimum, I would like EA to talk more about how to jointly optimize the ethics of how you earn and how you spend your money, making sure we encourage people to gain their wealth in ways that add value to the world.
It makes me quite sad that in practice EA has become so much about specific answers (work on AI risk, donate to this charity, become vegan) to the question of how we effectively make the world a better place, to the point that not agreeing with a specific answer can create so much friction. In my mind EA really is just about the question itself, and the world is super complicated, so we should be skeptical of any particular answer.
If we accidentally start selecting for people who intuitively agree with certain answers (which it sounds like we are doing: I know people with a deep desire to make a lot of counterfactual impact who were turned off because they ‘disagreed’ with some common EA belief, and it sounds like if you had read Superintelligence earlier, that would have been the case for you as well), that has a big negative effect on our epistemics and ultimately hurts our goal. We won’t be able to check each other’s biases, and we will have a less diverse set of viewpoints.