Yes, I imagine funding diversification would help, though I’m not sure if it would go far enough to make EA a good career bet.
My own solution is to work my way up to the point where I'm financially independent of EA, so my agency isn't compromised by someone else's model of what works.
And you're right that better epistemics might help address the other two problems, but only insofar as the interventions target "S1 epistemics", i.e. the stuff that doesn't necessarily follow from conscious deliberation. Most of the techniques in this category would fall under the banner of spirituality (the pragmatic kind, without the metaphysics). This is something the rationalist project has not addressed sufficiently, and I think there's a lot of unexplored potential there.
Excellent idea. This would also incentivize writing an application that is generally convincing, instead of trying to hack the preferences of one specific fund.