Thanks! Sorry to hear the epistemics stuff was so frustrating for you and caused you to leave EA.
Yes, very plausible that the example interventions don't really get to the core of the issue. I didn't spend long creating those, and they're meant more as examples to spark ideas than as confident recommendations on the best interventions. Perhaps I should have flagged this in the post.
Re “centralized control and disbursion of funds”: I agree that my example ideas in the epistemics section wouldn’t help with this much. Would the “funding diversification” suggestions below help here?
And if you're up for elaborating, I'd be intrigued to hear why you don't think the sorts of "What could be done?" suggestions would help with the other two problems you highlight. (They're not optimised for addressing those two specific concerns, of course, but insofar as they all relate back to bad/weird epistemic practices, things like epistemics training programmes might help?) No worries if you don't want to or don't have time, though.
Thanks again!
Yes, I imagine funding diversification would help, though I’m not sure if it would go far enough to make EA a good career bet.
My own solution is to work myself up to the point where I'm financially independent from EA, so my agency isn't compromised by someone else's model of what works.
And you're right that better epistemics might help address the other two problems, but only insofar as the interventions are targeted at "s1 epistemics", i.e. the stuff that doesn't necessarily follow from conscious deliberation. Most of the techniques in this category would fall under the banner of spirituality (the pragmatic type, without the metaphysics). This is something the rationalist project hasn't addressed sufficiently, and I think there's a lot of unexplored potential there.