Thanks a lot for this comment! I think delving into the topic of epistemic learned helplessness will help me learn how to form proper inside views, which is something I’ve been struggling with.
I’m very worried about this ceasing to be the case.
Are you worried just because it would be really bad if EA in the future (say, in 5 years) were much worse at coming to correct conclusions, or also because you think that outcome is likely?
I’m not sure how likely this is, but probably over 10%? I’ve heard that social movements generally get unwieldier as they go mainstream. Also, some people say this has already happened to EA, and they now identify as rationalists or longtermists instead. It’s hard to form a reference class because I don’t know how much EA benefits from advantages like better organization and, at least for now, a better culture.
To form proper inside views, I’d also recommend reading this post, which (among other things) sketches out a method for healthy deference:
I think that something like this might be a good metaphor for how you should relate to doing good in the world, or to questions like “is it good to work on AI safety”. You try to write down the structure of an argument, and then fill out the steps of the argument, breaking them into more and more fine-grained assumptions. I am enthusiastic about people knowing where the sorrys are—that is, knowing what assumptions about the world they’re making. Once you’ve written down in your argument “I believe this because Nick Bostrom says so”, you’re perfectly free to continue believing the same things as before, but at least now you’ll know more precisely what kinds of external information could change your mind.
The key event which I think does good here is when you realize that you were making an assumption you hadn’t noticed, or when you realize that you’d thought you understood the argument for X but actually can’t persuade yourself of X given only the arguments you already have.
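To make the metaphor concrete, here’s a minimal Lean sketch of what “knowing where the sorrys are” could look like. The proposition names and the argument structure are my own hypothetical illustration, not anything from the post itself:

```lean
-- An informal argument rendered as a Lean 4 proof skeleton.
-- Every `sorry` (and every axiom) marks an assumption I haven't
-- justified for myself. The names below are illustrative stand-ins.

axiom AGISoon : Prop
axiom AlignmentHard : Prop
axiom SafetyWorkHelps : Prop
axiom ShouldWorkOnSafety : Prop

-- Steps I can't yet argue for from first principles:
theorem agi_soon : AGISoon := by
  sorry -- "I believe this because Nick Bostrom says so."

theorem alignment_hard : AlignmentHard := by
  sorry -- deferring to researchers I trust

-- Steps whose reasoning I do think I understand,
-- declared here as axioms for brevity:
axiom combine : AGISoon → AlignmentHard → SafetyWorkHelps
axiom act : SafetyWorkHelps → ShouldWorkOnSafety

-- The conclusion goes through, but `#print axioms conclusion`
-- lists exactly which unproven claims it rests on, i.e. which
-- external information could change my mind.
theorem conclusion : ShouldWorkOnSafety :=
  act (combine agi_soon alignment_hard)
```

The point of the exercise isn’t to “prove” the conclusion; it’s that the checker forces you to notice every place you wrote `sorry` instead of an argument.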