I like all of your suggested actions. Two thoughts:
1) EA is both a set of strong claims about causes + an intellectual framework which can be applied to any cause. One explanation for what’s happening is that we grew a lot recently, and new people find the pre-cooked causes easier to engage with (and the all-important status gradient of the community points firmly towards them). It takes a lot of experience and boldness to investigate and intervene on a new cause.
I suspect you won’t agree with this framing, but: one way of viewing the interplay between these two things is a classic explore/exploit tradeoff.[1] On this view, exploration (new causes, different kinds of people) is for discovering new causes.[2] Once you find something huge, you stop searching until it is fixed.
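To make that dynamic concrete, here is a minimal sketch of the bandit framing I have in mind (epsilon-greedy, with exploration switched off once one arm looks huge). The cause names, payoff numbers, and threshold are all invented for illustration, not claims about the actual causes:

```python
import random

# Hypothetical "causes" with hidden true mean payoffs (made up for illustration).
CAUSES = {"global health": 0.3, "animal welfare": 0.25, "AI": 0.9}
EPSILON = 0.2  # fraction of effort spent exploring
HUGE = 0.8     # once an estimated payoff exceeds this, stop searching

estimates = {cause: 0.0 for cause in CAUSES}
pulls = {cause: 0 for cause in CAUSES}

for step in range(1000):
    best = max(estimates, key=estimates.get)
    # Explore only while nothing "huge" has been found yet.
    exploring = estimates[best] < HUGE and random.random() < EPSILON
    cause = random.choice(list(CAUSES)) if exploring else best
    reward = random.gauss(CAUSES[cause], 0.1)  # noisy observation of true payoff
    pulls[cause] += 1
    # Incremental update of the running mean estimate.
    estimates[cause] += (reward - estimates[cause]) / pulls[cause]

print({c: round(v, 2) for c, v in estimates.items()})
print(pulls)  # nearly all pulls concentrate on the "huge" cause once it is found
```

The point of the sketch is just the stopping rule: under a single objective, discovering one arm with an enormous payoff rationally crowds out further search, which is the tension footnote [1] gestures at.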
IMO our search actually did find something so important, neglected, and maybe tractable (AI) that it’s right to somewhat de-emphasise cause exploration until that situation begins to look better. We found a combination gold mine / natural fission reactor. This cause is even pluralistic, since you can’t, e.g., admire art if there’s no world.
2) But anyway, I agree that we have narrowed too much. See this post, which explains the significance of cause diversity on a maximising view, or my series of obituaries about people who did great things outside the community.
[1] I suspect this because you say that we shouldn’t have a “singular best way to do good”, and the bandit framing usually assumes one objective.

[2] Or new perspectives on causes / new ideas for causes / hidden costs of interventions / etc.
Thanks for the comment—this and the other comments around cause neutrality have given me a lot to think about! My thoughts on cause neutrality (especially around where the pressure points are for me in theory vs. practice) are not fully formed; it’s something I’m planning to focus on a lot in the next few weeks, after which I might have a better response.