Thanks for your thoughtful comment, & thanks for providing some more explicit/concrete examples of the kind of thing you’d like to see more of — that was really helpful!
(And I hadn’t read that article you linked before, or thought about the “missing middle” as a frame — thanks!)
I think I’m now more confident that I disagree with the argument you’ve laid out here.
The main reason is that I disagree with your claim that we’d be able to do more good by reviewing our methodology & de-emphasising neglectedness.
I basically just think neglectedness is really important for what I’m trying to do when I’m trying to do good.
I think there are really compelling arguments for working on e.g. immigration policy and criminal justice reform that are going to appeal to a much broader audience than the one on this Forum. You don’t need to be, like, a ‘moral weirdo’ to think that it’s unacceptable that we keep humans in near-indefinite imprisonment for the crime of being born in the wrong country.
And I think the core strength of EA is that we’ve got a bunch of ‘moral weirdos,’ who are interested in looking at ways of doing good for which there aren’t clear, emotionally compelling arguments, or that don’t seem good at first. E.g. when improving education, everyone thinks it seems good to provide teachers and textbooks, but fewer people think of removing intestinal parasites. [1]
I recognise this isn’t anywhere close to a watertight defence of the current main focus of longtermists versus the other kinds of interventions you highlighted, but I think it’s the core thing driving why I don’t currently buy the argument you laid out here :)
[1] Putting aside for one second the arguments about whether this actually works, lol! It was just the first example that came to mind of something deeply “unsexy” that EAs talk about.
I see what you mean, and I figured that the neglectedness consideration would be a significant block to my argument within the EA/longtermist framework. But (my current inability to provide a methodological appraisal notwithstanding) I still find myself hesitant to accept that we should not delve into these issues, given what is at stake: the quality of life of potentially billions of future beings. I also reckon that part of the work on good value lock-in will inevitably involve tackling many systemic problems, even if the difference lies in the approach we take. Ultimately, though, my argument hinges on whether we as a community should find it acceptable to ignore these issues while proclaiming that we want to do the most good we can (and for the greatest number of people). I concede that these are indeed very hard problems to solve, as shown by the several players who have been trying to solve them, but this community has some of the smartest and most innovative minds, and I think the challenge might be one worth taking up.