Hi Bella,

I’m glad to hear that, and thanks for your comment; it’s been very helpful!

I think your summary is pretty accurate, although I ought to clarify that my position concerns things I’d like to see more of, rather than less of. I really make no claims about what I want to see less of. For example, I don’t think that the work being done on x-risks is overemphasised; rather, I am arguing that, as longtermists, we can begin to tackle more problems, given the huge potential benefits of doing so in the far future.
A bit of a warning: I haven’t spent as much time thinking about the solution space but intend to do more of this going forward, so take my responses as preliminary thoughts.
When writing this post, I did think that most of these claims seemed reasonable enough, but I worried that I could be missing a crucial fact that would show these views as not fitting into longtermist philosophy or practice, e.g., an informal consensus that longtermism is solely focused on ensuring that future people exist. This was also a concern because, other than passing mentions of good value lock-in and trajectory changes in the literature, I haven’t seen examples of work pursued for these reasons.
Regardless, I think that, theoretically, longtermism could espouse these views. However, my reflections really come from the actual work being done in the longtermist space, which I’ll call here longtermism in practice (by which I mean the kinds of work I see being supported by longtermist organisations).

My solution space at the moment is quite significantly influenced by the ideas put forward by Iason Gabriel and Brian McElwee when discussing the missing middle problem. In brief, they attempt to show that EA (in the general sense, so as to include longtermism) tends to focus on low-value/high-confidence interventions (picking the low-hanging fruit), such as vaccinations, on the one hand, and high-value/low-confidence interventions (x-risk and GCR work) on the other. They point out that interventions in the medium-value/medium-confidence band could potentially help us tackle more problems, including ones that would contribute to systemic change. Some examples would include the immigration policy and criminal justice reform work supported by Open Phil, which seem to be the only interventions pursued in EA under this band (although perhaps some animal welfare work could fall here too).

Such interventions, I argue, should be more actively sought out and funded by longtermists because of their potential not only to improve the lives of current or near-future generations but also to set a positive trajectory for their descendants. I think that the availability of funding for such medium-value/medium-confidence interventions (for example, in tackling poverty or racial and gender inequality) would incentivise the best minds to explore these issues more and provide more robust proposals for interventions. This is also where my call to make it easier to make the case for interventions tackling systemic problems would apply to both EA and longtermism: work here will necessarily have numerous benefits for current generations, but it is keenly focused on the long view, where it will have the highest returns.
On your last concern, I acknowledge that for many of the problems I specifically highlighted, there is a high chance that most of their most promising interventions are not neglected at all, which is part of the reason I think we might gain more if we also review the methodology we use. Setting that particular point aside, I think that the importance of these interventions (how many lives they may potentially positively affect, and for how long) could be argued in place of the emphasis on neglectedness. There might also be other interventions in these areas with high returns that are less popularised or explored. Regardless of the robustness of my rough suggestions here, I think encouraging more work in this space makes it easier to hear of and support more (and better quality) proposed interventions on these issues.
Thanks for your thoughtful comment, & thanks for providing some more explicit/concrete examples of the kind of thing you’d like to see more of — that was really helpful!
(And I hadn’t read that article you linked before, or thought about the “missing middle” as a frame — thanks!)
I think I’m now more confident that I disagree with the argument you’ve laid out here.
The main reason is that I disagree with your claim that we’d be able to do more good by reviewing our methodology & de-emphasising neglectedness.
I basically just think neglectedness is really important for what I’m trying to do when I’m trying to do good.
I think there are really compelling arguments for working on e.g. immigration policy and criminal justice reform that are going to appeal to a much broader audience than the one on this Forum. You don’t need to be, like, a ‘moral weirdo’ to think it’s unacceptable that we keep humans in near-indefinite imprisonment for the crime of being born in the wrong country.
And I think the core strength of EA is that we’ve got a bunch of ‘moral weirdos,’ who are interested in looking at ways of doing good for which there aren’t clear, emotionally compelling arguments, or that don’t seem good at first. E.g. when improving education, everyone thinks it seems good to provide teachers and textbooks, but fewer people think of removing intestinal parasites. [1]
I recognise this isn’t anywhere close to a watertight defence of the current main focus of longtermists versus the other kinds of interventions you highlighted, but I think it’s the core thing driving why I don’t currently buy the argument you laid out here :)
[1] Putting aside for one second the arguments about whether this actually works, lol! It was just the first example that came to mind of something deeply “unsexy” that EAs talk about.
I see what you mean, and I figured that the neglectedness consideration would be a significant block to my argument within the EA/longtermist framework. But (my current inability to provide a methodological appraisal notwithstanding) I still find myself hesitant to accept that we should not delve into these issues, given what is at stake: the quality of life of potentially billions of future beings. I also reckon that part of the work under good value lock-in will inevitably involve working on many systemic problems, even if the difference lies in the approach we take. Ultimately, though, my argument hinges on whether we, as a community, should find it acceptable to ignore these issues while proclaiming that we want to do the most good we can (and for the greatest number of people). I concede that these are indeed very hard problems to solve, as shown by the several players who have been trying to solve them, but this community has some of the smartest and most innovative minds, and I think the challenge might be one worth taking up.