At the risk of getting off topic from the core question, which interventions do you think are most effective in ensuring we thrive in the future with better cooperative norms? I don’t think it’s clear that this would be EA global health interventions. I would think boosting innovation and improving institutions are more effective. Also, boosting economic growth would probably be better than so-called randomista interventions from a long-term perspective.
I reviewed the piece you linked and, fwiw, strongly disagreed that the case it made was as clear-cut as the authors conclude. In particular, IIRC they observe a limited historical upside from RCT-backed interventions, but didn’t seem to account for the far smaller amount of money that had been put into them; they also gave a number of priors that I didn’t necessarily strongly disagree with, but which seemed like they could be an order of magnitude off in either direction, and the end result was quite sensitive to these.
That’s not to say I think global health interventions are clearly better—just that I think the case is open (but also that, given the much smaller global investment in RCTs, there’s probably more exploratory value in those).
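To illustrate the sensitivity point with a purely hypothetical sketch (not the authors’ actual model): if a cost-effectiveness estimate is the product of a few independent factors, each of which could plausibly be an order of magnitude off, the bottom line can swing across several orders of magnitude even when every central estimate is spot-on.

```python
import math
import random

random.seed(0)

def sample_estimate(n_factors: int = 3) -> float:
    # Hypothetical model: overall EV is a product of n_factors independent
    # inputs. Each input's point estimate is 1.0, but each could be an
    # order of magnitude off in either direction, modelled here as
    # log-uniform on [0.1, 10].
    return math.prod(10 ** random.uniform(-1, 1) for _ in range(n_factors))

samples = sorted(sample_estimate() for _ in range(100_000))
p5, p50, p95 = samples[5_000], samples[50_000], samples[95_000]
print(f"5th pct: {p5:.3g}, median: {p50:.3g}, 95th pct: {p95:.3g}")
# The 5th-95th percentile range spans roughly three orders of magnitude,
# even though every factor's central estimate is exactly 1.
```

The choice of three factors and log-uniform priors is arbitrary; the point is only that multiplicative models inherit and compound the uncertainty of their inputs, which is why the conclusion can flip depending on priors one might not even strongly dispute.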
I could imagine any of the following (among others) turning out to be the best safeguard of the long term:
Health and development interventions
Economic growth work
Differential focus on interplanetary settlement
Preventing ecological collapse
AI safety work
e/acc (their principles taken seriously, not the memes)
AI capabilities work (because of e/acc)
Work on any subset of global catastrophes (including seemingly minor ones like Kessler syndrome, which in itself has the potential to destabilise civilisation)
My best guess is the last one, but I’m wary of any blanket dismissal of any subset of the above.
What is the argument for Health and development interventions being best from a long-term perspective?
I think animal welfare work is underrated from a long-term perspective. There is a risk that we lock in values that don’t give adequate consideration to non-human sentience, which could enable mass suffering to persist for a very long time. E.g. we spread to the stars while factory farming is still widespread and so end up spreading factory farming too. Or we create digital sentience while we still don’t really care about non-human sentience and so end up creating vast amounts of digital suffering. I think working to end factory farming is one way to widen the moral circle and prevent these moral catastrophes from occurring.
I think animal welfare work is underrated from a long-term perspective.
Fwiw I don’t disagree with that, and should have put it on my list. I would nonetheless guess it’s lower EV than global health.
What is the argument for Health and development interventions being best from a long-term perspective?
That’s a pretty large question, since I have to defend it against all alternatives (and, per my previous comment, I would guess some subset of GCR-reduction work is better overall). But some views that make me think it could at least be competitive:
I am highly sceptical of the historical track record of longtermist-focused work in improving the far future and, relatedly, of its incentives and (lack of) feedback loops
I find the classic ‘beware surprising convergence’ class of argument for why we should try to optimise directly for longtermism theoretically unconvincing, since it ignores the greater chance of finding the best longtermism-affecting neartermist intervention, thanks to the tighter neartermist feedback loops
I think, per my discussion here, that prioritising events according to their probability of wiping out the last human is a potentially major miscalculation of long-term expectation
The main mechanism you describe as having longtermist value (expanding the moral circle) is somewhat present in GHD
GHD being much less controversial (and, relatedly, less based on somewhat subjective moral-weight judgements) means it’s an easier norm to spread. So while it might not expand the moral circle as much in the long term, it probably expands it faster in the short term (and we can always switch to something more ambitious once the low-hanging moral-circle fruit has been picked)
Related to the lack of controversy, GHD is much more amenable to empirical study than either longtermist or animal welfare work (the latter having active antagonists who try to hide information and prevent key interventions)
I find the economic arguments for animal welfare moral circle expansion naturally coming from in vitro meat compelling. I don’t think historical examples of sort-of-related things not happening are a strong counterargument. I don’t see what the incentives would be to factory farm meat in a world where you can grow it far more easily.
For the record, I’m not complacent about this and do want animal welfare work to continue. It’s just not what I would prioritise on the margin right now (if social concern for nonhuman animals dropped below a certain level I’d change my mind).
I am somewhat concerned about s-risk futures, but I think most of the risk comes from largely unrelated scenarios: e.g. (1) economic incentives to create something like Hanson’s Age of Em world, where the supermajority of the population is pragmatically driven to subsistence living by an intentionally programmed fear of death (not necessarily this exact scenario, but a range like it); or (2) intentionally programmed hellworlds. I’m really unsure about the sign of animal welfare work’s effect on the probability of such outcomes.
I’m not negative-leaning, so I think that futures in which we thrive and are generally benign, but in which there are small numbers of factory-farm-like experiences, can still be much better on net than a future in which we, e.g., destroy civilisation, are forever confined to low-to-medium-tech civilisations on Earth, and at any given point either exploit animals in factories or simply have no control over the biosphere and leave it to take care of itself.
IIRC John and Hauke’s work suggested GHD work is in fact pretty high-EV for economic growth, but argued that growth-targeting strategies were much higher (the claim I’m sceptical of).
To my knowledge, the EV of economic growth from RCT-derived interventions has been pretty underexplored. I’ve seen a few rough estimates, but nothing resembling a substantial research program (though I could easily have missed one).
My understanding is that Founders Pledge (I think it was them) looked for impactful donation opportunities to boost economic growth and didn’t find anything that both had a good evidence base and was neglected. So I’m a bit skeptical on that front.
Even then, it seems unlikely that more economic growth will lead to better treatment of animals. Right now, countries getting richer is strongly correlated with more factory farming, and innovation and improvements in AI are currently used by companies to increase density on farms. One could argue that more research will automatically lead to alternative proteins replacing everything, but that’s very speculative.