I think animal welfare work is underrated from a long-term perspective.
Fwiw I don’t disagree that, and should have put it on my list. I would nonetheless guess it’s lower EV than global health.
What is the argument for health and development interventions being best from a long-term perspective?
That’s a pretty large question, since I have to defend it against all alternatives (and, per my previous comment, I would guess some subset of GCR reduction work is better overall). But some views that make me think it could at least be competitive:
- I am highly sceptical of the historical track record of longtermist-focused work in improving the far future and, relatedly, of its incentives and (lack of) feedback loops
- I find the classic ‘beware surprising convergence’ class of argument for why we should try to optimise directly for longtermism unconvincing on theoretical grounds, since it ignores the greater chance of finding the best longtermist-affecting neartermist intervention via the tighter neartermist feedback loops
- I think, per my discussion here, that prioritising events according to their probability of wiping out the last human is a potentially major miscalculation of long-term expectation
- the main mechanism you describe as having longtermist value (expanding the moral circle) is somewhat present in GHD
- GHD being much less controversial (and, relatedly, less based on somewhat subjective moral-weight judgements) makes it an easier norm to spread; so while it might not expand the moral circle as much in the long term, it probably expands it faster in the short term (and we can always switch to something more ambitious once the low-hanging moral-circle fruit are picked)
- related to the lack of controversy, GHD is much more amenable to empirical study than either longtermist or animal welfare work (the latter having active antagonists who try to hide information and prevent key interventions)
- I find the economic arguments for animal welfare moral circle expansion naturally coming from in vitro meat compelling. I don’t think historical examples of sort-of-related things not happening are a strong counterargument. I don’t see what the incentives would be to factory farm meat in a world where you can grow it far more easily
- for the record, I’m not complacent about this and do want animal welfare work to continue. It’s just not what I would prioritise on the margin right now (if social concern for nonhuman animals dropped below a certain level I’d change my mind)
- I am somewhat concerned about S-risk futures, but I think most of the risk comes from largely unrelated scenarios: (1) economic incentives to create something like Hanson’s Age of Em world, in which the supermajority of the population is pragmatically driven to subsistence living by an intentionally programmed fear of death (not necessarily this exact scenario, but a range like it); or (2) intentionally programmed hellworlds. I’m really unsure about the sign of animal welfare work’s effect on the probability of such outcomes
- I’m not negative-leaning, so I think futures in which we thrive and are generally benign, but in which there are small numbers of factory-farm-like experiences, can still be much better on net than a future in which we e.g. destroy civilisation, are forever confined to low-to-medium-tech civilisations on Earth, and at any given point either exploit animals in factories or simply have no control over the biosphere and leave it to take care of itself
IIRC John and Hauke’s work suggested GHD work is in fact pretty high EV for economic growth, but argued that growth-targeting strategies were much higher EV (the claim I’m sceptical of).
To my knowledge, the EV of economic growth from RCT-derived interventions has been pretty underexplored. I’ve seen a few rough estimates, but nothing resembling a substantial research program (though I could easily have missed one).