As Nick Bostrom says in “Astronomical Waste,” what matters is the safety / wisdom with which we approach the future, not the speed. A lot of arguing needs to be done to say that speeding human development in the short run improves the safety of the future. I personally expect that many interventions are much better than human development for the far future, and short-term helping of humans may not be a very good proxy at all.
I agree that short-term helping of animals is also not a great proxy of long-term helping of animals, though the two may correlate because of memetic side effects. Memes might help make human development good for the far future too, though probably the effect is less than for animals because it’s already widely accepted that human suffering matters.
Thanks Brian. The argument was supposed to be that short-term human welfare effects are a reasonable proxy for long-term effects (after multiplying by a factor whose size and sign I don’t take a position on, although I did point to people claiming it is positive), and that it’s harder to find such a corrective factor for comparing different kinds of animal welfare interventions.
Your and Peter’s comments did persuade me to change the title of the post, which was slightly misleading in focusing attention on welfare.
Fair enough about citing others who claim it’s a positive correlation. :)
The idea that the quality of the far future is strongly influenced in a compounding fashion by human empowerment strikes me as a rather specific and controversial model. From the outside, it looks like anchoring to human-poverty charities. If I were to come up with a list of variables to push on that I thought would causally improve the far future, third-world poverty or economic growth probably wouldn’t make the top 10.
Of course, other variables that I would care about (e.g., degree of international cooperation, good governance, philosophical sophistication, quality of discourse, empathy, etc.) might happen to correlate well with poverty reduction or growth, but causation matters. Even if the welfare of elderly patients is correlated with a good far future, working to improve the welfare of elderly people probably isn’t the best place to push.
The question of where to push to make the far future better seems to me inadequately discussed, with different people assuming their own particular views. (Hence part of the importance of GPP. :) )
Thanks for the post. :)
It’s far from obvious that short-term human development is a good metric for far-future trajectories. Indeed, some believe the opposite. I’m personally extremely ambivalent on the matter ( http://foundational-research.org/publications/differential-intellectual-progress-as-a-positive-sum-project/#economic-growth ).