Some possibilities to consider on a >100-year horizon that could undermine the reliability of any long-term positive effects from helping the global poor:
Marginal human labour will (quite probably, though not with overwhelming probability) have little, no, or even negative productive value, because work will be fully or nearly fully automated. The bottlenecks to production will just be the bottlenecks for automation, e.g. compute, natural resources, energy, technology, physical limits and the preferences of the agents who decide how automation is used. Humans will compete over the small share of useful work left to them, and may even push to do work that would in fact be counterproductive and better automated. So, with substantial probability, what we do today doesn't really affect long-run productive capacity (and may even reduce it). This may be especially true for people, and their descendants, who are less likely to advance the technological frontier over the next 100 years, like the global poor.
Biological humans (and our largely biological or biological-like descendants) may compete over resources with artificial moral patients who can generate welfare more efficiently per unit of resources. Saving human lives today, and so increasing the long-run biological(-like) population, could then mean fewer of these more welfare-efficient moral patients. And that could be bad (a toy sketch of the tradeoff follows the next paragraph).
OTOH, maybe with more humans we exploit more resources than we otherwise would, without competing, because resources won't be exploited as aggressively as possible anyway, or as aggressively as future moral agents would find acceptable, since those agents may not find full exploitation useful enough. So this seems speculative either way.
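To make the tradeoff concrete, here's a minimal sketch in Python, with all the numbers hypothetical: under a fixed, contested resource pool, shifting resources toward less welfare-efficient biological patients lowers total welfare, but the tradeoff disappears if the extra resources humans use would otherwise go unexploited.

```python
# A toy illustration (all numbers hypothetical) of the welfare-efficiency
# worry above: with a fixed, contested resource pool, every unit allocated
# to biological humans instead of more welfare-efficient artificial moral
# patients lowers total welfare.

RESOURCES = 100.0                   # total units of contested resources
WELFARE_PER_UNIT_HUMAN = 1.0        # hypothetical efficiency for humans
WELFARE_PER_UNIT_ARTIFICIAL = 5.0   # hypothetical efficiency for artificial patients

def total_welfare(units_to_humans: float) -> float:
    """Total welfare when the rest of the pool goes to artificial patients."""
    units_to_artificial = RESOURCES - units_to_humans
    return (units_to_humans * WELFARE_PER_UNIT_HUMAN
            + units_to_artificial * WELFARE_PER_UNIT_ARTIFICIAL)

print(total_welfare(10))  # 460.0
print(total_welfare(40))  # 340.0 -- more humans, less total welfare
# If the extra resources humans use would otherwise sit unexploited (the
# OTOH case above), the artificial patients' share is unchanged and the
# tradeoff vanishes.
```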
How much artificial moral patients matter is partly a normative/moral issue, not just an empirical one, so you could split your resources across views according to the relative weight they give artificial moral patients. But those views could come to opposite conclusions on this issue, and end up needing to cooperate on something else to avoid acting against one another.
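One simple way to operationalize that kind of split, sketched below with hypothetical credences and values (credence-weighted expected choiceworthiness is just one approach; portfolio splitting or bargaining are alternatives): the two views rank the partisan options in opposite orders, but the split agent ends up acting on a third option both can live with.

```python
# A minimal sketch (credences and values hypothetical) of splitting across
# moral views: each view ranks the partisan options in opposite orders, but
# a credence-weighted agent ends up acting on a compromise option.

credences = {"artificial_patients_matter": 0.5, "only_biological_matter": 0.5}

# Hypothetical choiceworthiness of each option under each view.
values = {
    "favor_artificial_patients": {"artificial_patients_matter": 10, "only_biological_matter": -8},
    "favor_biological_humans":   {"artificial_patients_matter": -8, "only_biological_matter": 10},
    "cooperate_on_third_option": {"artificial_patients_matter": 6,  "only_biological_matter": 6},
}

def credence_weighted(option: str) -> float:
    """Expected choiceworthiness of an option across views, weighted by credence."""
    return sum(credences[view] * values[option][view] for view in credences)

for option in values:
    print(option, credence_weighted(option))
# cooperate_on_third_option scores 6.0, beating either partisan option's 1.0,
# so the views do better cooperating than acting against one another.
```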
Thanks, yeah, these are important possibilities to consider!