Summary
Given only “neutral” factor-augmenting technology, to reliably get the result that an increase in substitutability between capital and labor lowers wages, we need:
1. decreasing returns to scale, and
2. substitutability great enough that the decreasing returns to scale outweigh the fact that effective capital is now plentiful and may be complementing labor a little bit.
In the extreme, as shown above, decreasing returns to scale + perfect substitutability lowers wages; a numerical sketch of this follows below.
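To make this concrete, here is a minimal numerical sketch. The CES functional form, the decreasing-returns parameter nu = 0.8, and the particular factor quantities are all illustrative assumptions on my part, not taken from the discussion above; the point is only to show the wage (marginal product of labor) falling as the elasticity of substitution sigma rises when effective capital is abundant and returns to scale are decreasing.

```python
# CES production function with a scale parameter nu:
#   Y = A * [alpha*(B_K*K)^rho + (1-alpha)*(B_L*L)^rho]^(nu/rho)
# where rho = 1 - 1/sigma, sigma is the elasticity of substitution
# between capital and labor, and nu < 1 gives decreasing returns to scale.
# B_K and B_L are the ("neutral") factor-augmenting technology terms.

def wage(K, L, sigma, nu=0.8, alpha=0.5, B_K=1.0, B_L=1.0, A=1.0):
    """Marginal product of labor (the competitive wage), dY/dL."""
    rho = 1.0 - 1.0 / sigma  # valid for sigma != 1 (Cobb-Douglas limit excluded)
    inner = alpha * (B_K * K) ** rho + (1.0 - alpha) * (B_L * L) ** rho
    return A * nu * (1.0 - alpha) * B_L ** rho * L ** (rho - 1.0) * inner ** (nu / rho - 1.0)

# Effective capital abundant relative to effective labor (illustrative values).
K, L = 100.0, 1.0
for sigma in [1.5, 3.0, 10.0, 100.0]:
    print(f"sigma = {sigma:6.1f}   wage = {wage(K, L, sigma):.3f}")
# With these parameters the wage falls as sigma rises toward the
# near-perfect-substitutes limit.
```

In the perfect-substitutes limit (rho = 1) with nu < 1, the wage is proportional to (alpha*B_K*K + (1-alpha)*B_L*L)^(nu-1), so abundant effective capital pushes it toward zero, which is the extreme case described above.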
I’ll note that these are almost exactly the same conditions that I outlined in my recent article about the effects of AGI on human wages. It seems we’re in agreement.
Your point that people may not necessarily care about humanity’s genetic legacy in itself is reasonable. However, if people value simulated humans but not generic AIs, the key distinction they are making still seems to be based on species identity rather than on a principle that a utilitarian, looking at things impartially, would recognize as morally significant.
In this context, “species” wouldn’t be defined strictly in terms of genetic inheritance. Instead, it would encompass a slightly broader concept—one that includes both genetic heritage and the faithful functional replication of biologically evolved beings within a digital medium. Nonetheless, the core element of my thesis remains intact: this preference appears rooted in non-utilitarian considerations.
Right now, we lack significant empirical evidence to determine whether AI civilization will ultimately generate more or less value than human civilization from a utilitarian point of view. Since we cannot say which is the case, there is no clear reason to default to delaying AI development over accelerating it. If AIs turn out to generate more moral value, then delaying AI would mean we are actively making a mistake: we would be pushing the future toward a suboptimal state from a utilitarian perspective, by entrenching the human species.
This is because, by assumption, the main effect of delaying AI is to increase the probability that AIs will be aligned with human interests, which is not equivalent to maximizing utilitarian moral value. Conversely, if AIs end up generating less moral value, as many effective altruists currently believe, then delaying AI would indeed be the right call. But since we don’t know which scenario is true, we should acknowledge our uncertainty rather than assume that delaying AI is the obvious default course of action.
Given this uncertainty, the rational approach is to suspend judgment rather than confidently assert that slowing down AI is beneficial. Yet I perceive many EAs as taking the confident approach—acting as if delaying AI is clearly the right decision from a longtermist utilitarian perspective, despite the lack of solid evidence.
Additionally, delaying AI would likely impose significant costs on currently existing humans by slowing technological development, which in my view shifts the default consideration in the opposite direction from what you suggest. This becomes especially relevant for those who do not adhere strictly to total utilitarian longtermism but instead care, at least to some degree, about the well-being of people alive today.