When Roodman’s awesome piece on modelling the human trajectory came out, I feel like far too little attention was paid to the catastrophic effects of including finite resources in the model.
I wonder if part of this is an (understandable) reaction to the various fairly unsophisticated anti-growth arguments which float around in environmentalist and/or anticapitalist circles. It would be a mistake to dismiss this as a concern simply because some related arguments are bad. To sustain increasing growth, our productive output per unit resource has to become arbitrarily large (unless space colonisation). It seems not only possible but somewhat likely that this “efficiency” measure will reach a cap some time before space travel meaningfully increases our available resources.
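A quick back-of-the-envelope sketch of the point above (my own illustrative numbers, not from the thread): if total output grows exponentially on a fixed resource base, required output per unit resource also grows exponentially, so it crosses any finite efficiency cap in logarithmic time.

```python
import math

def years_until_efficiency_cap(growth_rate, cap_ratio):
    """Years until required efficiency exceeds cap_ratio times today's,
    assuming output grows at growth_rate per year on a fixed resource base."""
    return math.log(cap_ratio) / math.log(1 + growth_rate)

# Even a generous million-fold cap on efficiency gains runs out in under
# five centuries at a modest 3% annual growth rate.
print(round(years_until_efficiency_cap(0.03, 1e6)))  # -> 467
```

The cap ratio and growth rate are arbitrary; the point is only that the answer scales logarithmically in the cap, so even wildly optimistic caps don't buy much time.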
I’d like to see more sophisticated thought on this. As a (very brief) sketch of one failure mode:
- Sub-AGI but still powerful AI ends up mostly automating the decision-making of several large companies, which with their competitive advantage then obtain and use huge amounts of resources.
- They notice each other, and compete to grab those remaining resources as quickly as possible.
- Resources gone, very bad.
(This is along the same lines as “AGI acquires paperclips”; it’s not meant to be a fully fleshed-out example, merely an illustrative story.)
Just flagging that space doesn’t solve anything—it just pushes back resource constraints a bit. Given speed-of-light constraints, we can only increase resources via space travel ~quadratically with time, which won’t keep up with either exponential or hyperbolic growth.
Why not cubically? Because the Milky Way is flat-ish?
Volume of a sphere with radius increasing at constant rate has a quadratic rate of change.
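To spell out the arithmetic behind this exchange: with an expansion frontier of radius r = c·t, reachable volume grows cubically in t, so its rate of change grows quadratically, and any exponential still overtakes the cubic total. A minimal sketch (constants dropped, units arbitrary):

```python
def volume(t):
    """Reachable volume with frontier radius growing linearly in t,
    ignoring constant factors: grows as t**3, so dV/dt grows as t**2."""
    return t ** 3

def exponential(t):
    """Output doubling every time step."""
    return 2 ** t

# Find the step after which exponential growth permanently outruns the
# cubic frontier (starting past the trivial early crossing at t=1).
t = 2
while exponential(t) <= volume(t):
    t += 1
print(t)  # -> 10: from step 10 on, 2**t exceeds t**3 for good
```

The doubling time is arbitrary; changing it shifts the crossing point but not the conclusion, since any exponential eventually dominates any polynomial.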
Ah yeah. Damn, I could have sworn I did the math before on this (for this exact question) but somehow forgot the result.😅
This is why you should have done physics ;)
Thanks, this is useful to flag. As it happens I think the “hard cap” will probably be an issue first, but it’s definitely noteworthy that even if we avoid this there’s still a softer cap which has the same effect on efficiency in the long run.
And yes, in my view wasting or misusing resources due to competitive pressure is one of the key failure modes to be mindful of in the context of AI alignment and AI strategy. FWIW, my sense is that this belief is held by many people in the field, and that a fair amount of thought has been going into it. (Though as with most issues in this space I think we don’t have a “definite solution” yet.)
Yes, I think it is very likely that growth eventually needs to become polynomial rather than exponential or hyperbolic. The only two defeaters I can think of are (i) we are fundamentally wrong about physics or (ii) some weird theory of value that assigns exponentially growing value to sub-exponential growth of resources.
This post contains some relevant links (though note I disagree with the post in several places, including its bottom line/emphasis).