Why don’t models of intelligence explosion assume diminishing marginal returns? In the model below, what are the arguments for assuming a constant ω rather than diminishing marginal returns (e.g., ω_t → −∞)? With diminishing returns, an AI can only improve itself at a diminishing rate, so we don’t get a singularity.
https://www.nber.org/papers/w23928.pdf
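To make the question concrete, here is a minimal sketch of why the value of ω matters, assuming a stylized knowledge-production equation of the form İ = I^(1+ω). This is an illustration only, not necessarily the exact specification in the linked paper:

$$
\dot{I}_t = I_t^{\,1+\omega}
\quad\Longrightarrow\quad
I_t = \bigl(I_0^{-\omega} - \omega t\bigr)^{-1/\omega}
\qquad (\omega \neq 0).
$$

Under this toy dynamic, a constant ω > 0 makes I_t diverge at the finite time t* = I_0^{−ω}/ω (a finite-time singularity); ω = 0 gives ordinary exponential growth; a constant ω < 0 gives only polynomial growth, I_t = (I_0^{|ω|} + |ω| t)^{1/|ω|}; and ω_t → −∞ makes improvement stall entirely. So the constant-and-positive ω assumption is doing essentially all the work in generating the explosion, which is exactly what the question is probing.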