I like the first bit, but am a bit confused on the Moloch bit. Why exactly would we expect that it “maximizes competitiveness”?

The sense in which we’d expect it to maximize competitiveness is this: what spreads, spreads; what lives, lives; what is able to grow, grows; what is stable, is stable… and not all of this is aligned with humanity’s ultimate values. The methods that sometimes maximize competitiveness (like not internalizing external costs, wiping out competitors, all work and no play) often don’t maximize the achievement of our values. What is competitive in this sense does, however, depend on the circumstances, and hopefully we can align it better. I hope this clarifies.
I think I had the same thought as Ozzie, if I’m interpreting his comment correctly. My thought was that this all seems to make sense, but that, from the model itself, I expected the second last sentence to be something like:
Given the above, we’d expect that at first competitiveness and the accomplishment of humanity’s ultimate values both improve, but that eventually they come apart and the trajectory skates along the Pareto frontier (roughly speaking, that happens when we are at maximum technology or when technological change becomes sufficiently slow), moving either towards maximizing competitiveness or towards maximizing humanity’s ultimate values (though it doesn’t necessarily reach either extreme, and may skate back and forth).
And then that’d seem to lead to a suggestion like “Therefore, if the world is at this Pareto frontier or expected to reach it, a key task altruists should work on may be figuring out ways to either expand the frontier or increase the chances that, upon reaching it, we skate towards what we value rather than towards competitiveness.”
That is, I don’t see how the model itself indicates that, upon reaching the frontier, we’ll necessarily move towards greater competitiveness rather than towards humanity’s values. Is that idea based on other considerations from outside the model? E.g., that self-interest seems more common than altruism, or something like Robin Hanson’s suggestion that evolutionary pressures will tend to favour maximum competitiveness (I think I heard Hanson discuss that on a podcast, but here’s a somewhat relevant post).
(And I think your reply is mainly highlighting that, at the frontier, there’d be a tradeoff between competitiveness and humanity’s values, right? Rather than giving a reason why the competitiveness option would necessarily be favoured when we do face that tradeoff?)
Yes, the model in itself doesn’t say that we’ll tend towards competitiveness. That comes from the definition of competitiveness I’m using here, and it is similar to Robin Hanson’s suggestion. “Competitiveness” as used here just refers to the statistical tendency of systems to evolve in certain ways; in that respect it is similar to the statement that entropy tends to increase. Some of those ways are aligned with our values and others are not. In making the axes orthogonal I was relying on the (probably true) assumption that most of the ways a system can evolve are not aligned with our values.
(With the reply I was trying to point in the direction of this increasing-entropy-like definition.)
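To make the orthogonality assumption a bit more concrete, here is a minimal sketch (my own illustration rather than anything from the model; the dimension and sample count are arbitrary assumptions). It uses the standard fact that, in a high-dimensional space, a randomly chosen direction is almost always nearly orthogonal to any fixed direction, which is the sense in which “most ways of system evolution” would do very little for a fixed “values” axis.

```python
# Minimal sketch (assumed, illustrative numbers): in a high-dimensional space of
# "directions a system could evolve in", a random direction is almost always
# nearly orthogonal to any fixed direction we might label "humanity's values".
import numpy as np

rng = np.random.default_rng(0)
dim = 1_000         # hypothetical number of independent features that could change
n_samples = 10_000  # number of random evolution directions to sample

# A fixed unit vector standing in for "what we value".
values_direction = rng.normal(size=dim)
values_direction /= np.linalg.norm(values_direction)

# Random unit vectors standing in for "ways the system could evolve".
directions = rng.normal(size=(n_samples, dim))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Cosine similarity of each random direction with the values direction.
alignment = directions @ values_direction

print(f"mean |alignment|: {np.abs(alignment).mean():.3f}")
print(f"share with |alignment| > 0.1: {(np.abs(alignment) > 0.1).mean():.4f}")
```

With these made-up numbers the typical absolute alignment is only a couple of percent, and almost no sampled direction exceeds 0.1; that is the intuition behind treating competitiveness and our values as roughly orthogonal axes, though the real claim is about which directions tend to get selected for, not about random sampling.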