I think I had the same thought as Ozzie, if I'm interpreting his comment correctly. My thought was that this all seems to make sense, but that, from the model itself, I expected the second-last sentence to be something like:
Given the above we'd expect that at first competitiveness and the accomplishment of humanity's ultimate values are both improved, but eventually they come apart and the trajectory skates along the Pareto frontier (roughly speaking, that happens when we are at maximum technology or technological change becomes sufficiently slow), either towards maximizing competitiveness or towards maximizing humanity's ultimate values (though it doesn't necessarily reach either extreme, and may skate back and forth).
And then that'd seem to lead to a suggestion like "Therefore, if the world is at this Pareto frontier or expected to reach it, a key task altruists should work on may be figuring out ways to either expand the frontier or increase the chances that, upon reaching it, we skate towards what we value rather than towards competitiveness."
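To make the "skating along the frontier" picture concrete, here's a minimal toy sketch of how I'm picturing it (my own illustration with a made-up quarter-circle frontier and axis names; nothing here comes from the post's actual model): inside the frontier both coordinates can rise together, but once on it, any step toward one axis costs something on the other.

```python
# Toy sketch (my own illustration, not the post's model): a feasible set of
# (competitiveness, values) pairs bounded by a quarter-circle frontier. Inside
# the frontier both coordinates can rise together; on the frontier, any further
# gain on one axis forces a loss on the other, so a trajectory can only "skate".
import numpy as np

RADIUS = 1.0  # made-up scale of the frontier

def on_frontier(point, tol=1e-6):
    """True if a (competitiveness, values) point sits on the toy frontier."""
    return abs(np.linalg.norm(point) - RADIUS) < tol and np.all(point >= 0)

def skate(point, toward_values, step=0.01):
    """Slide a frontier point a small step along the frontier, toward the
    values axis (angle pi/2) or the competitiveness axis (angle 0)."""
    angle = np.arctan2(point[1], point[0])
    angle += step if toward_values else -step
    angle = np.clip(angle, 0.0, np.pi / 2)  # can't go past either extreme
    return RADIUS * np.array([np.cos(angle), np.sin(angle)])

p = RADIUS * np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # balanced start
for _ in range(50):
    p = skate(p, toward_values=False)  # drift toward competitiveness
print(on_frontier(p), p)  # still on the frontier; the values coordinate has fallen
```

(The circular shape is arbitrary; the point is just that on the frontier the two axes trade off directly, which is the tradeoff I gesture at below.)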
That is, I don't see how the model itself indicates that, upon reaching the frontier, we'll necessarily move towards greater competitiveness rather than towards humanity's values. Is that idea based on other considerations from outside of the model? E.g., that self-interest seems more common than altruism, or something like Robin Hanson's suggestion that evolutionary pressures will tend to favour maximum competitiveness (I think I heard Hanson discuss that on a podcast, but here's a somewhat relevant post).
(And I think your reply is mainly highlighting that, at the frontier, there'd be a tradeoff between competitiveness and humanity's values, right? Rather than giving a reason why the competitiveness option would necessarily be favoured when we do face that tradeoff?)
Yes, the model in itself doesn't say that we'll tend towards competitiveness. That comes from the definition of competitiveness I'm using here, and is similar to Robin Hanson's suggestion. "Competitiveness" as used here just refers to the statistical tendency of systems to evolve in certain ways; it's similar to the statement that entropy tends to increase. Some of those ways are aligned with our values and others are not. In making the axes orthogonal I was using the (probably true) assumption that most ways a system can evolve are not aligned with our values.
(With the reply I was trying to point in the direction of this increasing-entropy-like definition.)
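To illustrate the counting intuition behind that definition, here's a minimal sketch (the dimensionality and sample count are made-up toy numbers, and the "values direction" is an arbitrary axis of my own choosing): if the ways a system can evolve are modelled as random directions in a high-dimensional state space, almost none of them point appreciably toward any fixed "values" direction.

```python
# Toy counting argument: random directions in a high-dimensional state space are
# almost always nearly orthogonal to any fixed "our values" direction, which is
# the entropy-like sense in which most ways of evolving are not value-aligned.
import numpy as np

rng = np.random.default_rng(0)
dim = 1_000          # made-up dimensionality of the toy state space
n_samples = 10_000   # made-up number of candidate "ways the system could evolve"

values_direction = np.zeros(dim)
values_direction[0] = 1.0  # an arbitrary fixed "values" axis

directions = rng.normal(size=(n_samples, dim))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)  # unit vectors

alignment = directions @ values_direction  # cosine similarity with the values axis
print(f"mean |alignment|: {np.abs(alignment).mean():.3f}")              # roughly 0.025
print(f"fraction with alignment > 0.1: {(alignment > 0.1).mean():.4f}")  # well under 1%
```

The claim is just that "competitiveness" tracks where this overwhelming majority of directions pushes, much as entropy tracks the overwhelming majority of microstates.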