I agree that both possibilities are very risky. Interesting re belief in hell being a key factor, I wasn't thinking about that.
Even if a future ASI would be able to very efficiently manage today's economy in a fully centralised way, possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market rather than have all optimisation centrally planned? Seems unclear to me one way or the other, and I assume we won't be able to know with high confidence in advance what economic model will be most efficient post-ASI. But maybe that just reflects my economic ignorance and others are justifiably confident.
It seems like the whole AI x-risk community has latched onto "align AI with human values/intent" as the solution, with few people thinking even a few steps ahead to "what if we succeeded"? I have a post related to this if you're interested.
possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market rather than have all optimisation centrally planned
I think there will be distributed information processing, but each distributed node/agent will be a copy of the central AGI (or otherwise aligned to it or sharing its values), because this is what's economically most efficient, minimizing waste from misaligned incentives and so on. So there won't be the kind of value pluralism that we see today.
I assume we won't be able to know with high confidence in advance what economic model will be most efficient post-ASI.
There are probably a lot of other surprises that we can't foresee today. I'm mostly claiming that post-AGI economics and governance probably won't look very similar to today's.