The ethical schools of thought I’m most aligned with—longtermism, sentientism, effective altruism, and utilitarianism—are far more prominent in the West (though still very niche).
I want to point out that the ethical schools of thought that you’re (probably) most anti-aligned with (e.g., that certain behaviors and even thoughts are deserving of eternal divine punishment) are also far more prominent in the West, proportionately even more so than the ones you’re aligned with.
Also the Western model of governance may not last into the post-AGI era regardless of where the transition starts. Aside from the concentration risk mentioned in the linked post, driven by post-AGI economics, I think different sub-cultures in the West breaking off into AI-powered autarkies or space colonies with vast computing power, governed by their own rules, is also a very scary possibility.
I’m pretty torn and may actually slightly prefer a CCP-dominated AI future (despite my family’s history with the CCP). But more importantly, I think both possibilities are incredibly risky if the AI transition occurs in the near future.
I agree that both possibilities are very risky. Interesting re belief in hell being a key factor, I wasn’t thinking about that.
Even if a future ASI would be able to very efficiently manage today’s economy in a fully centralised way, possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market rather than have all optimisation centrally planned? Seems unclear to me one way or the other, and I assume we won’t be able to know with high confidence in advance what economic model will be most efficient post-ASI. But maybe that just reflects my economic ignorance and others are justifiably confident.
Interesting re belief in hell being a key factor, I wasn’t thinking about that.
It seems like the whole AI x-risk community has latched onto “align AI with human values/intent” as the solution, with few people thinking even a few steps ahead to “what if we succeeded”? I have a post related to this if you’re interested.
possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market rather than have all optimisation centrally planned
I think there will be distributed information processing, but each distributed node/agent will be a copy of the central AGI (or otherwise aligned with it or sharing its values), because this is what’s economically most efficient, minimizes waste from misaligned incentives, and so on. So there won’t be the kind of value pluralism that we see today.
I assume we won’t be able to know with high confidence in advance what economic model will be most efficient post-ASI.
There are probably a lot of other surprises that we can’t foresee today. I’m mostly claiming that post-AGI economics and governance probably won’t look very similar to today’s.