It’s obvious that, say, the values of society may make a huge difference to the far future if (as seems likely) early AI uses goal preservation. (Even if the first version of AI doesn’t, it should soon move in that direction.)
Depending on how one defines “x-risk”, many ways of shaping AI takeoffs are not work on extinction risk per se but concern the nature of the post-human world that emerges. For instance, whether the takeoff is unipolar or multipolar, what kind of value loading is used, and how political power is divided. These can all have huge impacts on the outcome without changing the fact of whether or not the galaxy gets colonized.
I agree. I’d be clearer if I said that I think the only credible trajectory changes address the circumstances of catastrophically risky situations, e.g. the period when AI takes off, and are managed by organisations that think about x-risk.
I strongly disagree. :)