Interesting!
I think my worry is people who don’t think they need advice about what the future should look like. When I imagine them making the bad decision despite having lots of time to consult superintelligent AIs, I imagine them just not being that interested in making the “right” decision, and therefore their advisors not being proactive in telling them things that are only relevant to making the “right” decision.
That is, assuming the AIs are intent aligned, they’ll only help you in the ways you want to be helped:
Thoughtful people might realise the importance of getting the decision right, and might ask “please help me to get this decision right” in a way that ends up with the advisors pointing out that AI welfare matters and that the decision makers will want to take it into account.
But thoughtless or hubristic people might not ask for help in that way. They might just ask for help in implementing their existing ideas, without being interested in making the “right” decision or in what they would endorse on reflection.
I do hope that people won’t be so thoughtless as to impose their vision of the future without seeking advice, but I’m not confident.
Briefly + roughly (not precise):
At some point we’ll send out lightspeed probes to tile the universe with some flavor of computronium. The key question (for scope-sensitive altruists) is what that computronium will compute. Will an unwise agent or incoherent egregore answer that question thoughtlessly? I intuit no.
I can’t easily make this intuition legible. (So I likely won’t reply to messages about this.)