To bank on that we would need to have established at least some solid theoretical grounds for believing it’s possible—do you know of any? I think in fact we are closer to having the opposite: solid theoretical grounds for believing it’s impossible!
I think we can thread the needle by creating strongly non-superintelligent AI systems which can be robustly aligned or controlled. And I agree that we don’t know how to do that at present, but we can very likely get there, even if the proofs of unalignable ASI hold up.
What level of intelligence are you imagining such a system as being at? Some percentile on the scale of top performing humans? Somewhat above the most intelligent humans?
I think we could do what is required for colonizing the galaxy with systems that are at or under the level of 90th percentile humans, which is the issue raised for the concern that otherwise we “lose out on almost all value because we won’t have the enormous digital workforce needed to settle the stars.”
Agree. But I’m sceptical that we could robustly align or control a large population of such AIs (and how would we cap the population?), especially considering the speed advantage they are likely to have.