I see a lot of talk about digital people making copies of themselves, but wouldn't a dominant strategy (presuming more compute = more intelligence/ability to multitask) be to just add compute to any given actor? In general, why copy people when you can make one actor, whom you know to be relatively aligned, much more powerful? It seems likely, though not totally clear, that having one mind with 1000 compute units would be strictly better for seeking power than 100 minds with 10 compute units each.
For example, companies might compete with one another to have the smartest/most capable CEO by giving them more compute. The marginal benefit of more intelligence might be really high, such that Tim Cook being 1% more intelligent than Mark Zuckerberg could mean Apple becomes dominant. This would trigger an intense race for compute. The same should go for governments. At some point this could produce a multipolar superintelligence scenario, but with human minds.
I think this depends on empirical questions about the returns to more compute for a single mind. If the mind is closely based on a human brain, it might be pretty hard to get much out of more compute, so duplication might have better returns. If the mind is not based on a human brain, it seems hard to say how this shakes out.
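One way to make the dependence on returns concrete is a toy power-law model (the exponent alpha, the compute units, and the assumption that capability simply adds across copies are all made up for illustration, not empirical claims): if a single mind's capability scales as compute^alpha, then 100 minds with 10 units each deliver 100 · 10^alpha = 1000^alpha · 100^(1−alpha), which beats one mind with 1000 units exactly when alpha < 1.

```python
# Toy model: does concentrating compute in one mind beat splitting it
# across copies? Assumes capability = compute ** alpha and that capability
# adds linearly across copies -- both assumptions are purely illustrative.

def total_capability(total_compute: float, n_minds: int, alpha: float) -> float:
    """Summed capability of n_minds splitting total_compute equally."""
    per_mind = total_compute / n_minds
    return n_minds * per_mind ** alpha

for alpha in (0.5, 1.0, 1.5):
    one_big = total_capability(1000, 1, alpha)    # one mind, 1000 units
    copies = total_capability(1000, 100, alpha)   # 100 minds, 10 units each
    if one_big > copies:
        verdict = "one big mind wins"
    elif one_big < copies:
        verdict = "100 copies win"
    else:
        verdict = "tie"
    print(f"alpha={alpha}: 1x1000 -> {one_big:.1f}, 100x10 -> {copies:.1f} ({verdict})")
```

Of course, capability probably doesn't add linearly across copies (coordination has costs, and some tasks may need one very smart mind rather than many ordinary ones), so this only illustrates where the crossover sits, not where reality lands.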