It’s great you’re thinking about these issues.
I agree that AGI safety is plausibly the dominating consideration regarding takeoff speed. Thus, whether one wants a faster or slower takeoff depends on whether one wants safe AGI (which is not a completely trivial question: http://foundational-research.org/robots-ai-intelligence-explosion/#Would_a_human_inspired_AI_or_rogue_AI_cause_more_suffering, though I think it's likely that safe AI is better for most human values).
And yes, neuromorphic AGI seems likely to be safer, both because it may be associated with a slow takeoff and because we understand how humans work, how to balance power with them, and so on. Arbitrary AGIs with alien motivational and behavioral systems are more unpredictable. In the long run, if you want goal preservation, you probably need AGI that's different from the human brain, but goal preservation is arguably less of a concern in the short run; knowledge of how to do goal preservation will come with greater intelligence. In any case, neuromorphic AGIs are much more likely to have human-like values than arbitrary AGIs. We don't worry that much about goal preservation with subsequent generations of humans because they're pretty similar to us (though old conservatives are often upset with the moral degeneration of society caused by young people).
I agree that multipolar power dynamics could be bad, because they might lead to arms races and conflict, relative to a quick monopoly by one group. On the other hand, a multipolar outcome might allow for more representation by different parties.
Overall, I think the odds of a fast takeoff are sufficiently low that I’m not convinced it makes sense to focus on fast-takeoff work (even if some such exploration is worthwhile). There may be important first-mover advantages to shaping how society approaches slow takeoffs, and if slow takeoff is sufficiently probable, those may dominate in impact. In any case, the fast-slow distinction is not binary, and maybe the best place to focus is on scenarios where human-level AI takes over on a time scale of a few years. (Timescales of months, days, or hours strike me as pretty improbable, unless, say, Skynet gets control of nuclear weapons.)
Thanks, good comments.
Which kind of work it's better to focus on depends on the relative leverage you think you have in each case, combined with the likelihoods of the different scenarios. I plan to try a more quantitative analysis that investigates which ranges of empirical beliefs about these factors favor which kind of work now. We could then try to gather some data on estimates (and variance in estimates) of these key values.
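As a very rough sketch of the kind of quantitative analysis I have in mind (the function name and all numbers below are made-up placeholders, not estimates anyone has actually produced), one could model the expected impact of each focus as probability of the scenario times leverage in that scenario, and then sweep over a range of beliefs to see where the conclusion flips:

```python
# Toy sketch with illustrative numbers only: compare expected impact of
# focusing on fast- vs. slow-takeoff work as P(scenario) * leverage(scenario).

def expected_impact(p_fast, leverage_fast, leverage_slow):
    """Return (expected impact of fast-takeoff work, of slow-takeoff work)."""
    return p_fast * leverage_fast, (1.0 - p_fast) * leverage_slow

# Sweep over a range of beliefs about the probability of a fast takeoff.
for p_fast in [0.05, 0.1, 0.2, 0.3, 0.5]:
    fast, slow = expected_impact(p_fast, leverage_fast=10.0, leverage_slow=3.0)
    better = "fast-takeoff work" if fast > slow else "slow-takeoff work"
    print(f"P(fast)={p_fast:.2f}: fast={fast:.2f}, slow={slow:.2f} -> {better}")
```

Gathering actual estimates (and the variance in those estimates) for the probability and leverage terms would then show how sensitive the conclusion is to disagreement about them.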