Did anything in Nate’s post or my comments strike you as “pushing for a singleton”? When people say “singleton,” I usually understand them to have in mind some kind of world takeover, which sounds like what you’re talking about here. The strategy people at MIRI favor tends to be more like “figure out what minimal AI system can end the acute risk period (in particular, the risk from singletons), while doing as little else as possible; then steer toward that kind of system”. This shouldn’t be via world takeover if there’s any less-ambitious path to that outcome, because any added capability, or any added wrinkle in the goal you’re using the system for, increases accident risk.
More generally, alignment is something that you can partially solve for systems with some particular set of capabilities, rather than being all-or-nothing.
I agree entirely that we don’t know this yet, whether for rabbits or for future AIs; that’s part of what I’d need to understand before I’d agree that a singleton seems like our best chance at a good future.
I think it’s much less likely that we can learn that kind of generalization in advance than that we can solve most of the alignment problem in advance. Additionally, solving this doesn’t in any obvious way get you any closer to being able to block singletons from being developed, in the scenario where singletons are “possible but only with some effort made”. Knowing the utility of a multipolar outcome where no one ever builds a singleton can be useful for deciding whether you should aim for that outcome, but it doesn’t get you any closer to knowing how to prevent anyone from ever building a singleton once you’ve found a way to achieve an initially multipolar outcome.
I’d also add that I think the risk of producing bad conscious states via non-aligned AI mainly lies in AI systems potentially having parts or subsystems that are conscious, rather than in the system as a whole (or executive components) being conscious in the fashion of a human.