Up until the last paragraph, I very much found myself nodding along with this. It’s a nice summary of the kinds of reasons I’m puzzled by the theory of change of most digital sentience advocacy.
But in your conclusion, I worry there’s a bit of conflation between 1) pausing creation of artificial minds, full stop, and 2) pausing creation of more advanced AI systems. My understanding is that Pause AI is only realistically aiming for (2) — is that right? I’m happy to grant for the sake of argument that it’s feasible to get labs and governments to coordinate on not advancing the AI frontier. It seems much, much harder to get coordination on reducing the rate of production of artificial minds. For all we know, if weaker AIs suffer to a nontrivial degree, the pause could backfire because people would just run many more instances of these AIs to do the same tasks they would’ve otherwise done with a larger model. (An artificial sentience “small animal replacement problem”?)
Yes, you detect correctly that I have some functionalist assumptions in the above. They aren’t strongly held, but I had hoped that we could simply avoid building conscious systems by pausing generally. Even if it now seems less likely that we can avoid making sentient systems at all, I still think it’s better to stop advancing the frontier. I agree there could in principle be a small animal problem with that, but overwhelmingly I think the benefits outweigh it: more time, creating fewer possibly sentient models before we learn more about how their architecture corresponds to their experience, and pushing a legible story about why it’s important to stop without getting into confusing paradoxical effects like the small animal problem. (I formed this opinion in the context of animal welfare: people get the motives behind vegetarianism; they do not get why you would eat certain wild-caught fish and not chickens, so you miss out on the power of persuasion and norm-setting.) So I still think the right move re: digital sentience is pausing.