I also wanted to add that, as someone who leans more towards valuing happiness and suffering equally, I still find the moratorium to be a good idea and don’t feel pangs of frustration about the lost positive experience if we were to delay. I would be very concerned if humanity chose now to never produce any new kinds of beings who could feel, as that may forever rule out some of the best futures available to us. But as you say, a 50-year delay in order to make this monumentally important choice properly would seem to be a wise and patient decision by humanity (given that we can see this is a crucial choice for which we are ill prepared). It is important that our rapid decision to avoid doing something before we know what we’re doing doesn’t build in a bias towards never doing it, so some care might need to be taken with the end condition for a moratorium. Having it simply expire after 20 to 50 years (but where a new one could be added if desired) seems pretty good in this regard.
I think that thoughtful people who lean towards classical utilitarianism should generally agree with this (i.e. I don’t think this is based on my idiosyncrasies). To get it to turn out otherwise would require extreme moral certainty and/or a combination of the total view with impatience in the form of temporal discounting.
Note that I think it is important to avoid biasing a moratorium towards being permanent even for your 5th option (a moratorium on creating beings that suffer). Cf. the fact that we have babies despite knowing that their lives will invariably include periods of suffering (because we believe that these will usually be outweighed by other periods of love and joy and comfort). And most people (including me) think that allowing this is a good thing and that disallowing it would be disastrous. At the moment, we aren’t in a good position to understand the balance of suffering and joy in artificial beings, and I’d be inclined to say that a moratorium on creating artificial suffering is a good thing; but once we do understand how to measure this and to tip the scales heavily in favour of positive experience, a continued ban may be terrible. (That said, we may also work out how to ensure they have good experiences with zero suffering, in which case a permanent moratorium may well turn out to be a good thing.)
In your piece you focus on artificial sentience. But similar arguments would apply to somewhat broader categories.
Wellbeing
For example, you could expand it to cover creating entities that can have wellbeing (or negative elements of wellbeing) even if that wellbeing can be determined by things other than conscious experience. If there were ways of creating millions of beings with negative wellbeing, I’d be very disturbed by that regardless of whether it happened via suffering or some other means. I’m sympathetic to views on which suffering is the only form of negative wellbeing, but am by no means sure they are the correct account of wellbeing, so maybe what I really care about is avoiding creating beings that can have (negative) wellbeing.
Interests
One could also go a step further. Wellbeing is a broad category for all the kinds of things that count towards how well your life goes. But on many people’s understandings, it might not capture everything about ill treatment. In particular, it might not capture everything to do with deontological wrongs and/or rights violations, which may involve wronging someone in a way that can’t be made up for by improvements in wellbeing and can’t be cashed out purely in terms of negative effects on wellbeing. So it may be that creating beings with interests (or morally relevant interests) is the relevant category.
That said, note that these are both steps towards greater abstraction, so even if they better capture what we really care about, they might still lose out on the grounds of being less compelling, more open to interpretation, and harder to operationalise.