This is an incredibly important question. It is also an incredibly dangerous one. There are many real EAs whose views on this topic constitute either an X-risk or an S-risk to EAs with only subtly different assessments: people who, given a truly aligned omnipotent AGI, would either wipe out the majority of humans or create many lives that others view as unhappy. Historically, well-intentioned eugenicists have killed many people who self-identified as having worthwhile lives.
I also think there is a miscalibration in the creation test; many humans instinctively view people similar to themselves as competition, and either like or dislike the idea of clones of themselves for other reasons. The advantage of the suicide test is that it centers your judgement on a real person who can express their actual preferences in the moment, rather than on a hypothetical case. That seems worth a lot of offset error to me.