Thanks for sharing, Simeon!
I guess part of the lack of concern for artificial sentience is explained by people at top labs focussing on aligning AGI with human values, rather than impartial value (relatedly). Ensuring that AI systems are happy seems like a good strategy to increase impartial value: it would lead to good outcomes even in scenarios where humans become disempowered. Actually, the higher the chance of humans becoming disempowered, the more pressing artificial sentience becomes? I suppose it would make sense for death with dignity strategies to address this (some discussion).