Thanks! I share your concern about sadism. Insofar as AI systems have the capacity for welfare, one risk is that humans might mistakenly see them as lacking this capacity and so harm them accidentally, and another risk is that humans might correctly see them as having this capacity and so harm them intentionally. A difficulty is that mitigating these risks might require different strategies. I want to think more about this.
I also share your concern about objectification. I can appreciate why AI labs want to mitigate the risk of false positives / excessive anthropomorphism. But as I note in the post, we also face a risk of false negatives / excessive anthropodenial, and the latter risk is arguably worse (more likely and/or more severe) in many contexts. I would love to see AI labs develop a more nuanced approach that mitigates both risks in a balanced way.
No, but this would be useful! Some quick thoughts:
A lot depends on our standard for moral inclusion. If we think that we should include all potential moral patients in the moral circle, then we might include a large number of near-term AI systems. If, in contrast, we think that we should include only beings with at least, say, a 0.1% chance of being moral patients, then we might include a smaller number.
With respect to the AI systems we include, one question is how many there will be. This is partly a question about moral individuation. Insofar as digital minds are connected, we might see the world as containing a large number of small moral patients, a small number of large moral patients, or both. Luke Roelofs and I will be releasing work about this soon.
Another question is how much welfare they might have. No matter how we individuate them, they could have a lot in total, whether because a large number of them each have a small amount, because a small number of them each have a large amount, or both. I discuss possible implications here: https://www.tandfonline.com/doi/abs/10.1080/21550085.2023.2200724
It also seems plausible that some digital minds could process welfare more efficiently than biological minds because they lack our evolutionary baggage. But assessing this claim requires developing a framework for making intersubstrate welfare comparisons, which, as I note in the post, will be difficult. Bob Fischer and I will be releasing work about this soon.