No, but this would be useful! Some quick thoughts:
A lot depends on our standard for moral inclusion. If we think we should include all potential moral patients in the moral circle, however remote the chance that they matter, then we might include a large number of near-term AI systems. If, in contrast, we think we should include only beings with at least, say, a 0.1% chance of being moral patients, then we might include a smaller number.
With respect to the AI systems we include, one question is how many there will be. This is partly a question about moral individuation. Insofar as digital minds are connected, we might see the world as containing a large number of small moral patients, a small number of large moral patients, or both. Luke Roelofs and I will be releasing work about this soon.
Another question is how much welfare they might have. However we individuate them, they could have a great deal of welfare in the aggregate, whether because a large number of them each have a small amount, a small number of them each have a large amount, or both. I discuss possible implications here: https://www.tandfonline.com/doi/abs/10.1080/21550085.2023.2200724
It also seems plausible that some digital minds could process welfare more efficiently than biological minds because they lack our evolutionary baggage. But assessing this claim requires developing a framework for making intersubstrate welfare comparisons, which, as I note in the post, will be difficult. Bob Fischer and I will be releasing work about this soon.