Nice post—I think I agree that Ben’s argument isn’t particularly sound.
Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation? If not, I imagine you could look at this through a different lens and consider one historical perspective which says something like “One large driver of humanity’s moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others’ suffering without undermining themselves”. This seems fairly plausible to me, and would suggest that you might expect technological progress to correlate with methods involving less suffering.
I wonder if this theory might highlight points of resource contention where one might expect there to be less concern for digital suffering. Off the top of my head, examples might include AI arms races, early-stage space colonisation, and perhaps some form of partial civilisational collapse.
Thanks!

Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation?
Hmm… not sure. I feel like my claims are very weak, and hold true even in future worlds without autonomous advanced AIs.
“One large driver of humanity’s moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others’ suffering without undermining themselves”.
Agreed, but this is more similar to argument (A), fleshed out in this footnote, which is not the one I’m assailing in this post.