Thank you for writing about this. I am definitely someone whose concerns about AI are primarily about the massive suffering it might cause, especially to already-marginalized beings, or potential beings, like non-human animals and digital minds.
I’ll note upfront that I’m suffering-focused, but I also think even a classical utilitarian using expected-value (EV) reasoning could come to the same conclusions I do.
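To make that concrete, here is a toy EV comparison. All of the probabilities and magnitudes below are made up purely for illustration (they are my assumptions, not estimates anyone has defended); the only point is that a lower-probability but astronomically worse outcome can dominate the calculation even without giving extra moral weight to suffering:

```python
# Toy expected-value comparison with purely illustrative, made-up numbers.

p_extinction = 0.10        # hypothetical chance of an extinction-level outcome
value_extinction = -1e12   # hypothetical disvalue of extinction (arbitrary units)

p_s_risk = 0.01            # much smaller hypothetical chance of a lock-in of astronomical suffering
value_s_risk = -1e15       # but vastly larger hypothetical disvalue if it happens

ev_extinction = p_extinction * value_extinction  # -1e11
ev_s_risk = p_s_risk * value_s_risk              # -1e13

print(f"EV of extinction risk: {ev_extinction:.2e}")
print(f"EV of suffering risk:  {ev_s_risk:.2e}")
# With these (made-up) numbers, the suffering-risk term dominates despite its
# lower probability, which is the sense in which a non-suffering-focused
# utilitarian could reach a similar prioritization.
```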
I’m curious why this isn’t a greater focus in the AI Safety community. At least from my vantage point and recollection, over 90% of the people who talk about AI Safety focus exclusively on the threat AI poses to the continued existence of humanity. If they elaborate at all on what’s at stake in the far future, they emphasize the potential good that could come from massive future populations in states of immense bliss, which would be lost if we go extinct (again, this is just my impression).
I think this rests on the assumption that there is a high likelihood (say, >90% confidence) that humanity will be a force for net good in the long-term future, should it survive that long. At the very least, this crux should be tested more than it currently is. I would argue that present-day humanity is almost certainly (>99% confidence) net harmful: factory farming alone is an immense harm that is hard to argue any good humans do outweighs. I would also argue, with similar confidence, that humanity’s net impact has been consistently negative at least since the agricultural revolution (mistreatment and exploitation of non-human animals, slavery, and war, to name a few major things).

Suffice it to say that I would be very worried if an AGI were locked in with the values of a randomly selected person alive today (I know some AGI timelines are quite short), or even a randomly selected person 100 years from now (assuming we survive that long), especially if that AGI decided to keep us alive. I can’t give an estimate of how confident I am that humanity’s continued existence alongside AGI would be a good or bad thing. However, I agree that suffering risk from AGI is not emphasized in proportion to its expected consequences, and I’m curious to hear EA/AI Safety perspectives on this topic.
I’ll also quickly throw in the scenario of humans deliberately creating a malicious AGI to serve their own ends, an idea I’ve heard mentioned a few times but know practically nothing about. I do think the potential for such a scenario to arise and then become an S-risk is non-negligible, though I can’t give a good estimate or back that up with anything more than intuition.