I agree it’s clear that you claim unaligned AIs are plausibly about as utilitarian as humans, maybe more so.
What I didn’t find was discussion of how contingent utilitarianism is in humans.
Though actually, rereading your comment (which I should have done in addition to reading the post), I realize I completely misunderstood what you meant by “contingent”, which explains why I didn’t find it in the post (I took it to mean “historically contingent”). Sorry for the misunderstanding.
Let me backtrack like 5 comments and try again.