With intelligence comes reverence for life, increased awareness, and altruism.
This isn’t always true: in humans, consider intelligent sociopaths and mass murderers. It’s unlikely to be true of AI either, unless moral realism is true AND the AI discovers the true morality of the universe AND said morality is compatible with human flourishing. See: Orthogonality Thesis.
It’s not always true; there will be outliers. In general, though, increased intelligence tends to improve judgement. Humans inherently prefer to feel good rather than bad, and to live rather than die, so intelligence helps a person find ways to feel good and stay alive. Rationally, feeling good is facilitated by a sense of safety, ample resources, relationships, and enjoyable activities. I think all of that is keystoned by liking oneself, which seems to require good intentions and estimable conduct. So if intelligence moves humans in a positive direction, generally speaking, it should theoretically do the same for AI.