[Question] What are the effects of considering that (human) morality emerges from evolutionary and other functional pressures?

Findings from psychology and related fields suggest that some aspects of human morality may have evolved and/​or serve a social purpose among human groups (e.g., Awad et al., 2020). Thus, what is “good” depends to some extent on one’s cultural background, to a greater extent on being human, and perhaps to an even greater extent on being a primate/​mammal/​vertebrate/​etc. (i.e., in terms of how features of our minds have been shaped over an even longer evolutionary time span). What happens if we try to think outside these evolutionary and social pressures? Many EAs already try to think outside of social pressures, for instance when they espouse utilitarianism. If so, shouldn’t we also try to “escape the shackles of evolution”? I’m hoping this question will lead to recommendations for readings that discuss what dimensions of “good” would be relevant from non-human points of view (including from other animal species, from ecosystems, from AIs, etc.).
