I am wondering what you think about the notion that persons develop their values in response to the systems they exist in, which may be suboptimal; suboptimal values could then be developed. For example, in a situation of scarcity or external abuse, persons may seek to dominate others to stay safe, whereas in a scenario of abundance and overall consideration, persons may seek to develop considerate relationships with others to increase their own and others’ wellbeing. Assuming that some perceived scarcity and abuse currently exists in various environments, it could be suboptimal, from the perspective of humanity’s long-term potential (if that is measured by overall enjoyment in pursuing ‘the most good’ objectives), to extrapolate values now, if these are reinforced by AI.

One solution would be to offer individuals an understanding of various situations and let them decide which ones they would prefer (e.g. a person in scarcity, offered an understanding of abundance, could select the ability to enjoy being enjoyed instead of the ability to threaten). This could work if all individuals are asked and all possibilities are shareably understood. Since this is challenging, an alternative is to ask persons for their perspective on an optimal system that they would like to see exist (rather than one which would benefit them personally), considering the objectives, under perfect awareness of alternatives, of all individuals.

What do you think about these thoughts on gathering values to extrapolate? Are you going to implement this, or look for research in this area of understanding values under overall consideration and perfect awareness of alternatives? I would also appreciate any comments on my Widespread values brainstorming draft, which was developed using this reasoning.
A problem here is that values that are instrumentally useful can become terminal values that humans value for their own sake.
For example, equality under the law is very useful in many societies, especially modern capitalistic ones, but a lot of people (myself included) feel it has strong intrinsic value. In more traditional, low-trust societies, the tradition of hospitality is necessary for trade and other exchanges; yet people come to value it for its own sake. Family love is evolutionarily adaptive, yet it is also something we value in itself.
So just because some value has developed from a suboptimal system does not mean that it isn’t worth keeping.
Ok, that makes sense. Rhetorically, how would one differentiate the terminal values worth keeping from those worth updating? For example, a hospitality ‘requirement’ versus the free ability to choose to be hospitable versus the ability to choose environments with various attitudes toward hospitality. I would offer the emotional understanding of all options and let individuals decide freely. This should resolve the issue of persons favoring their environments due to limited awareness of alternatives or fear of the consequences of choosing an alternative. Then, you could get to more fundamental terminal values, such as the perception of living in a truly fair system (instead of equality under the law, which can still perpetuate some unfairness), the ability to interact only with those with whom one wishes to interact (instead of hospitality), and understanding others’ preferences for interactions related to oxytocin, dopamine, and serotonin release and choosing to interact where preferences are mutual (instead of family love), for example. Anyway, thank you.