I tried to find some objective ground for ethical considerations given the metaphysical premises of Universal Darwinism. The relevant part can be summarized by the following quote from the #evaluating-terminal-values section of the article:
How should we evaluate the terminal values of humans (defined as on LessWrong)? Quote:
A terminal value (also known as an intrinsic value) is an ultimate goal, an end-in-itself. … In an artificial general intelligence with a utility or reward function, the terminal value is the maximization of that function.
Values are subjective, but the question asks for some objective perspective. This question is of interest because "humans' terminal values are often mutually contradictory, inconsistent, and changeable".
The ubiquity of natural selection (NS) can impose some constraints, albeit weak ones, since all known systems with sentient agents are subject to NS. But weak constraints are still better than no constraints at all.
Natural selection splits terminal goals into those that fail to reproduce or maintain themselves and those that survive (together with their bearers, of course). Sometimes we can even predict whether a terminal goal will go extinct, or at least rank its probability of survival (we have already set aside instrumental goals, which "die" once they lose their purpose).
So that's it. That's the only way to objectively judge terminal values I'm aware of. The judgment part comes from the feeling that I don't want to be invested in terminal goals that will most likely go extinct. At the very least, to be appealing, such goals should be "mutated" in a way that balances minimizing their change against maximizing their survival probability.
End quote.
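The selection dynamic described in the quote can be sketched as a toy simulation. This is purely illustrative: the goal names and "fitness" numbers below are hypothetical stand-ins for how well a goal reproduces or maintains itself in its bearers, not a claim about how real values actually propagate.

```python
# Toy replicator dynamics over goal-bearing lineages (illustrative only).
# Each goal has an initial share of bearers and a hypothetical per-generation
# fitness: how well it reproduces / maintains itself.

def simulate(goals, generations):
    """Deterministic replicator dynamics: each goal's share of bearers
    grows in proportion to its fitness; shares are renormalized each step."""
    shares = {name: share for name, (share, _) in goals.items()}
    fitness = {name: fit for name, (_, fit) in goals.items()}
    for _ in range(generations):
        shares = {name: shares[name] * fitness[name] for name in shares}
        total = sum(shares.values())
        shares = {name: s / total for name, s in shares.items()}
    return shares

# Hypothetical goals: (initial share of bearers, per-generation fitness).
goals = {
    "self-propagating": (0.1, 1.2),  # reformulable as survival of a lineage
    "self-undermining": (0.9, 0.8),  # fails to maintain its own bearers
}

final = simulate(goals, generations=50)
# The self-propagating goal comes to dominate even though it started rare;
# the self-undermining one is driven toward extinction.
```

The point of the sketch is only that weak, persistent fitness differences are enough for selection to sort goals over time, which is why even "weak constraints" can be informative.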
Hence what fails the "extinction criterion": goals and values that cannot be reformulated as the survival of some quasi-immortal entity are meaningless and will, in time, be eliminated by natural selection.
But there is still an infinite number of goals and values that pass the "extinction criterion" yet contradict one another. So the best bet is moral naturalism for the "extinction criterion" itself, plus moral non-cognitivism for whatever is left.