Interesting, I wonder if AGI will have a process for deciding its values (like a constitution). But then the question is how it decides on what that process is (if there is one).
I thought there might be a connection between having a nuanced process for an AGI to pick its values and its problem-solving ability (e.g., figuring out how to end the world), such that having the ability to end the world would imply it can also work through the nuances of its values and conclude that ending the world may not be valuable. Possibly this connection might not always exist, in which case epic sussyness may occur.
Yeah, there might be a correlation in practice, but I think intelligent agents could have basically any random values. There are no fundamentally incorrect values, just some values that we don't like or that you'd say lack important nuance. Even under moral realism, intelligent systems don't necessarily have to care about the moral truth (even if they're smart enough to figure out what the moral truth is). Cf. the orthogonality thesis.