I haven’t seen it so far, but is anyone taking a broader axiological approach to AI alignment rather than a decision-theory-specific approach? Obviously the decision theory approach is a more bounded problem that is likely easier to solve, since it restricts attention to the special case of processes we can apply decision theory to, but I wonder whether we might gain insights and better intuitions from studying more general cases.