Commenting on my post to add to it: In Ajeya Cotra’s guest post on Holden Karnofsky’s Cold Takes blog, “Why AI alignment could be hard with modern deep learning,” she describes three kinds of models deep learning could produce: Saints, Sycophants, and Schemers. She also discusses humans guiding the training process, and possibly other AIs keeping those AIs “in check.” But these parties relate to each other at odds, with an assumption of manipulativeness... why not place them in an orientation of seeking the other’s benefit? This is an aspect of my definition of love: love is seeking to benefit another. The opposite of love is seeking to benefit yourself at the expense of another.