You mean it will tend to ‘choose’ higher valence things? That would seem to make sense for biological systems perhaps, as the feelings and valence would evolve as a reinforcement mechanism to motivate choices that increase fitness.
But I’m not sure why we’d expect it to evolve similarly in a construction like a deep learning AI. No one coded valence in, and no one would really know how to access it even if they wanted to code it in, since we don’t really understand where consciousness comes from.
Way out of my depth here, but I'm not sure why feelings and valence couldn't also evolve in LLMs to "motivate choices that increase fitness (token prediction)". @Steven Byrnes might have a more coherent take here.