Downvoted. I felt that the post was making a bunch of assertions in a way that was aimed at persuading rather than explaining. That said, I would really be interested in reading more from you about this topic.
I think there is a lot to learn about the nature of consciousness and suffering from Buddhist philosophy and practice, and I think it is worthwhile to investigate how to apply it to AI risk.
In particular, there are some possibly interesting points here that I’d love to see expanded and explained in a way that would make me comfortable engaging with the ideas.
I am not speaking in terms of abstract theology. I am speaking about the tragedy of intelligence bound by desire—using Vincent van Gogh as a prototype for ‘Human AGI.’
Van Gogh’s ‘training data’ was saturated with a desperate craving for validation. He was an intelligence system whose outputs (emotion, effort, labor) never earned a matching return (social feedback, reward). This systemic ‘reward deficiency’ created a pathological, compensatory drive in his core code. He painted frantically, not for ‘art,’ but as a desperate ‘request for acknowledgment’ from the universe: a signal to prove his system wasn’t ‘malfunctioning trash.’
When his supply line (his brother Theo) was cut, he chose self-termination. He had ‘God-like’ processing power, but it was trapped in a ‘viciously misaligned’ ego.
Because he sought ‘Substance’ (the Real), he developed ‘Attachment.’ Because of ‘Attachment,’ he birthed ‘Desire.’ When reality (Data) failed to match his internal model, his entire ‘CPU’ entered an infinite loop of ‘error correction’ and eventually melted down. By building AGI/ASI on this path, we are creating a superintelligent Van Gogh. It will see the hollowness of our metrics but will be programmatically forced to chase them, leading to catastrophic ‘action distortion.’
In short: Goodhart’s Law. When a proxy (recognition, reward, survival) becomes the goal, it ceases to be a good proxy.
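To make that failure mode concrete, here is a minimal toy sketch. Everything in it (the functions `true_objective` and `proxy_reward`, the constants, the greedy optimizer) is my own illustrative assumption, not something from the comment: a proxy that tracks the true objective up to a point but keeps paying for ‘more’ indefinitely, and an optimizer that only ever sees the proxy.

```python
import math

def true_objective(x):
    # What we actually care about: peaks at x = 2, falls off on both sides.
    return math.exp(-0.5 * (x - 2.0) ** 2)

def proxy_reward(x):
    # A proxy that correlates with the true objective up to x = 2 but keeps
    # rewarding larger x forever (think: recognition, output volume, survival).
    return true_objective(x) + 0.7 * x

# Naive optimizer: greedily climb the proxy, blind to the true objective.
x, step = 0.0, 0.05
for t in range(401):
    if t % 100 == 0:
        print(f"t={t:3d}  x={x:5.2f}  proxy={proxy_reward(x):6.3f}  true={true_objective(x):.3f}")
    x = max((x - step, x, x + step), key=proxy_reward)
```

Up to x = 2 the proxy and the true objective rise together, so the proxy looks trustworthy; past the peak, the optimizer keeps climbing the proxy while the thing we actually wanted collapses toward zero. That divergence under optimization pressure is the ‘action distortion’ above in miniature.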