Great talk. I think it breaks down the problem of AI alignment well. It also reminds me of the more recent breakdown by Dan Hendrycks, which decomposes ML safety into three problems: robustness, monitoring, and alignment.
I’ve noticed that a lot of good ideas seem to come from talks. For example, Richard Hamming’s famous talk “You and Your Research,” on working on important problems. Maybe there should be more of them.