Forrest Landry on the Jim Rutt Show: a podcast discussion of AI risk through the substrate-needs convergence argument.
https://www.jimruttshow.com/forrest-landry-4/
https://www.jimruttshow.com/forrest-landry-5/
Nice, thanks for sharing.
The host, Jim Rutt, is actually the former chairman of the Santa Fe Institute, so he gets complexity theory (which is core to the argument, but whose implications are not deeply understood in the alignment community, so I tried conveying those in other ways in this post).
The interview questions jump around a lot, which makes it harder to follow.
Forrest’s answers on Rice’s theorem also need more explanation: https://mflb.com/ai_alignment_1/si_safety_qanda_out.html#p6