Quoting Nate’s supplement from OpenPhil’s review of “Proof-producing reflection for HOL” (PPRHOL):

“there are basic gaps in our models of what it means to do good reasoning (especially when it comes to things like long-running computations, and doubly so when those computations are the reasoner’s source code)”
How far along are you toward narrowing these gaps, now that “Logical Induction” is something people can talk about? Are there variants of it that narrow these gaps, or are there planned follow-ups to PPRHOL that might improve our models? What kinds of experiments seem valuable for this subgoal?
Thanks for doing this AMA! Which points in your strategy have you needed to update, given the unexpected progress represented by publishing the “Logical Induction” paper (which I’m currently perusing)?