Thanks for this; it made me notice that I was analyzing Chris’s work more in far mode and Redwood’s more in near mode. Maybe you’re right about these comparisons. I’d be interested to understand whether/how you think the adversarial training work could most plausibly be directly applied (or if you just mean “fewer intermediate steps till eventual application”, or something else).