Maybe we should add: Does working on pre-ASI risks improve our prospects of solving ASI (I think that's the core of the reconciliation between near-term and long-term concerns about AI… but up to what point?), or does it worsen them?