In my extreme case, I think it looks something like this: we've got really promising alignment solutions in the works, and they'll be ready in time to actually be implemented.
Or perhaps, in the case where solutions aren't forthcoming, we have some really robust structures (international governments and big AI labs coordinating, or something like that) to avoid developing AGI.
I think we should keep “neglectedness” referring to the amount of resources invested in the problem, not P(success). P(success) seems a better fit for the “tractability” bucket.