B. They lose sight of the terminal goal. The real goal is not to skill up in ML. The real goal is not to replicate the results of a paper. The real goal is not even to “solve inner alignment.” The real goal is to not die & not lose the value of the far future.
I’d argue that if they solved inner alignment completely, then the rest of the alignment problems would become far easier, if not trivial, to solve.