Rob puts it well in his comment when he calls this “social coordination”. Because the effects here are largely social, I think a poorly executed “buying time” intervention can both fail to buy time and preclude further coordination with mainstream ML. So, a net negative effect.
Technical alignment, on the other hand, does not have this failure mode, though I agree it does carry the risk of accelerating timelines.
But if someone tries technical alignment and fails to produce results, that is no worse than the counterfactual where they just did web dev or something.
My reference point here is the anecdotal disdain (from Twitter and YouTube; I can DM examples if you want) some in the ML community have for anyone they perceive to be slowing them down.
An addendum, then:
If “buying time” interventions are conjunctive (i.e. one botched attempt can cancel out the effect of the others), but technical alignment is disjunctive (failed attempts don't subtract from successful ones);
and if the distribution of people attempting either kind of intervention skews toward the lower end of thoughtfulness/competence (which imo we should expect);
then technical alignment is the better recommendation for most people.
In fact, this suggests that the graph in the post should be reversed (with the horizontal axis relabeled as social competence rather than technical competence). A toy model of the asymmetry is sketched below.
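To make the conjunctive/disjunctive asymmetry concrete, here is a minimal expected-value sketch. The variables $q$, $B$, and $C$ are illustrative assumptions of mine, not from the post, and the model ignores the timeline-acceleration risk noted above.

Let $q$ be the fraction of attempters who execute competently, $B > 0$ the value of a competent attempt, and $C > 0$ the social cost of a botched “buying time” attempt (lost goodwill with mainstream ML). Then, roughly:

$$\mathbb{E}[\text{buying time}] = qB - (1-q)C, \quad \text{which is negative whenever } q < \frac{C}{B + C},$$

$$\mathbb{E}[\text{technical alignment}] \approx qB + (1-q)\cdot 0 = qB \geq 0.$$

So if $q$ is low, as argued above, buying time has negative expected value while technical alignment stays nonnegative, which is why the recommendation flips for most people.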