FWIW, I think my median future includes humanity solving AI alignment but messing up reflection/coordination in some way that makes us lose out on most possible value. I think this means that longtermists should think more about reflection/coordination issues than we currently do. But technical AI alignment seems more tractable than reflection/coordination, so I think it's probably correct for more total effort to go toward alignment (which is the status quo).
I'm undecided about whether these reflection/coordination issues are best framed as "AI risk" or not. They'll certainly interact a lot with AI, but we would face similar problems without AI.