[Question] Are there any research or forecasts on how likely AI alignment is to be a hard vs. easy problem relative to capabilities?

I believe it was Paul Christiano who said in an 80,000 Hours interview that there is a surprisingly high chance that AI alignment might turn out not to be very difficult.

I’m curious whether anyone has done research, or tried to forecast, how likely AI alignment is to end up being a difficult vs. an easy problem to solve relative to progress in creating advanced AI.

Specifically: if current trends in AI capabilities and AI safety research continue, how likely is it that alignment will be achieved naturally by the time we reach transformative AI, such that we can robustly and sustainably prevent x-risk from AI on our current trajectory?