Non X-risks from AI are still intrinsically important AI safety issues
Sure, but I think they are less intrinsically important for the standard ITN reasons.
I think your statement implies that we should care about them a similar amount to longtermist-motivated safety, which might be true, but you don't make a case for why. I don't think the reasons for prioritising LT AIS are strongly correlated with the reasons for prioritising NT AIS, so it would be somewhat surprising if this were true.