[Question] Debates on reducing long-term s-risks?

I want to know why AI tends to be ranked as the top cause area for reducing s-risks, ahead of human health and animal welfare, which also seem promising to me. Is it because of longtermism? I’ve read some essays on longtermism, but most of them talk about x-risks, not s-risks. There is plenty of debate over whether prioritizing x-risk reduction is right, but I haven’t seen any discussion of long-term s-risks. Reducing s-risks to digital sentience seems very abstract to me, and I don’t know how to persuade others that reducing AI s-risks is important, especially while AI is not yet sentient. (Should we postpone research on AI s-risks until AI becomes sentient?)
(My naive opinion) Although the future might be vast (around 10^50 lives), I don’t think that by itself makes the expected value of longtermism higher, because the tractability might be close to 0. The future is vast precisely because so many lives over such a wide span of time will shape it, and AI may evolve on its own unless there is a value lock-in. The future also depends on many factors: maybe AI will be wiped out by aliens, maybe the universe will end early… The future is extremely unpredictable. I don’t feel we can affect even 1/10^50 of it, so the expected value might be low. Longtermism sounds a little overconfident about our ability to influence the future. Are there essays arguing for, or proving, that long-term suffering is more important?
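To make that worry concrete, here is a rough back-of-the-envelope sketch (the symbols N, p, and Δ are my own illustrative labels, not taken from any particular essay): if the expected value of a longtermist intervention is roughly the number of future lives, times the probability our action actually changes their outcome, times the average welfare change per affected life, then an astronomical N can be cancelled out by an equally tiny tractability:

\[
\mathrm{EV} \;\approx\; N \cdot p \cdot \Delta
\]
\[
N \approx 10^{50}, \quad p \lesssim 10^{-50} \;\Rightarrow\; \mathrm{EV} \lesssim \Delta
\]

On this framing, the disagreement is really about whether p is that small, not about how big N is.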
