Maybe a few on s-risks, which are of concern not only to those with suffering-focused views? These might be good places to start:
https://longtermrisk.org/risks-of-astronomical-future-suffering/
https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/
https://longtermrisk.org/altruists-should-prioritize-artificial-intelligence/