I’d like to suggest including an article on reducing s-risks (e.g. https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/ or http://s-risks.org/intro/) as another possible perspective on longtermism, in addition to AI alignment and x-risk reduction.
This introduction might be more accessible in some ways: "S-risks: Why they are the worst existential risks, and how to prevent them" (EAG Boston 2017).