I think I’m part of that significant minority, but I can’t really find further help or enough material on those topics from an EA angle, for example safeguarding democracy, risks of stable totalitarianism, risks from malevolent actors, global public goods, etc.
Hi Elskivi,
Arden from 80,000 Hours here.
Unfortunately there aren’t many materials on those issues—they are mostly even more neglected (at least from a longtermist perspective) than issues like AI safety.
The resources I do know about are linked from the mini profiles on the page—e.g. https://forum.effectivealtruism.org/posts/aSzxoj7irC5jNHceB/how-likely-is-world-war-iii for great power conflict, and https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience for artificial sentience. I think there should be something for each of the listed problems, and the readings often have ‘further resources’ of their own.
We’re also working on filling out the mini profiles more, but the truth is that not much work has been done on these areas from a longtermist or broadly EA perspective (that I know of, at least), so I’d guess there won’t be a ton more resources like you’re looking for soon.
Thus getting started on issues like these probably means doing research to figure out what the best interventions seem to be, e.g. by looking into what people outside EA are doing on them and where the most promising gaps seem to be, and then trying to start filling those gaps: either by working at an existing org that addresses the issue, doing further research (e.g. as an academic or in a think tank), or starting a project of your own (it’ll depend a lot on the issue). So it may take considerable entrepreneurial spirit (and willingness to try things that don’t end up working) to make headway on some of these issues.