The Future of Life Institute created this topic map (I think this might be an older version?). They also have this research priorities document and this list of research topics they are interested in funding.
Here is a literature review of recent AGI safety work (discussion).
MIRI created this wiki-like thing which can serve as an overview of problems they consider important.
These posts have info about highly reliable agent design.
This blog about ML security provides another perspective on the friendliness problem.
Here are some more AI safety problem lists which don’t appear in the main post (there is probably lots of redundancy between these lists):
This list of research problems just got posted.
The appendix on this page has a list of topics.
This paper by Francesca Rossi (talk on this page; more talks from the same event here).
https://ai-alignment.com/ is a good site; here is a topic overview post (there might be others).
The AI Safety Gridworlds paper offers 8 relatively concrete problems.
Aligning Superintelligence with Human Interests: An Annotated Bibliography.
Another research overview.
The Learning-Theoretic AI Alignment Research Agenda was recently published.
Here is a new AI governance research agenda.
I agree with Jessica Taylor that one should additionally aim to acquire one’s own perspective about how to solve the alignment problem.