Thanks for doing this!
A few relatively recent, apparently relevant links (I’m writing in Sept 2019):
https://forum.effectivealtruism.org/posts/kFmFLcdSFKo2GFJkc/cause-x-guide
https://forum.effectivealtruism.org/posts/ZhsARZtWEjdg35Ke3/request-for-proposal-ea-animal-welfare-fund
https://forum.effectivealtruism.org/posts/67SuuuWJvDC383usF/list-of-possible-ea-meta-charities-and-projects
https://forum.effectivealtruism.org/posts/n6dtnP5babfNaz2bW/69-things-that-might-be-pretty-effective-to-fund
Some more relevant links:
https://forum.effectivealtruism.org/posts/8R2NffQiCsn3F7hpv/how-to-generate-research-proposals
https://forum.effectivealtruism.org/posts/r5W78AGjzY9wwPn8K/bottlenecks-and-solutions-for-the-x-risk-ecosystem
http://www.nickbeckstead.com/advice/ea-research-topics
http://foundational-research.org/open-research-questions/
http://effective-altruism.com/ea/tu/how_you_can_contribute_to_the_broader_ea_research/
http://effective-altruism.com/ea/1bb/projects_id_like_to_see/
http://effective-altruism.com/ea/xg/improving_longrun_civilisational_robustness/
http://effective-altruism.com/ea/oe/systematically_under_explored_project_areas/
http://effective-altruism.com/ea/fc/a_form_for_people_interested_in_ea_projects_or/
http://effective-altruism.com/ea/1bj/introducing_the_ea_involvement_guide/
https://forum.effectivealtruism.org/posts/gaAreYEEHSXQhJcbm/2018-list-of-half-baked-volunteer-research-ideas
https://aiimpacts.org/promising-research-projects/
For someone interested in doing research, especially if they’re comfortable formulating their own research question, I think just having a list of topics can be helpful. Here is a list of lists of EA topics:
The Effective Altruism Concepts site, or effectivealtruism.org more broadly even, e.g. this resources page.
Effective Altruism Syllabi.
EA database.
Alexey Turchin has compiled lots of maps like this map of organizations, sites and people involved in x-risks prevention.
Facebook discussion of causes the community may be neglecting.
CEA’s Effective Altruism Handbook.
Map of Open Spaces in Effective Altruism.
80k lists.
Cause Prioritization Wiki. (Now there are two!)
Effective Altruism Facebook groups.
Then there are books.
Even more links.
Other directories of EA content:
EA Global talks.
EA blog directory.
Here are some more AI safety problem lists which don’t appear in the main post (there is probably lots of redundancy between these lists):
This list of research problems just got posted.
The appendix on this page has a list of topics.
This paper by Francesca Rossi (talk on this page; more talks from the same event here).
https://ai-alignment.com/ is a good site; here is a topic overview post (there might be others).
The Future of Life Institute created this topic map (I think this might be an older version?) They also have this research priorities document and this list of research topics they are interested in funding.
Here is a literature review of recent AGI safety work (discussion).
MIRI created this wiki-like thing which can serve as an overview of problems they consider important.
These posts have info about highly reliable agent design.
This blog about ML security provides another perspective on the friendliness problem.
The AI Safety Gridworlds paper offers 8 relatively concrete problems.
Aligning Superintelligence with Human Interests: An Annotated Bibliography.
Another research overview.
The Learning-Theoretic AI Alignment Research Agenda came out recently.
Here is a new AI governance research agenda.
I agree with Jessica Taylor that one should additionally aim to acquire one’s own perspective about how to solve the alignment problem.
This comment also has some interesting links.
Here are a couple more:
https://www.lesswrong.com/posts/CmRxryEbvAHcuaPuR/information-generating-research-projects
https://guzey.com/personal/what-should-you-do-with-your-life/