What are some key research directions/topics that are not currently being looked into enough by the EA movement (either at all or in sufficient depth)?
Longtermism in its nascent form relies on a lot of guesstimates and abstractions that I think could be made more empirical and solid. Personally, I am very interested in asking whether people at a given time in the past had the information they needed to avoid disasters that later occurred. What kinds of catastrophes have humans been able to foresee, and when we were able to but didn’t, what obstacles were in the way? History is the only evidence available in a lot of longtermist domains, and I don’t see EA exploiting it enough.
As is probably the case with many researchers, I have a bunch of thoughts on this, most of which aren’t written up in nice, clear, detailed ways. But I do have a draft post with nuclear risk research project ideas and a doc of rough notes on AI governance survey ideas, so if someone is interested in executing projects like those, please message me and I can probably send you links.
(I’m not saying those are the two areas I think are most impactful to research on the current margin; I just happen to have docs on those things. I also have other ideas that are less easily shareable right now.)
People might also find my central directory for open research questions useful, but that’s not filtered for my own beliefs about how important-on-the-margin these questions are.