I basically agree, but in this proposal I was really referring to such things as “Professor X is using probabilistic programming to model regularities in human moral preferences. How could that backfire and result in the destruction of our world? What other risks can we find? Can X mitigate them?”
I also think the category you’re referring to is very valuable, but I see those as “simply” contributions to priorities research of the kind published by the Global Priorities Institute (e.g., working papers by Greaves and Tarsney come to mind). Rethink Priorities, Open Phil, FHI, and various individuals also occasionally publish articles that I would class that way. I think priorities research is one of the most important fields of EA and much broader than my proposal, but it is also well known, which is why my proposal is not meant to be about it.