I had a similar idea, and I think a few more things should be included in this discussion.
Ideas in EA exist at multiple levels, and I think a red team becomes much more valuable when it engages with issues that apply to EA as a whole.
Critiques like the institutional critique of EA and the other heavy tail are often not read and internalized by EAs. It would be worth having a team that makes arguments like these, then breaks them down and provides methods for avoiding the pitfalls they point out.
Points raised in critiques of EA should be explicitly recognized and treated as valuable: held up for examination, then passed back to the community so that we can grow and overcome the objections.
I’m almost always lurking on the forum, and I don’t often see posts discussing critiques of EA.
That should change.
I basically agree, but in this proposal I was really referring to questions such as: “Professor X is using probabilistic programming to model regularities in human moral preferences. How could that backfire and result in the destruction of our world? What other risks can we find? Can X mitigate them?”
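To make the example concrete, here is a minimal sketch of what “modeling regularities in human moral preferences with probabilistic programming” might look like. Everything here is hypothetical: the data, the model structure, and the choice of PyMC are my own illustration, not anything a real Professor X does.

```python
import numpy as np
import pymc as pm

# Hypothetical data: 200 people rate how acceptable some action is, on a 0-1 scale.
rng = np.random.default_rng(0)
ratings = rng.beta(2.0, 5.0, size=200)

with pm.Model() as moral_model:
    # The population-level "regularity": a shared mean acceptability mu,
    # with kappa controlling how tightly individual judgments cluster around it.
    mu = pm.Beta("mu", alpha=2.0, beta=2.0)
    kappa = pm.HalfNormal("kappa", sigma=10.0)

    # Individual judgments scatter around the shared regularity.
    pm.Beta("obs", alpha=mu * kappa, beta=(1.0 - mu) * kappa, observed=ratings)

    # Posterior inference over the shared regularity.
    trace = pm.sample(1000, tune=1000, random_seed=0)
```

The red-team question is then about what happens downstream: if a system treats the fitted mu as a genuine moral consensus when it is really a sampling artifact, what decisions could that corrupt, and how badly?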
I also think that the category you’re referring to is very valuable, but those are “simply” contributions to priorities research of the kind published by the Global Priorities Institute (e.g., the working papers by Greaves and Tarsney). Rethink Priorities, Open Phil, FHI, and various individuals also occasionally publish articles that I would class that way. Priorities research is one of the most important fields of EA and much broader than my proposal, but it is also well known, which is why my proposal is not about it.