This is a good idea, but I think you might find that there’s surprisingly little EA consensus. What’s the likelihood that this is the most important century? Should we be funding near-term health treatments for the global poor, or does nothing really matter aside from AI Safety? Is the right ethics utilitarian? Person-affecting? Should you even be a moral realist?
As far as I can tell, EAs (meaning both the general population of uni club attendees and EA Forum readers, alongside the “EA elite” who hold positions of influence at top EA orgs) disagree substantially amongst themselves on all of these really fundamental and critical issues.
What EAs really seem to have in common is an interest in doing the most good, thinking seriously and critically about what that entails, and then actually taking those ideas seriously and executing. As Helen once put it, Effective Altruism is a question, not an ideology.
So I think this could be valuable in theory, but I don’t think your off-the-cuff examples do a good job of illustrating the potential here. For pretty much everything you list, I’m pretty confident that many EAs already disagree, and that these are not actually matters of group-think or even local consensus.
Finally, I think there are questions which are tricky to red-team because so much of the conversation around them is private, undocumented, or otherwise obscured. So if you were conducting this exercise, I don’t think it would make sense as an entry-level thing; I think you would have to find people who are already fairly knowledgeable.
Thanks, those are good points, especially when the focus is on making progress on issues that might be affected by group-think. Relatedly, I also like your idea of getting outside experts to scrutinize EA ideas. I’ve seen OpenPhil pay for expert feedback on at least one occasion, which seems pretty useful.
We were thinking about writing a question post along the lines of “Which ideas, assumptions, programmes, interventions, priorities, etc. would you like to see a red-teaming effort for?”. What do you think about the idea, and would you add something to the question to make it more useful?
And I think what your comment neglects is the value of:
having this fellowship serve as a first stepping-stone for bigger projects in the future (by instilling habits & skills and highlighting the value of similar investigations)
having fellows work on a more serious research project together, which will build stronger ties among them relative to discussion groups and, I expect, will lead to deeper engagement with the ideas