Hi Evan, here’s my response to your comments (including another post of yours from above). By the way, that’s a nice example of industry-compatible research; I agree that such cases can indeed fall within what EAs wish to fund, as long as they are assessed as effective and efficient. I think this is an important debate, so let me challenge some of your points.
Your arguments seem to be based on the assumption that EAs can work on EA-related topics more effectively and efficiently than academics who are not explicitly EA-affiliated (but please correct me if I’ve misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks & safety). While I agree that being an EA can contribute to one’s motivation for a given research topic, I don’t see any rationale for the claim that EAs are more qualified to do scientific research relevant to EA than non-explicit EAs. That would be like claiming that, say, Christians are a priori more qualified to do research that advances Christian values. I think this is a non sequitur.
Whether a certain group of people can conduct a given project effectively and efficiently shouldn’t primarily depend on their ethical and political mindset (though this may play a motivating role, as mentioned above), but on the methodological prospects of the project, its programmatic character, and the capacity of the research group to make an impact. I don’t see why EAs, as such, would satisfy these criteria any better than a domain expert would when placed within the framework of the given project. It is important to keep in mind that we are not talking here about the political activity of spreading EA ideas, but about scientific research, which has to be conducted with the necessary rigor in order to make an impact in the scientific community and beyond (otherwise nobody will care about the researchers’ output). These are the kinds of criteria I wish were applied in the assessment of the given grants, rather than who is an EA and who is not.
Second, prioritizing a certain type of group in a given domain of research increases the danger of confirmation bias. This is why feminist epistemologists have argued for diversity across the scientific community (rather than for the claim that only feminists should do feminist-compatible scientific research).
Finally, if there is a worry that academic projects focus too much on other issues, the call for funding can always be formulated so that it specifies the desired topics. In this way, academic project proposals can be written with EA goals in mind.
Hi Max! I agree that it does provide information, but the problem is that the information is too vague, and it may easily reflect sheer bias (as in: “I don’t like any posts that question the work of OpPhil”). I think this is a strong sentiment in this community, and as an academic who is not affiliated with OpPhil or any other EA organization, I’ve noticed numerous cases of certain problems being silently dismissed. This isn’t an issue for “mainstream” EA topics (points on which the majority here agrees). But as soon as it comes to polarized issues (say, the funding of non-academic institutions to conduct academic research), the majority downvotes without saying a word. I found it quite entertaining (but also disappointing) when I made a longer post on this topic, only to receive a bunch of downvotes without any concrete engagement with the topic. My interpretation of what happened there: people dislike someone making waves in their little pond.
I understand you may wish to proceed as you’ve suggested, but eventually this community will push away dissenters who are very fond of EA but who just don’t see any point in presenting critical arguments on this platform.