If you completely disagree that people who consistently produce bad work should not be allocated scarce funds, I’m not sure we can have a productive conversation.
I theoretically agree, but I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.
For example, I don’t think the average quality of research by non-tenured professors (who are supposedly judged on the merits of their work) is better than that of tenured professors.
I think this might just be unavoidably hard.
Like, it seems clear that funders shouldn’t fund work they strongly & rationally believe is low-quality and/or harmful. It also seems clear that reasonably free and open dissent is crucial for the epistemic health of a research community. But how exactly funders should go about managing that conflict seems very unclear – especially in domains in which infohazards are real and serious (but in which all the standard problems with secrecy still absolutely apply).
I would be interested in concrete proposals for how to do this well, given all the considerations raised in this discussion, as well as case studies of how other communities have maintained epistemic diversity and their relevance to our use case. The main example of the latter would be academia, but that has its own set of severe pathologies – especially in the humanities, which seem to be the fields one would most naturally use as control groups for existential risk studies. I think working on concrete ideas and examples of how we can do better would be more useful than arguing round and round the quality-vs-epistemic-diversity circle.
The paper points out, among many other things, that more diversity in funders would help accomplish most of these goals.
I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each others’ analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.
That said, I don’t think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you’re biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.
Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you’re basically in the same position you were with only one funder. If they’re too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.
I agree, but you said that it would be good to have concrete ideas, and this is as concrete as I can imagine. And so on the meta level, I think it’s a bit unreasonable to criticize the paper’s concrete suggestion by saying that it’s a hard problem, and their ideas would help, but they wouldn’t be a panacea—clearly, if “fixes everything” is the bar for concrete ideas, we should all go home.
On the object level, I agree that there are challenges, but I think the dichotomy between too aligned / too unaligned is overstated in two ways. First, multiple funders who are aligned are still far less likely to end up not funding someone just because they upset a specific person; even in overlapping social circles, this problem is mitigated. Second, I think “too unaligned” describes the current situation, where much funding goes to things that are approximately useless, some fraction goes to good but non-optimal things, and some goes to things that actively increase dangers. And having more semi-aligned money seems unlikely to make EA any more broad and vague than the status quo, where EA is already at least three different movements (global poverty/human happiness, animal welfare, and longtermist existential risk), and probably more, since if you look into those groups you could easily split them further. So I’d actually think that splitting things among distinct funders would be a great improvement, allowing for greater diversity and clarity about what is being funded and pursued.
Firstly, I wasn’t responding to the OP; I was responding several levels deep in a conversation between two other commenters about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn’t go away even in a world with more diverse funding. You brought up “diversify funding” as a solution to that problem, and I responded that it is helpful but insufficient. I didn’t say anything critical of the OP’s proposal in my response. Unless you think the only reasonable response is to stop talking about an issue as soon as one partial solution is raised, I don’t understand your accusation of unreasonableness here at all.
Secondly, “have more diversity in funders” is not remotely a concrete proposal. It’s a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is “as concrete as [you] can imagine” then we are operating under different definitions of “concrete”.
I don’t really want the discussion to focus entirely on the meta level, but the conversation went something like: “we can condition funding on the quality of the researcher’s past work” → “I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors” → “more diversity in funders would help” (which was the original claim in the post!) → “I don’t think this solves the problem... more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.” So I pointed out that more diversity, which was the post’s suggestion and the one I was referring back to, is as concrete a solution as I can imagine to the general issue that “it’s hard to separate judgements about research quality from disagreement with its conclusions.” And I don’t think we’re using different definitions at all: at this point, it seems clear you wanted something more concrete (“have Open Phil split its budget in the following way”), but that wouldn’t have solved the general problem being discussed, which is why I said I can’t imagine a more concrete solution to the problem you were discussing.
In any case, I’m much more interested in the object level discussion of what would help, or not, and why.