That puts a huge and dictatorial responsibility on funders, of exactly the kind the paper argued is inappropriate.
If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I’m not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege; consistently making bad arguments should eventually lead to the withdrawal of funding, and if, on top of that, those bad arguments are harmful to EA causes, that should expedite the decision.
To be clear, that is absolutely not to say that publishing Democratizing Risk is or was justification for firing or cutting funding; I am still very much talking abstractly.
> If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?
I can’t speak for David, but personally I think it’s important that no one does this. Freedom of speech and freedom of research are important, and as long as someone isn’t calling to intentionally harm or discriminate against others, it’s important that we don’t condition funding on agreement with the funders’ views.
So,
> Funding is a privilege; consistently making bad arguments should eventually lead to the withdrawal of funding, and if, on top of that, those bad arguments are harmful to EA causes, that should expedite the decision.

I completely disagree with this.
> Freedom of speech and freedom of research are important, and as long as someone isn’t calling to intentionally harm or discriminate against others, it’s important that we don’t condition funding on agreement with the funders’ views.
This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?
I am not sure that there is actually a disagreement between you and Guy. If I understand correctly, Guy says that insofar as the funder wants research to be conducted to deepen our understanding of a specific topic, the funders should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community. This does not seem to conflict with what you said, as the focus is still on work on that specific topic.
When you say “surely”, what do you mean? It would certainly be legal and moral. Would a body of research generated only by people who agree with a specific assumption be better in terms of truth-seeking than that of researchers receiving unconditional funding? Of that I’m not sure.
And now suppose it’s hard to measure whether a researcher conforms with the initial assumption, and in practice this is done by continual qualitative evaluation by the funder. Is it then really only that initial assumption (e.g. animals deserve moral consideration) that’s the condition for funding, or is it now a measure of how much the research conforms with the funder’s specific conclusions from that assumption (e.g. that welfarism is good)? In this case I have serious doubts about whether the research produces valuable results (cf. publication bias).
I suspect I disagree with the users that are downvoting this comment. The considerations Guy raises in the first half of this comment are real and important, and the strong form of the opposing view (that anyone should be “responsible for ensuring harmful and wrong ideas are not widely circulated” through anything other than counterargument) is seriously problematic and, in my view, prone to lead to some pretty dark places.
A couple of commenters here have edged closer to this strong view than I’m comfortable with, and I’m happy to see pushback against that. If strong forms of this view are more prevalent among the community than I currently think, that would for me be an update in favour of the claims made in this post/paper.
That said, I do agree that “consistently making bad arguments should eventually lead to the withdrawal of funding”, and that this problem is hard (see my other reply to Guy below).
I also agree with you. I would find it very problematic if anyone was trying to “ensure harmful and wrong ideas are not widely circulated”. Ideas should be argued against, not suppressed.
All ideas? Instructions for how to make contact poisons that aren’t traceable? Methods for identifying vulnerabilities in nuclear weapons arsenals’ command and control systems? Or, concretely and relevantly, ideas about which ways to make omnicidal bioweapons are likely to succeed.
You can tell me that making information more available is good, and I agree in almost all cases. But only almost all.
It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like “instructions” than “arguments”, and Rubi was calling for suppressing arguments on the grounds that they might be believed.
The claim was a general one—I certainly don’t think that the paper was an infohazard, but the idea that this implies that there is no reason for funders to be careful about what they fund seems obviously wrong.
The original question was: “If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?”
And I think we need to be far more nuanced about the question than a binary response about all responsibility for funding.
> it’s important that we don’t condition funding on agreement with the funders’ views.
Surely we can condition funding on the quality of the researcher’s past work though? Freedom of speech and freedom of research are both important, but taking a heterodox approach shouldn’t guarantee a sinecure either.
If you completely disagree that people consistently producing bad work should eventually be denied scarce funds, I’m not sure we can have a productive conversation.
I theoretically agree, but I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.
For example, I don’t think the average quality of research by non-tenured professors (who are supposedly judged by the merits of their work) is better than that of tenured professors.
I think this might just be unavoidably hard.

Like, it seems clear that funders shouldn’t fund work they strongly & rationally believe is low-quality and/or harmful. It also seems clear that reasonably free and open dissent is crucial for the epistemic health of a research community. But how exactly funders should go about managing that conflict seems very unclear – especially in domains in which infohazards are real and serious (but in which all the standard problems with secrecy still absolutely apply).
I would be interested in concrete proposals for how to do this well, given all the considerations raised in this discussion, as well as case studies of how other communities have maintained epistemic diversity and their relevance to our use case. The main example of the latter would be academia, but that has its own set of severe pathologies – especially in the humanities, which seem to be the fields one would most naturally use as control groups for existential risk studies. I think working on concrete ideas and examples of how we can do better would be more useful than arguing round and round the quality-vs-epistemic-diversity circle.
The paper points out, among many other things, that more diversity in funders would help accomplish most of these goals.

I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each others’ analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.
That said, I don’t think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you’re biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.
Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you’re basically in the same position you were with only one funder. If they’re too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.
I agree, but you said that it would be good to have concrete ideas, and this is as concrete as I can imagine. So on the meta level, I think it’s a bit unreasonable to criticize the paper’s concrete suggestion by saying that it’s a hard problem and that their ideas would help but wouldn’t be a panacea. Clearly, if “fixes everything” is the bar for concrete ideas, we should all go home.
On the object level, I agree that there are challenges, but I think the dichotomy between too few and too many is overstated in two ways. First, I think that multiple funders who are aligned are still far less likely to end up not funding people because those people upset someone specific; even in overlapping social circles, this problem is mitigated. And second, I think that “too unaligned” is the current situation, where much funding goes to things that are approximately useless, some fraction goes to good but non-optimal things, and some goes to things that actively increase dangers. Having more semi-aligned money seems unlikely to make EA broader and vaguer than the status quo, where EA is already at least three different movements (global poverty/human happiness, animal welfare, and longtermist existential risk), and probably more, since if you look into those groups you could easily split them further. So I’d actually think that splitting things into distinct funders would be a great improvement, allowing for greater diversity and clarity about what is being funded and pursued.
Firstly, I wasn’t responding to the OP, I was responding several levels into a conversation between two different commentators about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn’t go away even in a world with more diverse funding. You brought up “diversify funding” as a solution to that problem, and I responded that it is helpful but insufficient. I didn’t say anything critical of the OP’s proposal in my response. Unless you think the only reasonable response would be to stop talking about an issue as soon as one partial solution is raised, I don’t understand your accusation of unreasonableness here at all.
Secondly, “have more diversity in funders” is not remotely a concrete proposal. It’s a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is “as concrete as [you] can imagine” then we are operating under different definitions of “concrete”.
I don’t really want the discussion to focus entirely on the meta-level, but the conversation went something like: “we can condition funding on the quality of the researcher’s past work” → “I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.” → “more diversity in funders would help” (which was the original claim in the post!) → “I don’t think this solves the problem... more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.” So I pointed out that more diversity in funders, the post’s suggestion that I was referring back to, was as concrete a solution as I can imagine to the general issue that “it’s hard to separate judgements about research quality from disagreement with its conclusions”. But I don’t think we’re using different definitions at all. At this point, it seems clear you wanted something more concrete (“have Openphil split its budget in the following way”), but that wouldn’t have solved the general problem being discussed, which is why I said I can’t imagine a more concrete solution to the problem you were discussing.
In any case, I’m much more interested in the object level discussion of what would help, or not, and why.