I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus. Then, while I thought the original description of the TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like “it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums” were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology? Not a single mention of the difficulty of incorporating future generations into the democratic process?
Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking. I believe that was the major contributor to the blowback you have received.
I agree that more diversity in funders would be beneficial. It is harmful to all researchers if access to future funding is dependent on the results of their work. Overall, the actual extent of the blowback is unclear from your post. What does “tried to prevent the paper being published” mean? Is the threat of withdrawn funding real or imagined? Were the authors whose work was criticized angry, and did they take any actions to retaliate?
Finally, I would like to abstract away from this specific paper. Criticism of the dominant paradigm limiting future funding and career opportunities is a sign of terrible epistemics in a field. However, poor criticism of the dominant paradigm limiting future funding and career opportunities is completely valid. The one line you wrote that I think all EAs would agree with is “This is not a game. Fucking it up could end really badly”. If the wrong arguments being made would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach. Of course, the difficulty is in differentiating criticisms that are wrong from criticisms that merely go against the current paradigm, while judging from within that paradigm. The responsibility of the researcher is to make their case as bulletproof as possible, and to design it to convince believers in the current paradigm. Otherwise, even if their claims are correct, they won’t make an impact. The “Effective” part of EA includes making the right arguments to convince the right people, rather than the argument that is cathartic to unleash.
I would agree that the article is too wide-ranging. There’s a whole host of content, ranging from criticisms of expected value theory and arguments for degrowth to arguments for democracy and criticisms of specific risk estimates. I agreed with some parts of the paper, but it is hard to engage with such a wide range of topics.
Where? The paper doesn’t mention economic growth at all.
The paper doesn’t explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.
“For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential.” Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don’t seem bothered by this. That said, they have little space to explain how their implied society would handle the issue, so I will not critique it excessively.
There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
“Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated”
“The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible”
“regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option” implies to me that one of those three options is a feasible option, or is at least worth investigating.
While they don’t explicitly advocate degrowth, I think it is reasonable to read them as doing so, as John does.
“For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential.”
Point taken. Thank you for pointing this out.
“The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible” “regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option” implies to me that one of those three options is a feasible option, or is at least worth investigating.
I think this is more about stopping the development of specific technologies—for example, they suggest that stopping AGI from being developed is an option. Stopping the development of certain technologies isn’t necessarily related to degrowth—for example, many jurisdictions now ban government use of facial recognition technology, and there have been calls to abolish its use, but these are motivated by civil liberties concerns.
I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.
Suggesting that a future without industrialization is morally tolerable does not imply opposition to “any and all” technological progress, but the amount of space left is very small. I don’t think they’re taking an opinion on the value of better fishhooks.
It is morally tenable under some moral codes but not others. That’s the point.
Several times the case against the TUA was not actually argued
I think that they didn’t try to oppose the TUA in the paper, or make the argument against it themselves. To quote: “We focus on the techno-utopian approach to existential risk for three reasons. First, it serves as an example of how moral values are embedded in the analysis of risks. Second, a critical perspective towards the techno-utopian approach allows us to trace how this meshing of moral values and scientific analysis in ERS can lead to conclusions, which, from a different perspective, look like they in fact increase catastrophic risk. Third, it is the original and by far most influential approach within the field.”
I also think that they don’t need to prove that others are wrong to show that the lack of diversity has harms—as you agreed.
If the wrong arguments being made would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach.
That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.
The responsibility of the researcher is to make their case as bulletproof as possible, and to design it to convince believers in the current paradigm… The “Effective” part of EA includes making the right arguments to convince the right people, rather than the argument that is cathartic to unleash.
That’s not how at least some people who lead the movement think about it.
If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I’m not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege: consistently making bad arguments should eventually lead to the withdrawal of funding, and if, on top of that, those bad arguments are harmful to EA causes, that should expedite the decision.
To be clear, that is absolutely not to say that publishing Democratizing Risk is/was justification for firing or cutting funding; I am still very much talking abstractly.
If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?
I can’t speak for David, but personally I think it’s important that no one does this. Freedom of speech and freedom of research are important, and as long as someone doesn’t call to intentionally harm or discriminate against another, it’s important that we don’t condition funding on agreement with the funders’ views.
So,
Funding is a privilege: consistently making bad arguments should eventually lead to the withdrawal of funding, and if, on top of that, those bad arguments are harmful to EA causes, that should expedite the decision.
I completely disagree with this.
Freedom of speech and freedom of research are important, and as long as someone doesn’t call to intentionally harm or discriminate against another, it’s important that we don’t condition funding on agreement with the funders’ views.
This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?
I am not sure that there is actually a disagreement between you and Guy. If I understand correctly, Guy says that insofar as the funder wants research to be conducted to deepen our understanding of a specific topic, the funders should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community. This does not seem to conflict with what you said, as the focus is still on work on that specific topic.
When you say “surely”, what do you mean? It would certainly be legal and moral. Would a body of research generated only by people who agree with a specific assumption be better in terms of truth-seeking than that of researchers receiving unconditional funding? Of that I’m not sure.
And now suppose it’s hard to measure whether a researcher conforms with the initial assumption, and that in practice this is done by continual qualitative evaluation by the funder—is it now really only that initial assumption (e.g. animals deserve moral consideration) that’s the condition for funding, or is it now a measure of how much the research conforms with the funder’s specific conclusions from that assumption (e.g. that welfarism is good)? In this case I have serious doubts about whether the research produces valuable results (cf. publication bias).
I suspect I disagree with the users that are downvoting this comment. The considerations Guy raises in the first half of this comment are real and important, and the strong form of the opposing view (that anyone should be “responsible for ensuring harmful and wrong ideas are not widely circulated” through anything other than counterargument) is seriously problematic and, in my view, prone to lead to some pretty dark places.
A couple of commenters here have edged closer to this strong view than I’m comfortable with, and I’m happy to see pushback against that. If strong forms of this view are more prevalent among the community than I currently think, that would for me be an update in favour of the claims made in this post/paper.
That said, I do agree that “consistently making bad arguments should eventually lead to the withdrawal of funding”, and that this problem is hard (see my other reply to Guy below).
I also agree with you. I would find it very problematic if anyone was trying to “ensure harmful and wrong ideas are not widely circulated”. Ideas should be argued against, not suppressed.
All ideas? Instructions for how to make contact poisons that aren’t traceable? Methods for identifying vulnerabilities in nuclear weapons arsenals’ command and control systems? Or, concretely and relevantly, ideas about which ways to make omnicidal bioweapons are likely to succeed.
You can tell me that making information more available is good, and I agree in almost all cases. But only almost all.
It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like “instructions” than “arguments”, and Rubi was calling for suppressing arguments on the danger that they would be believed.
The claim was a general one—I certainly don’t think that the paper was an infohazard, but the idea that this implies that there is no reason for funders to be careful about what they fund seems obviously wrong.
The original question was: “If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?”
And I think we need to be far more nuanced about the question than a binary response about all responsibility for funding.
it’s important that we don’t condition funding on agreement with the funders’ views.
Surely we can condition funding on the quality of the researcher’s past work though? Freedom of speech and freedom of research are both important, but taking a heterodox approach shouldn’t guarantee a sinecure either.
If you completely disagree that people consistently producing bad work should not be allocated scarce funds, I’m not sure we can have a productive conversation.
If you completely disagree that people consistently producing bad work should not be allocated scarce funds, I’m not sure we can have a productive conversation.
I theoretically agree, but I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.
For example, I don’t think the average research quality by non-tenured professors (who are supposedly judged by the merits of their work) is better than that of tenured professors.
I think this might just be unavoidably hard. Like, it seems clear that funders shouldn’t fund work they strongly & rationally believe is low-quality and/or harmful. It also seems clear that reasonably free and open dissent is crucial for the epistemic health of a research community. But how exactly funders should go about managing that conflict seems very unclear – especially in domains in which infohazards are real and serious (but in which all the standard problems with secrecy still absolutely apply).
I would be interested in concrete proposals for how to do this well, given all the considerations raised in this discussion, as well as case studies of how other communities have maintained epistemic diversity and their relevance to our use case. The main example of the latter would be academia, but that has its own set of severe pathologies – especially in the humanities, which seem to be the fields one would most naturally use as control groups for existential risk studies. I think working on concrete ideas and examples of how we can do better would be more useful than arguing round and round the quality-vs-epistemic-diversity circle.
The paper points out, among many other things, that more diversity in funders would help accomplish most of these goals.
I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each others’ analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.
That said, I don’t think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you’re biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.
Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you’re basically in the same position you were with only one funder. If they’re too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.
I agree, but you said that it would be good to have concrete ideas, and this is as concrete as I can imagine. And so on the meta level, I think it’s a bit unreasonable to criticize the paper’s concrete suggestion by saying that the problem is hard and that, while their ideas would help, they wouldn’t be a panacea—clearly, if “fixes everything” is the bar for concrete ideas, we should all go home.
On the object level, I agree that there are challenges, but I think the dichotomy between too few / too many is overstated in two ways. First, I think that multiple funders who are aligned are still far less likely to end up not funding people because they upset someone specific. Even in overlapping social circles, this problem is mitigated. And second, I think that “too unaligned” is the current situation, where much funding goes to things that are approximately useless, some fraction goes to good but non-optimal things, and some goes to things that actively increase dangers. And having more semi-aligned money seems unlikely to make EA more broad and vague than the status quo, where EA is at least three different movements (Global poverty/human happiness, Animal welfare, and longtermist existential risk) and probably more, since if you look into those groups you could easily split them further. So I’d actually think that splitting things into distinct funders would be a great improvement, allowing for greater diversity and clarity about what is being funded and pursued.
Firstly, I wasn’t responding to the OP, I was responding several levels into a conversation between two different commentators about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn’t go away even in a world with more diverse funding. You brought up “diversify funding” as a solution to that problem, and I responded that it is helpful but insufficient. I didn’t say anything critical of the OP’s proposal in my response. Unless you think the only reasonable response would be to stop talking about an issue as soon as one partial solution is raised, I don’t understand your accusation of unreasonableness here at all.
Secondly, “have more diversity in funders” is not remotely a concrete proposal. It’s a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is “as concrete as [you] can imagine” then we are operating under different definitions of “concrete”.
I don’t really want the discussion to focus entirely on the meta-level, but the conversation went something like “we can condition funding on the quality of the researcher’s past work” → “I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.” → “more diversity in funders would help” (which was the original claim in the post!) → “I don’t think this solves the problem... more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.” So I pointed out that more diversity, the post’s suggestion I was referring back to, was as concrete a solution as I can imagine to the general issue that “it’s hard to separate judgements about research quality from disagreement with its conclusions”. But I don’t think we’re using different definitions at all. At this point, it seems clear you wanted something more concrete (“have Openphil split its budget in the following way”), but that wouldn’t have solved the general problem which was being discussed. Which was why I said I can’t imagine a more concrete solution to the problem you were discussing.
In any case, I’m much more interested in the object level discussion of what would help, or not, and why.
I mostly agree with your comments, but I think we need to stop referring to specific people as leaders of the movement. Will MacAskill’s opinion is not really more important than anyone else’s.
I disagree pragmatically and conceptually. First, people pay more attention to Will than to me about this, and that’s good, since he’s spent more time thinking about it, is smarter, and has more insight into what is happening. Second, in fact, movements have leaders, and egalitarianism is great for rights, but direct democracy is a really bad solution for running anything which wants to get anything done. (Which seems to be a major thing I disagree with the authors of the article on.)
Saying the paper is poorly argued is not particularly helpful or convincing. Could you highlight where and why, Rubi? Breadth does not de facto mean poorly argued. If that were the case, then most of the key texts in x-risk would all be poorly argued.
Importantly, breadth was necessary to make a critique. There are simply many interrelated matters that are worth critical analysis.
Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus.
As David highlights in his response: we do not argue against the TUA, but point out the unanswered questions we observed. We do not argue against the TUA, but highlight assumptions that may be incorrect or smuggle in values. Interestingly, it’s hard to see how you can believe the piece is both a polemic and yet does not critique the TUA directly enough. Those two criticisms are in tension.
If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research).
You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology?
Of course there are arguments for it, some of which are discussed in the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.
I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking.
Then it wouldn’t be a critique of the TUA. It would be a piece on differential tech development or hazard-centrism.
This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh. He both said that we should zoom in and focus on one section and said that we should zoom out and compare the TUA against all (?) potential alternatives. The recommendations are in tension and only share the commonality of making sure we write a paper that isn’t a critique.
We see many remaining problems in x-risk. This paper is an attempt to list those issues and point out weaknesses and areas for improvement. It should be read like a research agenda.
The abstract and conclusion clearly spell out the areas where we take a clear position, such as the need for diversity in the field, taking lessons from complex risk assessments in other areas, and democratising policy recommendations. We do not articulate a position on degrowth, differential tech development, etc. We highlight that the existing evidence base and arguments for them are weak.
We do not position ourselves in many cases, because we believe they require further detailed work and deliberation. In that sense I agree with you that we’re covering too much—but only if the goal was to present clear positions on all these points. Since this was not the goal, I think it’s fine to list many remaining questions and point out that indeed they still are questions that require answers. If you have a strong opinion on any of the questions we mention, then go ahead, write a paper that argues for one side, publish it, and let’s get on with the science.
Seán also called the paper a polemic several times (by definition: a strong, hostile written or verbal attack). This is not necessarily an insult (Orwell’s Animal Farm is considered a polemic against totalitarianism), but I’m guessing it was not meant in that way.
We are somewhat disappointed that one of the most upvoted responses on the forum to our piece is so vague and unhelpful. We would expect a community that has such high epistemic standards to reward comments that articulate clear, specific criticisms grounded in evidence and capable of being acted on.
Finally, on the ‘speaking abstractly’ point about funding: it is hard not to see this as an insinuation that we have consistently produced such poor scholarship that it would justify withdrawing funding. Again, this does not signal anything positive about the epistemics, or just the sheer civility, of the community.
Hi Carla,
Thanks for taking the time to engage with my reply. I’d like to respond to a few of the points you made.
First of all, my point prefaced with ‘speaking abstractly’ was genuinely that. I thought your paper was poorly argued, but certainly within acceptable limits, such that it should not result in withdrawn funding. Over a sufficient timeframe, everybody will put out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low-quality work is guaranteed some share of scarce funding merely out of fear that withdrawing such funding would be seen as censorship. It’s a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I’m sorry you saw my abstraction as a personal attack.
You said “we do not argue against the TUA, but point out the unanswered questions we observed... but highlight assumptions that may be incorrect or smuggle in values”. Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemical is that you do not sufficiently check whether the questions really are unanswered, or whether the assumptions really are incorrect. There is no tension between calling your paper polemical and saying you do not sufficiently critique the TUA. A more thorough critique that took counterarguments seriously and tried to address them would not be a polemic, as it would more clearly be driven by truth-seeking than hostility.
I was not “asking that we [you] articulate and address every hypothetical counterargument”; I was asking that you address any, especially the most obvious ones. Don’t just state that “it is unclear why” they are believed in order to skip over a counterargument.
I am disappointed that you used my original post to further attack the epistemics of this community, and doubly so for claiming it failed to articulate clear, specific criticisms. The post was clear that the main failing I saw in your paper was a lack of engagement with counterarguments, specifically the case for technological differentiation and the case for avoiding the disenfranchisement of future generations through a limited democracy. I do not believe that my criticism of the paper jumping around too much rather than engaging deeply on fewer issues was ambiguous either. Ignoring these clear, specific criticisms to use the post as evidence of poor epistemics in the EA community makes me think you may be interpreting any disagreement as evidence for your point.
You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology?
Personally I read this as a straightforward accusation of dishonesty—something I would expect moderators to object to if the comment was critical (rather than supportive) of EA orthodoxy.
This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh
I also found it suspicious that Rubi felt the need to comment using an anonymous throwaway account despite speaking in favor of established power structures.
To clarify, that’s not to say Rubi is necessarily Seán Ó hÉigeartaigh. I have no idea and I don’t know Seán.
However, this situation is very strange. Almost everyone on the EAforum uses their real name or a very thin pseudonym.
I am anonymous because vocally disagreeing with the status quo would probably destroy any prospects of getting hired or funded by EA orgs (see my heavily downvoted comment about my experiences somewhere at the bottom of this thread).
Seán Ó hÉigeartaigh here. Since I have been named specifically, I would like to make it clear that when I write here, I do so under Sean_o_h, and have only ever done so. I am not Rubi, and I don’t know who Rubi is. I ask that the moderators check IP addresses, and reach out to me for any information that can help confirm this.
I am on leave and have not read the rest of this discussion, or the current paper (which I imagine is greatly improved from the draft I saw), so I will not participate further in this discussion at this time.
To clear up my identity, I am not Seán and do not know him. I go by Rubi in real life, although it is a nickname rather than my given name. I did not mean for my account to be an anonymous throwaway, and I intend to keep on using this account on the EA Forum. I can understand how that would not be obvious as this was my first post, but that is coincidental. The original post generated a lot of controversy, which is why I saw it and decided to comment.
You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology?
I would have genuinely liked an answer to this. If none of the reviewers made the case, that is useful information about the selection of the reviewers. If some reviewers did make the case but were ignored by the authors, then it reflects negatively on the authors to leave it unaddressed and to say that the case for differential technology is unclear.
I am anonymous because vocally disagreeing with the status quo would probably destroy any prospects of getting hired or funded by EA orgs (see my heavily downvoted comment about my experiences somewhere at the bottom of this thread).
This clearly doesn’t apply to Rubi, so what’s up?
There are many reasons for people to use pseudonyms on the Forum, and we allow it with few restrictions. It’s also fine to have multiple accounts.
To clarify, that’s not to say Rubi is necessarily Seán Ó hÉigeartaigh. I have no idea and I don’t know Seán.
However, this situation is very strange.
What exactly is “suspicious” or “strange” here? What is the thing you suspect, and is that thing against the Forum’s rules? If not, do you think it should be?
Using vague insinuations instead of straightforwardly accusing someone doesn’t change the result — which is that Seán understandably feels like he’s been called out and needs to deny the “non-accusation”. What were you trying to accomplish by talking about Seán here?
*****
You’ve now made several comments in this thread that were rude or insulting towards other users. That’s not okay, whether or not your position happens to align with any “status quo”. (See these examples of comments being moderated for exactly this reason despite their position on the “popular” side of whatever thread they were a part of.)
If you want to object to someone’s argument, state your objection. Explain why they’re wrong, or what they’ve missed. This is almost always better than “I find this user suspicious” or “this user is acting in bad faith”.
Several of your comments on this thread were good. I appreciated the links here and some of the questions here. But if you continue posting rude or insulting comments, the moderation team may take action.
Personally I read this as a straightforward accusation of dishonesty—something I would expect moderators to object to if the comment was critical (rather than supportive) of EA orthodoxy.
As a moderator, I wouldn’t object to this comment no matter who made it. I see it as a criticism of someone’s work, not an accusation that the person was dishonest.
If someone wrote a paper critiquing the differential technology paradigm and spoke to lots of reviewers about it — including many who were known to be pro-DT — but didn’t cite any pro-DT arguments, it would be fine for someone to ask: “Did you really not hear any cases for the DT paradigm?”
The question doesn’t have to mean “you deliberately acted like there were no good pro-DT arguments and hoped we would believe you”. That would frankly be a silly thing to say, since Carla and Luke are obviously familiar with these arguments and know that many of their readers would also be familiar with these arguments.
It could also imply:
1. “You didn’t ask the kinds of questions of reviewers that would lead them to spell out their cases for DT”
2. “You didn’t make room in your paper to discuss the pro-DT arguments you heard, and I think you should have”
Or, more straightforwardly, you could avoid assuming any particular implication and just read the question as a question: “Why were there no pro-DT arguments in your piece?”
I personally read implication (1), because of the statement ”...made it seem like you did not do your research”.
Carla’s response read to me as a response to implication (2): “We chose not to discuss pro-DT arguments, because trying to give that kind of space to counterarguments for all of our points would be beyond the scope of our paper.” Which is a fine, reasonable response.
I think Rubi’s comment should have been more clear; it’s more important for questioners to ask good questions than for respondents to correctly guess at what the questioner meant.
Overall, as a moderator, my response to this part of Rubi’s comment is “this is unclear and could mean many things — perhaps one of these things is uncivil, but Carla answered a civil version of the question, and I’m not going to deliberately choose to interpret the question as the most uncivil version of itself.”
*****
On the level of meta-moderation, these are the things I personally look for*, in rough priority order (other mods may differ):
1. Comments that clearly insult another user
2. Comments that include an information hazard or advocate for seriously harmful action
3. Comments that interfere with good discourse in other ways
If you say “Rubi’s comment is unclear, which means it’s in category (3)” — you’d be right, but there are a lot of comments that are unclear, and it isn’t realistic for moderators to respond to more than a tiny fraction of them, which means I focus on comments in the first two categories.
If you say “Rubi’s comment could be taken to imply an insult, which means it’s in category (1)” — I disagree, because I don’t see any insulting read as “clear”, and there are plenty of other ways to interpret the comment.
And of course, the specific position someone takes in a debate has no bearing on how we moderate, unless a particular position is in category (2) (“we should release a plague to kill everyone”).
*I should also mention that I’m a human with limited human attention. So I’m not going to see every comment on every post. That’s why every post comes with a “report” option, which people should really use if they think a comment should be moderated.
If you report a post or comment, one or more mods will definitely look at it and at least consider your argument for why it was reportable.
Something not being moderated doesn’t imply that it’s definitely fine — it could also mean the mods haven’t read it, or that the mods didn’t read it with “moderator vision” on. There have been times I read a comment in my off time, then saw the same comment reported later and said “oh, huh, this probably should be moderated”.
I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus. Then, while I thought the original description of TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like “it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums” were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology? Not a single mention of the difficulty of incorporating future generations into the democratic process?
Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking. I believe that was the major contributor to blowback you have received.
I agree that more diversity in funders would be beneficial. It is harmful to all researchers if access to future funding is dependent on the results of their work. Overall, it is unclear from your post the actual extent of the blowback. What does “tried to prevent the paper being published” mean? Is the threat of withdrawn funding real or imagined? Were the authors whose work was criticized angry, and did they take any actions to retaliate?
Finally, I would like to abstract away from this specific paper. Criticisms of the dominant paradigm limiting future funding and career opportunities is a sign of terrible epistemics in a field. However, poor criticisms of the dominant paradigm limiting future funding and career opportunities is completely valid. The one line you wrote that I think all EAs would agree with is “This is not a game. Fucking it up could end really badly”. If the wrong arguments being made would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach. Of course, the difficulty is in differentiating wrong criticisms from criticisms against the current paradigm, while within the current paradigm. The responsibility of the researcher is to make their case as bulletproofed as possible, and designed to convince believers in the current paradigm. Otherwise, even if their claims are correct, they won’t make an impact. The “Effective” part of EA includes making the right arguments to convince the right people, rather than the argument that is cathartic to unleash.
I would agree that the article is too wide-ranging. There’s a whole host of content ranging from criticisms of expected value theory, arguments for degrowth, arguments for democracy, and then criticisms of specific risk estimates. I agreed with some parts of the paper, but it is hard to engage with such a wide range of topics.
Where? The paper doesn’t mention economic growth at all.
The paper doesn’t explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.
”For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential.” Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don’t seem bothered by this. But they have little enough space to explain how their implied society would handle the issue, and I will not critique it excessively.
There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
”Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated”
“The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible”
”regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option” implies to me that one of those three options is a feasible option, or is at least worth investigating.
While they don’t explicitly advocate degrowth, I think it is reasonable to read them as doing such, as John does.
Point taken. Thank you for pointing this out.
I think this is more about stopping the development of specific technologies—for example, they suggest that stopping AGI from being developed is an option. Stopping the development of certain technologies isn’t necessarily related to degrowth—for example, many jurisdictions now ban government use of facial recognition technology, and there have been calls to abolish its use, but these are motivated by civil liberties concerns.
I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.
Suggesting that a future without industrialization is morally tolerable does not imply opposition to “any and all” technological progress, but the amount of space left is very small. I don’t think they’re taking an opinion on the value of better fishhooks.
It is morally tenable under some moral codes but not others. That’s the point.
I think that they didn’t try to oppose the TUA in the paper, or make the argument against it themselves. To quote: “We focus on the techno-utopian approach to existential risk for three reasons. First, it serves as an example of how moral values are embedded in the analysis of risks. Second, a critical perspective towards the techno-utopian approach allows us to trace how this meshing of moral values and scientific analysis in ERS can lead to conclusions, which, from a different perspective, look like they in fact increase catastrophic risk. Third, it is the original and by far most influential approach within the field.”
I also think that they don’t need to prove that others are wrong to show that the lack of diversity has harms—as you agreed.
That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.
That’s not how at least some people who lead the movement think about it.
If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I’m not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege, consistently making bad arguments should eventually lead to the withdrawal of funding, and if on top of that those bad arguments are harmful to EA causes that should expedite the decision.
To be clear, that is absolutely not to say that publishing Democratizing Risk is/was justification for firing or cut funding, I am still very much talking abstractly.
I can’t speak for David, but personally I think it’s important that no one does this. Freedom of speach and freedom of research are important, and as long as someone doesn’t call to intentionally harm or discriminate against another, it’s important that we don’t condition funding on agreement with the funders’ views.
So,
I completely disagree with this.
This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?
I am not sure that there is actually a disagreement between you and Guy.
If I understand correctly, Guy says that in so far as the funder wants research to be conducted to deepen our understanding of a specific topic, the funders should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community.
This does not seem to conflict what you said, as the focus is still on work on that specific topic.
When you say “surely”, what do you mean? It would certainly be legal and moral. Would a body of research generated only by people who agree with a specific assumption be better in terms of truth-seeking than that of researchers receiving unconditional funding? Of that I’m not sure.
And now suppose it’s hard to measure whether a researcher conforms with the initial assumption, and in practice it is done by continual qualitative evaluation by the funder—is it now really only that initial assumption (e.g. animals deserve moral consideration) that’s the condition for funding, or is it now a measure of how much the research conforms with the funder’s specific conclusions from that assumption (e.g. that welfarism is good)? In this case I have a serious doubt about whether the research produces valuable results (cf. publication bias).
I suspect I disagree with the users that are downvoting this comment. The considerations Guy raises in the first half of this comment are real and important, and the strong form of the opposing view (that anyone should be “responsible for ensuring harmful and wrong ideas are not widely circulated” through anything other than counterargument) is seriously problematic and, in my view, prone to lead to some pretty dark places.
A couple of commenters here have edged closer to this strong view than I’m comfortable with, and I’m happy to see pushback against that. If strong forms of this view are more prevalent among the community than I currently think, that would for me be an update in favour of the claims made in this post/paper.
That said, I do agree that “consistently making bad arguments should eventually lead to the withdrawal of funding”, and that this problem is hard (see my other reply to Guy below).
I also agree with you. I would find it very problematic if anyone was trying to “ensure harmful and wrong ideas are not widely circulated”. Ideas should be argued against, not suppressed.
All ideas? Instructions for how to make contact poisons that aren’t traceable? Methods for identifying vulnerabilities in nuclear weapons arsenals’ command and control systems? Or, concretely and relevantly, ideas about which ways to make omnicidal bioweapons are likely to succeed.
You can tell me that making information more available is good, and I agree in almost all cases. But only almost all.
It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like “instructions” than “arguments”, and Rubi was calling for suppressing arguments on the danger that they would be believed.
The claim was a general one—I certainly don’t think that the paper was an infohazard, but the idea that this implies that there is no reason for funders to be careful about what they fund seems obviously wrong.
The original question was: “If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?”
And I think we need to be far more nuanced about the question than a binary response about all responsibility for funding.
>it’s important that we don’t condition funding on agreement with the funders’ views.
Surely we can condition funding on the quality of the researcher’s past work though? Freedom of speech and freedom of research are both important, but taking a heterodox approach shouldn’t guarantee a sinecure either.
If you completely disagree that people consistently producing bad work should not be allocated scare funds, I’m not sure we can have a productive conversation.
I theoretically agree, but I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.
For example, I don’t think the average research quality by non-tenured professors (who are supposedly judged by the merits of their work) is better than that of tenured professors.
I think this might just be unavoidably hard.
Like, it seems clear that funders shouldn’t fund work they strongly & rationally believe is low-quality and/or harmful. It also seems clear that reasonably free and open dissent is crucial for the epistemic health of a research community. But how exactly funders should go about managing that conflict seems very unclear – especially in domains in which infohazards are real and serious (but in which all the standard problems with secrecy still absolutely apply).
I would be interested in concrete proposals for how to do this well, given all the considerations raised in this discussion, as well as case studies of how other communities have maintained epistemic diversity and their relevance to our use case. The main example of the latter would be academia, but that has its own set of severe pathologies – especially in the humanities, which seem to be the fields one would most naturally use as control groups for existential risk studies. I think working on concrete ideas and examples of how we can do better would be more useful than arguing round and round the quality-vs-epistemic-diversity circle.
The paper points out, among many other things, that more diversity in funders would help accomplish most of these goals.
I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each others’ analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.
That said, I don’t think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you’re biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.
Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you’re basically in the same position you were with only one funder. If they’re too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.
I agree, but you said that it would be good to have concrete ideas, and this is as concrete as I can imagine. And so on the meta level, I think it’s a bit unreasonable to criticize the paper’s concrete suggestion by saying that it’s a hard problem, and their ideas would help, but they wouldn’t be a panacea—clearly, if “fixes everything” is the bar for concrete ideas, we should all go home.
On the object level, I agree that there are challenges, but I think the dichotomy between too few / too many is overstated in two ways. First, I think that multiple funders who are aligned are still far less likely to end up not funding people because they upset someone specific. Even in overlapping social circles, this problem is mitigated. And second, I think that “too unaligned” is the current situation, where much funding goes to things that are approximately useless, some fraction goes to good but non-optimal things, and some goes to things that actively increases dangers. And having more semi-aligned money seems unlikely to lead to EA being to broad and vague than the status quo, where EA is at least 3 different movements (Global poverty/human happiness, Animal welfare, and longtermist existential risk,) and probably more, since if you look into those groups you could easily split them further. So I’d actually think that splitting things into distinct funders would be a great improvement, allowing for greater diversity and clarity about what is being funded and pursued.
Firstly, I wasn’t responding to the OP, I was responding several levels into a conversation between two different commentators about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn’t go away even in a world with more diverse funding. You brought up “diversify funding” as a solution to that problem, and I responded that it is helpful but insufficient. I didn’t say anything critical of the OP’s proposal in my response. Unless you think the only reasonable response would be to stop talking about an issue as soon as one partial solution is raised, I don’t understand your accusation of unreasonableness here at all.
Secondly, “have more diversity in funders” is not remotely a concrete proposal. It’s a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is “as concrete as [you] can imagine” then we are operating under different definitions of “concrete”.
I don’t really want the discussion to focus entirely on the meta-level, but the conversation went something like “we can condition funding on the quality of the researcher’s past work ” → “I think it’s hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.” → “more diversity in funders would help” (which was the original claim in the post!) → “I don’t think this solves the problem...more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.” So I pointed out that more diversity, which was the post’s suggestion, that I was referring back to, was as concrete a solution to the general issue of ” it’s hard to separate judgements about research quality from disagreement with its conclusions” as I can imagine. But I don’t think we’re using different definitions at all. At this point, it seems clear you wanted something more concrete (“have Openphil split it’s budget in the following way,”) but it wouldn’t have solved the general problem which was being discussed. Which was why I said I can’t imagine a more concrete solution to the problem you were discussing.
In any case, I’m much more interested in the object level discussion of what would help, or not, and why.
I mostly agree with your comments, but I think we need to stop referring to specific people as leaders of the movement. Will MacAskill’s opinion is not really more important than anyone else’s.
I disagree pragmatically and conceptually. First, people pay more attention to Will than to me about this, and that’s good, since he’s spent more time thinking about it, is smarter, and has more insight into what is happening. Second, in fact, movements have leaders, and egalitarianism is great for rights, but direct democracy a really bad solution to running anything which wants to get anything done. (Which seems to be a major thing I disagree with the authors of the article on.)
Saying thepaper is poorly argued is not particularly helpful or convincing. Could you highlight where and why Rubi? Breadth does not de-facto mean poorly argued. If that was the case then most of the key texts in x-risk would all be poorly argued.
Importantly, breadth was necessary to make a critique. There are simply many interrelated matters that are worth critical analysis.
Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus.
As David highlights in his response: we do not argue against the TUA, but point out the unanswered questions we observed. We do not argue against the TUA , but highlight assumptions that may be incorrect or smuggle in values. Interestingly, it’s hard to find how you believe the piece is both polemic but also not directly critiquing the TUA sufficiently. Those two criticisms are in tension.
If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research).
You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology?
Of course there are arguments for it, some of which are discussed on the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.
I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking.
Then it wouldn’t be a critique of the TUA. It would be a piece on differential tech development or hazard-centrism.
This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh. He both said that we should zoom in and focus on one section, and said that we should zoom out and compare the TUA against all (?) potential alternatives. The recommendations are in tension and only share the commonality of ensuring we write a paper that isn’t a critique.
We see many remaining problems in x-risk. This paper is an attempt to list those issues, point out their weaknesses, and identify areas for improvement. It should be read much like a research agenda.
The abstract and conclusion clearly spell out the areas where we take a clear position, such as the need for diversity in the field, taking lessons from complex risk assessments in other areas, and democratising policy recommendations. We do not articulate a position on degrowth, differential tech development, etc. We highlight that the existing evidence base and arguments for them are weak.
We do not position ourselves in many cases, because we believe they require further detailed work and deliberation. In that sense I agree with you that we’re covering too much, but only if the goal were to present clear positions on all these points. Since this was not the goal, I think it’s fine to list many remaining questions and point out that they are indeed still open questions that require answers. If you have a strong opinion on any of the questions we mention, then go ahead, write a paper that argues for one side, publish it, and let’s get on with the science.
Seán also called the paper a polemic several times (by definition: a strong written or verbal attack; hostile and critical). This is not necessarily an insult (Orwell’s Animal Farm is considered a polemic against totalitarianism), but I’m guessing it was not meant that way.
We are somewhat disappointed that one of the most upvoted responses on the forum to our piece is so vague and unhelpful. We would expect a community that has such high epistemic standards to reward comments that articulate clear, specific criticisms grounded in evidence and capable of being acted on.
Finally, the ‘speaking abstractly’ point about funding. It is hard not to see this as an insinuation that we have consistently produced such poor scholarship that it would justify withdrawn funding. Again, this does not signal anything positive about the epistemics, or just the sheer civility, of the community.
Hi Carla,
Thanks for taking the time to engage with my reply. I’d like to engage with a few of the points you made.
First of all, my point prefaced with ‘speaking abstractly’ was genuinely that. I thought your paper was poorly argued, but certainly within acceptable limits, such that it should not result in withdrawn funding. On a sufficient timeframe, everybody will put out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low-quality work is guaranteed some share of scarce funding merely out of fear that withdrawing such funding would be seen as censorship. It’s a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I’m sorry you saw my abstraction as a personal attack.
You wrote, “we do not argue against the TUA, but point out the unanswered questions we observed ... but highlight assumptions that may be incorrect or smuggle in values”. Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemical is that you do not sufficiently check whether the questions really are unanswered, or whether the assumptions really are incorrect. There is no tension between calling your paper polemical and saying you do not sufficiently critique the TUA. A more thorough critique that took counterarguments seriously and tried to address them would not be a polemic, as it would more clearly be driven by truth-seeking than by hostility.
I was not “asking that we [you] articulate and address every hypothetical counterargument”; I was asking that you address any, especially the most obvious ones. Don’t just state that “it is unclear why” they are believed in order to skip over a counterargument.
I am disappointed that you used my original post to further attack the epistemics of this community, and doubly so for claiming it failed to articulate clear, specific criticisms. The post was clear that the main failing I saw in your paper was a lack of engagement with counterarguments, specifically the case for technological differentiation and the case for avoiding the disenfranchisement of future generations through a limited democracy. I do not believe that my criticism of the paper jumping around too much rather than engaging deeply on fewer issues was ambiguous either. Ignoring these clear, specific criticisms to use the post as evidence of poor epistemics in the EA community makes me think you may be interpreting any disagreement as evidence for your point.
Personally I read this as a straightforward accusation of dishonesty—something I would expect moderators to object to if the comment was critical (rather than supportive) of EA orthodoxy.
I also found it suspicious that Rubi felt the need to comment using an anonymous throwaway account despite speaking in favor of established power structures.
To clarify, that’s not to say Rubi is necessarily Seán Ó hÉigeartaigh. I have no idea and I don’t know Seán.
However, this situation is very strange. Almost everyone on the EA Forum uses their real name or a very thin pseudonym.
I am anonymous because vocally disagreeing with the status quo would probably destroy any prospects of getting hired or funded by EA orgs (see my heavily downvoted comment about my experiences somewhere at the bottom of this thread).
This clearly doesn’t apply to Rubi, so what’s up?
Seán Ó hÉigeartaigh here. Since I have been named specifically, I would like to make it clear that when I write here, I do so under Sean_o_h, and have only ever done so. I am not Rubi, and I don’t know who Rubi is. I ask that the moderators check IP addresses, and reach out to me for any information that can help confirm this.
I am on leave and have not read the rest of this discussion, or the current paper (which I imagine is greatly improved from the draft I saw), so I will not participate further in this discussion at this time.
To clear up my identity, I am not Seán and do not know him. I go by Rubi in real life, although it is a nickname rather than my given name. I did not mean for my account to be an anonymous throwaway, and I intend to keep on using this account on the EA Forum. I can understand how that would not be obvious as this was my first post, but that is coincidental. The original post generated a lot of controversy, which is why I saw it and decided to comment.
I would have genuinely liked an answer to this. If none of the reviewers made the case, that is useful information about the selection of the reviewers. If some reviewers did, but were ignored by the authors, then it reflects negatively on the authors not to address this and say that the case for differential technology is unclear.
There are many reasons for people to use pseudonyms on the Forum, and we allow it with few restrictions. It’s also fine to have multiple accounts.
What exactly is “suspicious” or “strange” here? What is the thing you suspect, and is that thing against the Forum’s rules? If not, do you think it should be?
Using vague insinuations instead of straightforwardly accusing someone doesn’t change the result — which is that Seán understandably feels like he’s been called out and needs to deny the “non-accusation”. What were you trying to accomplish by talking about Seán here?
*****
You’ve now made several comments in this thread that were rude or insulting towards other users. That’s not okay, whether or not your position happens to align with any “status quo”. (See these examples of comments being moderated for exactly this reason despite their position on the “popular” side of whatever thread they were a part of.)
If you want to object to someone’s argument, state your objection. Explain why they’re wrong, or what they’ve missed. This is almost always better than “I find this user suspicious” or “this user is acting in bad faith”.
Several of your comments on this thread were good. I appreciated the links here and some of the questions here. But if you continue posting rude or insulting comments, the moderation team may take action.
As a moderator, I wouldn’t object to this comment no matter who made it. I see it as a criticism of someone’s work, not an accusation that the person was dishonest.
If someone wrote a paper critiquing the differential technology paradigm and spoke to lots of reviewers about it — including many who were known to be pro-DT — but didn’t cite any pro-DT arguments, it would be fine for someone to ask: “Did you really not hear any cases for the DT paradigm?”
The question doesn’t have to mean “you deliberately acted like there were no good pro-DT arguments and hoped we would believe you”. That would frankly be a silly thing to say, since Carla and Luke are obviously familiar with these arguments and know that many of their readers would also be familiar with these arguments.
It could also imply:
1. “You didn’t ask the kinds of questions of reviewers that would lead them to spell out their cases for DT”
2. “You didn’t make room in your paper to discuss the pro-DT arguments you heard, and I think you should have”
Or, more straightforwardly, you could avoid assuming any particular implication and just read the question as a question: “Why were there no pro-DT arguments in your piece?”
I personally read implication (1), because of the statement “...made it seem like you did not do your research”.
Carla’s response read to me as a response to implication (2): “We chose not to discuss pro-DT arguments, because trying to give that kind of space to counterarguments for all of our points would be beyond the scope of our paper.” Which is a fine, reasonable response.
I think Rubi’s comment should have been more clear; it’s more important for questioners to ask good questions than for respondents to correctly guess at what the questioner meant.
Overall, as a moderator, my response to this part of Rubi’s comment is “this is unclear and could mean many things — perhaps one of these things is uncivil, but Carla answered a civil version of the question, and I’m not going to deliberately choose to interpret the question as the most uncivil version of itself.”
*****
On the level of meta-moderation, these are the things I personally look for*, in rough priority order (other mods may differ):
1. Comments that clearly insult another user
2. Comments that include an information hazard or advocate for seriously harmful action
3. Comments that interfere with good discourse in other ways
If you say “Rubi’s comment is unclear, which means it’s in category (3)” — you’d be right, but there are a lot of comments that are unclear, and it isn’t realistic for moderators to respond to more than a tiny fraction of them, which means I focus on comments in the first two categories.
If you say “Rubi’s comment could be taken to imply an insult, which means it’s in category (1)” — I disagree, because I don’t see any insulting read as “clear”, and there are plenty of other ways to interpret the comment.
And of course, the specific position someone takes in a debate has no bearing on how we moderate, unless a particular position is in category (2) (“we should release a plague to kill everyone”).
*I should also mention that I’m a human with limited human attention. So I’m not going to see every comment on every post. That’s why every post comes with a “report” option, which people should really use if they think a comment should be moderated:
If you report a post or comment, one or more mods will definitely look at it and at least consider your argument for why it was reportable.
Something not being moderated doesn’t imply that it’s definitely fine — it could also mean the mods haven’t read it, or that the mods didn’t read it with “moderator vision” on. There have been times I read a comment in my off time, then saw the same comment reported later and said “oh, huh, this probably should be moderated”.
Honestly, fair enough.