Moderation update: We're issuing dstudiocode a one-month ban for breaking Forum norms in their recent post and subsequent behavior. Specifically:
Posting content that could be interpreted as promoting violence or illegal activities.
The post in question, which asked whether murdering meat-eaters could be considered "ethical," crosses a line in terms of promoting potential violence.
As a reminder, the ban affects the user, not just the account. During their ban period, the user will not be permitted to rejoin the Forum under another account name. If they return to the Forum after the ban period, we'll expect a higher standard of norm-following and compliance with moderator instructions.
You can reach out to forum-moderation@effectivealtruism.org with any questions. You can appeal this decision here.
I suggest editing this comment to note the partial reversal on appeal and/or retracting the comment, to avoid the risk of people seeing only it and reading it as vaguely precedential.
Strong +1 to Richard, this seems a clearly incorrect moderation call and I encourage you to reverse it.
I'm personally very strongly opposed to killing people because they eat meat, and the general ethos behind that. I don't feel in the slightest offended or bothered by that post; it's just one in a string of hypothetical questions, and it clearly is not intended as a call to action or to encourage action.
If the EA Forum isn't somewhere where you can ask a perfectly legitimate hypothetical question like that, what are we even doing here?
The moderators have reviewed the decision to ban @dstudiocode after users appealed the decision. Tl;dr: We are revoking the ban, and are instead rate-limiting dstudiocode and warning them to avoid posting content that could be perceived as advocating for major harm or illegal activities. The rate limit is due to dstudiocode's pattern of engagement on the Forum, not simply because of their most recent post; for more on this, see the "third consideration" listed below.
More details:
Three moderators,[1] none of whom was involved in the original decision to ban dstudiocode, discussed this case.
The first consideration was "Does the cited norm make sense?" For reference, the norm cited in the original ban decision was "Materials advocating major harm or illegal activities, or materials that may be easily perceived as such" (under "What we discourage (and may delete or edit out)" in our "Guide to norms on the Forum"). The panel of three unanimously agreed that having some kind of Forum norm in this vein makes sense.
The second consideration was "Does the post that triggered the ban actually break the cited norm?" For reference, the post ended with the question "should murdering a meat eater be considered 'ethical'?" (Since the post was rejected by moderators, users cannot see it.[2] We regret the confusion caused by us not making this point clearer in the original ban message.)
There was disagreement amongst the moderators involved in the appeal process about whether or not the given post breaks the norm cited above. I personally think that the post is acceptable since it does not constitute a call to action. The other two moderators see the post as breaking the norm; they see the fact that it is "just" a philosophical question as not changing the assessment.[3] (Note: The "meat-eater problem" has been discussed elsewhere on the Forum. Unlike the post in question, in the eyes of the given two moderators, these posts did not break the "advocating for major harm or illegal activities" norm because they framed the question as about whether to donate to save the life of a meat-eating person, rather than as about actively murdering people.)
Amongst the two appeals-panelist moderators who see the post as norm-breaking, there was disagreement about whether the correct response would be a temporary ban or just a warning.
The third consideration was around dstudiocode's other actions and general standing on the Forum. dstudiocode currently sits at -38 karma following 8 posts and 30 comments. This indicates that their contributions to the discourse have generally not been helpful.[4] Accordingly, all three moderators agreed that we should be more willing to (temporarily) ban dstudiocode for a potential norm violation.
dstudiocode has also tried posting very similar, low-quality (by our lights) content multiple times. The post that triggered the ban was similar to, though more "intense" than, this other post of theirs from five months ago. Additionally, they tried posting similar content through an alt account just before their ban. When a Forum team member asked them about their alt, they appeared to lie.[5] All three moderators agreed that this repeated posting of very similar, low-quality content warrants at least a rate limit (i.e., a cap on how much the user in question can post or comment).[6] (For context, eight months ago, dstudiocode published five posts in an eight-day span, all of which were low quality, in our view. We would like to avoid a repeat of that situation: a rate limit or a ban are the tools we could employ to this end.) Lying about their alt also makes us worried that the user is trying to skirt the rules.
Overall, the appeals panel is revoking dstudiocode's ban, and is replacing the ban with a warning (instructing them to avoid posting content that could be perceived as advocating for major harm or illegal activities) and a rate limit. dstudiocode will be limited to at most one comment every three days and one post per week for the next three weeks (i.e., until the date when their original ban would have ended). Moderators will be keeping an eye on their posting, and will remove their posting rights entirely if they continue to publish content that we consider sufficiently low quality or norm-bending.
We would like to thank @richard_ngo and @Neel Nanda for appealing the original decision, as well as @Jason and @dirk for contributing to the discussion. We apologize that the original ban notice was rushed, and failed to lay out all the factors that went into the decision.[7] (Reasoning along the lines of the "third consideration" given above went into the original decision, but we failed to communicate that.)
If anyone has questions or concerns about how we have handled the appeals process, feel free to comment below or reach out.
Technically, two moderators and one moderation advisor. (I write "three moderators" in the main text because that makes referring to them, as I do throughout the text, less cumbersome.)
The three of us discussed whether or not to quote the full version of the post that triggered the ban in this moderator comment, to allow users to see exactly what is being ruled on. By split decision (with me as the dissenting minority), we have decided not to do so: in general, we will probably avoid republishing content that is objectionable enough to get taken down in the first place.
I'm not certain, but my guess is that the disagreement here is related to the high vs. low decoupling spectrum (where high decouplers, like myself, are fine with entertaining philosophical questions like these, whereas low decouplers tend to see such questions as crossing a line).
We don't see karma as a perfect measure of a user's value by any means, but we do consider a user's total karma being negative to be a strong signal that something is awry.
Looking through dstudiocode's post and comment history, I do think that they are trying to engage in good faith (as opposed to being a troll, say). However, the EA Forum exists for a particular purpose, and has particular standards in place to serve that purpose, and this means that the Forum is not necessarily a good place for everyone who is trying to contribute. (For what it's worth, I feel a missing mood in writing this.)
In response to our request that they stop publishing similar content from multiple accounts, they said: "Posted from multiple accounts? I feel it is possible that the same post may have been created because maybe the topic is popular?" However, we are >99% confident, based on our usual checks for multiple account use, that the other account that tried to publish this similar content is an alt controlled by them. (They did subsequently stop trying to publish from other accounts.)
We do not have an official policy on rate limits, at present, although we have used rate limits on occasion. We aim to improve our process here. In short, rate limits may be a more appropriate intervention than bans are for users who aren't clearly breaking norms, but who are nonetheless posting low-quality content or repeatedly testing the edges of the norms.
Notwithstanding the notice we published, which was a mistake, I am not sure if the ban decision itself was a mistake. It turns out that different moderators have different views on the post in question, and I think the difference between the original decision to ban and the present decision to instead warn and rate limit can mostly be chalked up to reasonable disagreement between different moderators. (We are choosing to override the original decision since we spent significantly longer on the review, and we therefore have more confidence in the review decision being "correct". We put substantial effort into the review because established users, in their appeal, made some points that we felt deserved to be taken seriously. However, this level of effort would not be tenable for most "regular" moderation calls, i.e., those involving unestablished or not-in-great-standing users, like dstudiocode, given the tradeoffs we face.)
Seems reasonable (tbh with that context I'm somewhat OK with the original ban), thanks for clarifying!
I appreciate the thought that went into this. I also think that using rate-limits as a tool, instead of bans, is in general a good idea. I continue to strongly disagree with the decisions on a few points:
I still think including the "materials that may be easily perceived as such" clause has a chilling effect.
I also remember someone's comment that the things you're calling "norms" are actually rules, and it's a little disingenuous to not call them that; I continue to agree with this.
The fact that you're not even willing to quote the parts of the post that were objectionable feels like an indication of a mindset that I really disagree with. It's like... treating words as inherently dangerous? Not thinking at all about the use-mention distinction? I mean, here's a quote from the Hamas charter: "There is no solution for the Palestinian question except through Jihad." Clearly this is way way more of an incitement to violence than any quote of dstudiocode's, which you're apparently not willing to quote. (I am deliberately not expressing any opinion about whether the Hamas quote is correct; I'm just quoting them.) What's the difference?
"They see the fact that it is 'just' a philosophical question as not changing the assessment." Okay, let me now quote Singer. "Human babies are not born self-aware, or capable of grasping that they exist over time. They are not persons... the life of a newborn is of less value than the life of a pig, a dog, or a chimpanzee." Will you warn/ban me from the EA forum for quoting Singer, without endorsing that statement? What if I asked, philosophically, "If Singer were right, would it be morally acceptable to kill a baby to save a dog's life?" I mean, there are whole subfields of ethics based on asking about who you would kill in order to save whom (which is why I'm pushing on this so strongly: the thing you are banning from the forum is one of the key ways people have had philosophical debates over foundational EA ideas). What if I defended Singer's argument in a post of my own?
As I say this, I feel some kind of twinge of concern that people will find this and use it to attack me, or that crazy people will act badly inspired by my questions. I hypothesize that the moderators are feeling this kind of twinge more generally. I think this is the sort of twinge that should and must be overridden, because listening to it means that your discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest. You can't figure out true things in that situation.
(On a personal level, I apologize to the moderators for putting them in difficult situations by saying things that are deliberately in the grey areas of their moderation policy. Nevertheless I think it's important enough that I will continue doing this. EA is not just a group of nerds on the internet any more, it's a force that shapes the world in a bunch of ways, and so it is crucial that we don't echo-chamber ourselves into doing crazy stuff (including, or especially, when the crazy stuff matches mainstream consensus). If you would like to warn/ban me, then I would harbor no personal ill-will about it, though of course I will consider that evidence that I and others should be much more wary about the quality of discourse on the forum.)
On point 4:
I'm pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There's no clear and unbiased way to decide which of those individuals and groups could be the target of "philosophical questions" about the desirability of murdering them and which could not. Unless we're going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. "Would it be ethical to get rid of this meddlesome priest?" should be suspendable or worse (except that the meddlesome priest in question has been dead for over eight hundred years).
And I think drawing the line at "we're not going to allow hypotheticals about murdering discernible people"[1] is better (and poses less risk of viewpoint suppression) than expecting the mods to somehow devise a rule for when that content will be allowed and consistently apply it. I think the effect of a bright-line no-murder-talk rule on expression of ideas is modest because (1) posters can get much of the same result by posing non-violent scenarios (e.g., leaving someone to drown in a pond is neither an act of violence nor generally illegal in the United States) and (2) there are other places to have discussions if the murder content is actually important to the philosophical point.[2]
By "discernible people," I mean those with some sort of salient real-world characteristic as opposed to being 99-100% generic abstractions (especially if in a clearly unrealistic scenario, like the people in the trolley problem).
I am not expressing an opinion about whether there are philosophical points for which murder content actually is important.
Do you think it is acceptable to discuss the death penalty on the forum? Intuitively this seems within scope: historically we have discussed criminal justice reform on the forum, and capital punishment is definitely part of that.
If so, is the distinction state violence vs individual violence? This seems not totally implausible to me, though it does suggest that the offending poster could simply re-word their post to be about state-sanctioned executions and leave the rest of the content untouched.
I've weak karma downvoted and disagreed with this, then hit the "insightful" button. Definitely made me think and learn.
I agree that this is a really tricky question, and some of those philosophical conversations (including this one) are important and should happen, but I don't think this particular EA forum is the best place for them, for a few reasons.
1) I think there are better places to have these often awkward, fraught conversations. I think they are often better had in-person where you can connect, preface, soften and easily retract. I recently got into a mini online-tiff, when a wise onlooker noted...
"Online discussions can turn that way with a few misinterpretations creating a doom loop that wouldn't happen with a handshake and a drink"
Or alternatively perhaps in a more academic/narrow forum where people have similar discussion norms and understandings. This forum has a particularly wide range of users, from nerds to philosophers to practitioners to managers to donors, so there's a very wide range of norms and understandings.
2) There's potential reputational damage for all the people doing great EA work across the spectrum here. These kinds of discussions could lead to more hit-pieces and reduced funding. It would be a pity if the AI apocalypse hit us because of funding cuts due to these discussions. (OK now I'm strawmanning a bit :D)
3) The forum might be an entry-point for some people into EA things. I don't think it's a good idea for these discussions to be the first thing someone looking into EA sees on the internet.
4) It might be a bit of a strawman to say our "discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest." I think people hostile to EA don't like many things said here on the forum, but we aren't forever at the mercy of them and we keep talking. I think there are just a few particular topics which give people more ammunition for public take-downs, and there is wisdom in sometimes avoiding loading balls into your opponents' cannons.
5) I think if you (like Singer) write your own opinion in your own book, it's a different situation: you are the one writing and take full responsibility for your work. On a public forum, it at least feels like there is a smidgeon of shared accountability for what is said. Forms of this debate have been going on for some time about what is posted on Twitter/Facebook etc.
6) I agree with you that the quote from the Hamas charter is more dangerous, and think we shouldn't be publishing or discussing that on the forum either.
I have great respect for these free speech arguments, and think this is a super hard question where the "best" thing to do might well change a lot over time, but right now I don't think allowing these discussions and arguments on this particular EA forum will lead to more good in the long run.
Ty for the reply; a jumble of responses below.
"I think there are better places to have these often awkward, fraught conversations."
You are literally talking about the sort of conversations that created EA. If people don't have these conversations on the forum (the single best way to create common knowledge in the EA community), then it will be much harder to course-correct places where fundamental ideas are mistaken. I think your comment proceeds from the implicit assumption that we're broadly right about stuff, and mostly just need to keep our heads down and do the work. I personally think that a version of EA that doesn't have the ability to course-correct in big ways would be net negative for the world. In general it is not possible to e.g. identify ongoing moral catastrophes when you're optimizing your main venue of conversations for avoiding seeming weird.
"I agree with you that the quote from the Hamas charter is more dangerous, and think we shouldn't be publishing or discussing that on the forum either."
If you're not able to talk about evil people and their ideologies, then you will not be able to account for them in reasoning about how to steer the world. I think EA is already far too naive about how power dynamics work at large scales, given how much influence we're wielding; this makes it worse.
"There's potential reputational damage for all the people doing great EA work across the spectrum here."
"I think there are just a few particular topics which give people more ammunition for public take-downs, and there is wisdom in sometimes avoiding loading balls into your opponents' cannons."
Insofar as you're thinking about this as a question of coalitional politics, I can phrase it in those terms too: the more censorious EA becomes, the more truth-seeking people will disaffiliate from it. Habryka, who was one of the most truth-seeking people involved in EA, has already done so; I wouldn't say it was directly because of EA not being truth-seeking enough, but I think that was one big issue for him amongst a cluster of related issues. I don't currently plan to, but I've considered the possibility, and the quality of EA's epistemic norms is one of my major considerations (of course, the forum's norms are only a small part of that).
However, having said this, I don't think you should support more open forum norms mostly as a concession to people like me, but rather in order to pursue your own goals more effectively. Movements that aren't able to challenge foundational assumptions end up like environmentalists: actively harming the causes they're trying to support.
Just to narrow in on a single point: I have found the "EA fundamentally depends on uncomfortable conversations" point to be a bit unnuanced in the past. It seems like we could be more productive by delineating which kinds of discomfort we want to defend. For example, most people here don't want to have uncomfortable conversations about age of consent laws (thankfully), but do want to have them about factory farming.
When I think about the founding myths of EA, most of them seem to revolve around the discomfort of applying utilitarianism in practice, or on how far we should expand our moral circles. I think EA would've broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded).
I'm not keen to take a stance on whether this post should or shouldn't be allowed on the forum, but I am curious to hear if and where you would draw this line :)
Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it's cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal).
Picturing some responses you might give to this:
That's not the sort of uncomfortable claim you're worried about
But many possible continuations of this conversation would in fact have gotten into more controversial territory. E.g. maybe a cultural relativist would defend those other countries having lower age of consent laws. I find cultural relativism kinda crazy (for this and related reasons) but it's a pretty mainstream position.
I could have made the point in more sensitive ways
Maybe? But the whole point of the conversation was about ways in which some cultures are better than others. This is inherently going to be a sensitive claim, and it's hard to think of examples that are compelling without being controversial.
This is not the sort of thing people should be discussing on the forum
But EA as a movement is interested in things like:
Criminal justice reform (which OpenPhil has spent many tens of millions of dollars on)
Promoting women's rights (especially in the context of global health and extreme poverty reduction)
What factors make what types of foreign aid more or less effective
More generally, the relationship between the developed and the developing world
So this sort of debate does seem pretty relevant.
The important point is that we didn't know in advance which kinds of discomfort were of crucial importance. The relevant baseline here is not early EAs moderating ourselves, it's something like "the rest of academic philosophy/society at large moderating EA", which seems much more likely to have stifled early EA's ability to identify important issues and interventions.
(I also think we've ended up at some of the wrong points on some of these issues, but that's a longer debate.)
Do you have an example of the kind of early EA conversation that you think was really important, and that helped arrive at core EA tenets, which might be frowned upon or censored on the forum now? I'm still super dubious about whether leaving out a small number of specific topics really leaves much value on the table.
And I really think conversations can be had in more sensitive ways. In the case of the original banned post, just as good a philosophical conversation could be had without explicitly talking about killing people. The conversation was already being had on another thread about "the meat-eater problem".
And as a sidebar, yeah, I wouldn't have any issue with that above conversation myself, because we just have to practically discuss that with donors and internally when providing health care and getting confronted with tricky situations. Also (again sidebar) it's interesting that age of marriage/consent conversations can be where classic left-wing cultural relativism and gender safeguarding collide and people don't know which way to swing. We've had to ask that question practically in our health centers, to decide who to give family planning to and when to think of referring to police etc. Super tricky.
My point is not that the current EA forum would censor topics that were actually important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you're saying human lives don't matter very much!), the ineffectiveness of development aid (controversial: you're attacking powerful organizations!), transhumanism (controversial, according to the people who say it's basically eugenics), etc.
Re "conversations can be had in more sensitive ways", I mostly disagree, because of the considerations laid out here: the people who are good at discussing topics sensitively are mostly not the ones who are good at coming up with important novel ideas.
For example, it seems plausible that genetic engineering for human intelligence enhancement is an important and highly neglected intervention. But you had to be pretty disagreeable to bring it into the public conversation a few years ago (I think it's now a bit more mainstream).
Assuming we're only talking about the post Richard linked (and the user's one recent comment, which is similar), I agree with this.
This moderation policy seems absurd. The post in question was clearly asking purely hypothetical questions, and wasn't even advocating for any particular answer to the question. May as well ban users for asking whether it's moral to push a man off a bridge to stop a trolley, or ban Peter Singer for his thought experiments about infanticide.
Perhaps dstudiocode has misbehaved in other ways, but this announcement focuses on something that should be clearly within the bounds of acceptable discourse. (In particular, the standard of "content that could be interpreted as X" is a very censorious one, since you now need to cater to a wide range of possible interpretations.)
That is not the post in question. We removed the post that prompted the ban.
Ah, thanks, that's important context. I semi-retract my strongly worded comment above, depending on exactly how bad the removed post was, but can imagine posts in this genre that I think are genuinely bad.
Another comment from me:
I don't like my mod message, and I apologize for it. I was rushed and used some templated language that I knew damn well at the time that I wasn't excited about putting my name behind. I nevertheless did and bear the responsibility.
That's all from me for now. The mods who weren't involved in the original decision will come in and reconsider the ban, pursuant to the appeal.
In the post that prompted the ban, they asked whether murdering meat-eaters could be considered ethical. I don't want to comment on whether this would be an appropriate topic for a late-night philosophy club conversation; it is not an appropriate topic for the EA Forum.
I think speculating about what exactly constitutes the most good is perfectly on-topic. While "murdering meat-eaters" is perhaps an overly direct phrasing (and of course under most ethical frameworks murder raises additional issues as compared to mere inaction or deprioritization), the question of whether the negative utility produced by one marginal person's worth of factory farming outweighs the positive utility that person experiences (colloquially referred to as the meat-eater problem) is one that has been discussed here a number of times, and that I feel is quite relevant to the question of which interventions should be prioritized.
I'd separate out the removal and the suspension, and dissent only as to the latter.
I get why the mods would feel the need to give a wide berth to anything that some person could somehow "interpret[] as promoting violence or illegal activities." Making a rule against brief hypothetical mentions of the possible ethics of murder is defensible, especially in light of certain practical realities.
However, I can't agree with taking punitive action against a user where the case that they violated the norm is this tenuous and there is a lack of fair prior notice of the mods' interpretation. For that kind of action, I think the minimum standard would be either clear notice or content that a reasonable person would recognize could reasonably be interpreted as promoting violence. In other words, was the poster negligent in failing to recognize that violence promotion was a reasonable interpretation?
I don't think the violence-promoting interpretation is a reasonable one here, and it sounds like several other users agree, which I take as evidence of non-negligence.