Rate limiting on the EA Forum is too strict. Given that people karma downvote because of disagreement rather than because of quality or civility — or judge quality and civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing opinions on certain topics that are unpopular or controversial (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities).
This is a message I saw recently:
You aren’t just rate limited for 24 hours once you fall below the recent karma threshold (which can be triggered by one comment that is unpopular with a handful of people); you’re rate limited for as many days as it takes you to gain 25 net karma on new comments — which might take a while, since you can only leave one comment per day and people might keep downvoting your unpopular comment. (Unless you delete it — which I think I’ve seen happen, but I won’t do myself, because I’d rather be rate limited than self-censor.)
The rate limiting system is a brilliant idea for new users or users who have fewer than 50 total karma — the ones who have little plant icons next to their names. It’s an elegant, automatic way to stop spam, trolling, and other abuses. But my forum account is 2.5 years old and I have over 1,000 karma. I have 24 posts published over 2 years, all with positive karma. My average karma per post/comment is +2.3 (not counting the default karma that all posts/comments start with; this is just counting karma from people’s votes).
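To make the mechanism concrete, here is a minimal sketch of how a karma-based rate limit along these lines could work. The 20-item window and the 50-karma new-user cutoff come from the descriptions in this thread; the names, the threshold value, and the exact trigger condition are illustrative assumptions, not the Forum’s actual code.

```python
# Minimal sketch of a karma-based rate limit, assuming it works roughly as
# described in this thread: look at a user's 20 most recent comments/posts,
# and if the net karma from other people's votes falls below a threshold,
# restrict the user to one comment per day until the recent total recovers.
# Threshold values and names are illustrative assumptions, not the real code.

from dataclasses import dataclass

RECENT_WINDOW = 20           # number of recent comments/posts considered
RECENT_KARMA_THRESHOLD = -5  # assumed trigger value; the real one may differ
NEW_USER_TOTAL_KARMA = 50    # below this, stricter limits apply regardless

@dataclass
class Submission:
    karma_from_votes: int    # net karma excluding the automatic self-vote

def is_rate_limited(total_karma: int, recent: list[Submission]) -> bool:
    """Return True if the user should be limited to one comment per day."""
    window = recent[:RECENT_WINDOW]
    recent_karma = sum(s.karma_from_votes for s in window)
    if total_karma < NEW_USER_TOTAL_KARMA:
        # New or low-karma users: limit on any net-negative recent reception.
        return recent_karma < 0
    # Established users: limit only when recent karma drops below the threshold.
    # Because the window is fixed at 20 items and a limited user can add at
    # most one comment per day, recovering can take many days, as described.
    return recent_karma < RECENT_KARMA_THRESHOLD
```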
Examples of comments of mine that have been downvoted to net −1 karma or lower include a methodological critique of a survey, a critique that was later accepted as correct and that led to the research report of an EA-adjacent organization being revised. In another case, a comment was downvoted to negative karma when it was simply an attempt to correct the misuse of a technical term in machine learning — a correction anyone can confirm is right with a few quick Google searches. People are absolutely not just downvoting comments that are poor quality or rude by any reasonable standard. They are downvoting things they disagree with or dislike for some other reason. (There are many other examples like the ones I just gave, including everything from directly answering a question to clarifying a point of disagreement to expressing a fairly anodyne and mainstream opinion that at least some prominent experts in the relevant field agree with.) Given this, karma downvoting as an automatic moderation tool with thresholds this sensitive just discourages disagreement.
One of the most important cognitive biases to look out for in a context like EA is group polarization: the tendency of individuals’ views to become more extreme once they join a group, even if each individual held less extreme views before joining (i.e., they aren’t necessarily being converted by a few zealots who already had extreme views). One way to mitigate group polarization is to have a high tolerance for internal disagreement and debate. I think the EA Forum does have that tolerance for certain topics, and within certain windows of accepted opinion for most topics, but for other topics the window is quite narrow compared to, say, the general population or expert opinion.
For example, according to one survey, 76% of AI experts believe it’s unlikely or very unlikely that LLMs will scale to AGI, yet the opinion of EA Forum users seems to be the opposite. Not everyone on the EA Forum seems to consider the majority expert opinion worth taking seriously. To me, that looks like group polarization in action. It’s one thing to disagree with expert opinion with some degree of uncertainty and epistemic humility; it’s another thing to see expert opinion as beneath serious discussion.
I don’t know what specific tweaks to the rate limiting system would be best. Maybe just turn it off altogether for users with over 500 karma (and rely on reporting posts/comments and moderator intervention to handle real problems), or, as Jason suggested here, have the karma threshold trigger manual review by a moderator rather than automatic rate limiting. Jason also made some other interesting suggestions for tweaks in that comment and noted, correctly:
Strong downvoting by a committed group is the most obvious way to manipulate the system into silencing those with whom you disagree.
This actually works. I am reluctant to criticize the ideas of, or express disagreement with, certain organizations/books because of rate limiting, and rate limiting is the #1 thing that makes me feel like giving up on trying to engage in intellectual debate and discussion and just quitting the EA Forum.
I may be slow to reply to any comments on this quick take due to the forum’s rate limiting.
I think this highlights why some necessary design features of the karma system don’t translate well to a system that imposes soft suspensions on users. (To be clear, I find a one-comment-per-day limit based on the past 20 comments/posts to cross the line into soft suspension territory; I do not suggest that rate limits are inherently soft suspensions.)
I wrote a few days ago about why karma votes need to be anonymous and shouldn’t (at least generally) require the voter to explain their reasoning; the votes suggested general agreement on those points. But a soft suspension of an established user is a different animal, and requires greater safeguards to protect both the user and the openness of the Forum to alternative views.
I should emphasize that I don’t know who cast the downvotes that led to Yarrow’s soft suspension (which were on this post about MIRI), or why they cast their votes. I also don’t follow MIRI’s work carefully enough to have a clear opinion on the merits of any individual vote by the lights of the ordinary purposes of karma. So I do not intend to imply dodgy conduct by anyone. But: “Justice must not only be done, but must also be seen to be done.” People who are considering stating unpopular opinions shouldn’t have to trust voters to the extent they currently must in order to avoid being soft suspended.
Neutrality: Because the votes were anonymous, it is possible that people who were involved in the dispute were casting votes that had the effect of soft-suspending Yarrow.
Accountability: No one has to accept responsibility and the potential for criticism for imposing a soft-suspension via karma downvotes. Not even in their own minds—since nominally all they did was downvote particular posts.
Representativeness: A relatively small number of users on a single thread—for whom there is no evidence of being representative of the Forum community as a whole—cast the votes in question. Their votes have decided for the rest of the community that we won’t be hearing much from Yarrow (on any topic) for a while.[1]
Reasoning transparency: Stating (or at least documenting) one’s reasoning serves as a check on decisions made on minimal or iffy reasoning getting through. [Moreover, even if voters had been doing so silently, they were unlikely to be reasoning about a vote to soft suspend Yarrow, which is what their votes were whether they realized it or not.]
There are good reasons to find that the virtues of accountability, representativeness, and reasoning transparency are outweighed by other considerations when it comes to karma generally. (As for neutrality, I think we have to accept that technical and practical limitations exist.) But their absence when deciding to soft suspend someone creates too high a risk of error for the affected user, too high a risk of suppressing viewpoints that are unpopular with elements of the Forum userbase, and too great a chilling effect on users’ willingness to state certain viewpoints. I continue to believe that, for more established users, karma count should only trigger a moderator review to assess whether a soft suspension is warranted.
Although the mods aren’t necessarily representative in the abstract, they are more likely to not have particular views on a given issue than the group of people who actively participate on a given thread (and especially those who read the heavily downvoted comments on that thread). I also think the mods are likely to have a better understanding of their role as representatives of the community than individual voters do, which mitigates this concern.
I’ve really appreciated comments and reflections from @Yarrow Bouchard 🔸 and I think in his case at least this does feel a bit unfair. It’s good to encourage new people on the forum, unless they are posting particularly egregious things, which I don’t think he has been.
She, but thank you!
Assorted thoughts
Rate limits should not apply to comments on your own quick takes
Rate limits could maybe not count negative karma below −10 or so; it seems much better to rate limit someone only when they have multiple downvoted comments
2.4:1 is not a very high karma:submission ratio. I have 10:1 even if you exclude the April Fools’ Day posts (though that could be because I have more popular opinions), which means I could double my comment rate, get −1 karma on the extras, and still be at 3.5
If I were Yarrow I would contextualize more or use more friendly phrasing or something, and also not be bothered too much by single downvotes
From scanning the linked comments I think that downvoters often think the comment in question has bad reasoning and detracts from effective discussion, not just that they disagree
Deliberately not opining on the echo chamber question
Can you explain what you mean by “contextualizing more”? (What a curiously recursive question...)
You definitely have more popular opinions (among the EA Forum audience), and also you seem to court controversy less, i.e. a lot of your posts are about topics that aren’t controversial on the EA Forum. For example, if you were to make a pseudonymous account and write posts/comments arguing that near-term AGI is highly unlikely, I think you would definitely get a much lower karma to submission ratio, even if you put just as much effort and care into them as the posts/comments you’ve written on the forum so far. Do you think it wouldn’t turn out that way?
I’ve been downvoted on things that are clearly correct, e.g. the standard definitions of terms in machine learning (which anyone can Google), or pointing out a methodological error that the Forecasting Research Institute later acknowledged and revised their research to fix. In other cases, the claims are controversial, but they are also claims where prominent AI experts like Andrej Karpathy, Yann LeCun, or Ilya Sutskever have said exactly the same thing as I said — and, indeed, in some cases I’m literally citing them — and it would be wild to think these sorts of claims are below the quality threshold for the EA Forum. I think that should make you question whether downvotes are a reliable guide to the quality of contributions.
One-off instances of one person downvoting don’t bother me that much — that literally doesn’t matter, as long as it really is one-off — what bothers me is the pattern. It isn’t just with my posts/comments, either, it’s across the board on the forum. I see it all the time with other contributors as well. I feel uneasy dragging those people into this discussion without their permission — it’s easier to talk about myself — but this is an overall pattern.
Whether reasoning is good or bad is itself bound to be contested when debating controversial topics about which there is a lot of disagreement. Downvoting what you judge to be bad reasoning will, statistically, amount to downvoting what you disagree with. Since downvotes discourage and, in some cases, disable (through the forum’s software) disagreement, you should ask: is that the desired outcome? Personally, I rarely (pretty much never) downvote based on what I perceive to be the reasoning quality, for exactly this reason.
When people on the EA Forum deeply engage with the substance of what I have to say, I’ve actually found a really high rate of them changing their minds (not necessarily from P to ¬P, but shifting along a spectrum and rethinking some details). It’s a very small sample size, only a few people, but something like three of the five people I’ve had a lengthy back-and-forth with over the last two months changed their minds in some significant way. (I’m not doing rigorous statistics here, just counting examples from memory.) And in two of the three cases, the other person’s tone started out highly confident, giving me the impression they initially thought there was basically no chance I had any good points that would convince them. That is the counterbalance to everything else, and it’s really encouraging!
I put in an effort to make my tone friendly and conciliatory, and I’m aware I probably come off as a bit testy some of the time, but I’m often responding to a much harsher delivery from the other person and underreacting in order to deescalate the tension. (For example, the person who got the ML definitions wrong started out by accusing me of “bad faith” based on their misunderstanding of the definitions. There were multiple rounds of me engaging with politeness and cordiality before I started getting a bit testy. That’s just one example, but there are others — it’s frequently a similar dynamic. Disagreeing with the majority opinion of the group is a thankless job because you have to be nicer to people than they are to you, and then that still isn’t good enough and people say you should be even nicer.)
I mean it in this sense: making people think you’re not part of the outgroup and don’t have objectionable beliefs related to the ones you actually hold, in whatever way is sensible and honest.
Maybe LW is better at using the disagreement button, as I find it’s pretty common for unpopular opinions to get lots of upvotes and disagree votes. One could use the API to see if the correlations are different there.
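For what it’s worth, here is a rough sketch of that check, assuming both sites expose their usual public GraphQL endpoints. The query shape and the field names (baseScore for karma, extendedScore for agreement votes) are assumptions that would need to be verified against the live schema before relying on the result.

```python
# Rough sketch of the suggested check: pull recent comments from each forum's
# GraphQL endpoint and compare how karma relates to agreement votes.
# The query shape and field names below (baseScore, extendedScore) are
# assumptions and should be verified against the live schema.

import requests

ENDPOINTS = {
    "EA Forum": "https://forum.effectivealtruism.org/graphql",
    "LessWrong": "https://www.lesswrong.com/graphql",
}

QUERY = """
{
  comments(input: {terms: {limit: 500}}) {
    results {
      baseScore
      extendedScore
    }
  }
}
"""

def karma_agreement_pairs(url: str) -> list[tuple[float, float]]:
    """Fetch recent comments and return (karma, agreement) pairs where available."""
    resp = requests.post(url, json={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    results = resp.json()["data"]["comments"]["results"]
    pairs = []
    for comment in results:
        agreement = (comment.get("extendedScore") or {}).get("agreement")
        if agreement is not None:
            pairs.append((float(comment["baseScore"]), float(agreement)))
    return pairs

def pearson(pairs: list[tuple[float, float]]) -> float:
    """Plain Pearson correlation, computed directly to avoid extra dependencies."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else float("nan")

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        pairs = karma_agreement_pairs(url)
        if pairs:
            print(f"{name}: r = {pearson(pairs):.2f} over {len(pairs)} comments")
        else:
            print(f"{name}: no agreement-vote data returned")
```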
Huh? Why would it matter whether or not I’m part of “the outgroup”...? What does that mean?
I think this is a significant reason why people downvote some, but not all, things they disagree with, especially when a member of the outgroup makes arguments EAs have refuted before and would need to re-explain (not saying that’s actually you).
What is “the outgroup”?
Claude thinks possible outgroups include the following, which is similar to what I had in mind:
Based on the EA Forum’s general orientation, here are five individuals/groups whose characteristic opinions would likely face downvotes:
Effective accelerationists (e/acc) - Advocates for rapid AI development with minimal safety precautions, viewing existential risk concerns as overblown or counterproductive
TESCREAL critics (like Emile Torres, as you mentioned) - Scholars who frame longtermism/EA as ideologically dangerous, often linking it to eugenics, colonialism, or techno-utopianism
Anti-utilitarian philosophers—Strong deontologists or virtue ethicists who reject consequentialist frameworks as fundamentally misguided, particularly on issues like population ethics or AI risk trade-offs
Degrowth/anti-progress advocates—Those who argue economic/technological growth is net-negative and should be reduced, contrary to EA’s generally pro-progress orientation
Left-accelerationists and systemic change advocates—Critics who view EA as a “neoliberal” distraction from necessary revolutionary change, or who see philanthropic approaches as fundamentally illegitimate compared to state redistribution
a) I’m not sure everyone in those categories would necessarily count as an outsider to EA (e.g. Will MacAskill only assigns a 50% probability to consequentialism being correct, and he and others in EA have long emphasized pluralism about normative ethical theories; there’s been an EA system change group on Facebook since 2015, and discourse around systemic change has been happening in EA since before then)
b) Even if you do consider people in all those categories to be outsiders to EA or part of “the out-group”, us/them or in-group/out-group thinking seems like a bad idea, possibly leading to insularity, incuriosity, and overconfidence in wrong views
c) It’s an especially bad idea not only to think in in-group/out-group terms and seek to shut down the perspectives of “the out-group”, but also to cast suspicion on the in-group/out-group status of anyone in an EA context who you happen to disagree with about something, even something minor — that seems like a morally, subculturally, and epistemically bankrupt approach
You’re shooting the messenger. I’m not advocating for downvoting posts that smell of “the outgroup”, just saying that this happens in most communities centered around an ideological or even methodological framework. It’s a way you can be downvoted while still being correct, especially by the LEAST thoughtful 25% of EA Forum voters
Please read the quote from Claude more carefully. MacAskill is not an “anti-utilitarian” who thinks consequentialism is “fundamentally misguided”, he’s the moral uncertainty guy. The moral parliament usually recommends actions similar to consequentialism with side constraints in practice.
I probably won’t engage more with this conversation.
I don’t know what he meant, but my guess FWIW is this 2014 essay.
I understand the general concept of ingroup/outgroup, but what specifically does that mean in this context?
I don’t know, sorry. I admittedly tend to steer clear of community debates as they make me sad, probably shouldn’t have commented in the first place...