Didn’t separate karma for helpfulness and disagreement (frequently used on LessWrong) get implemented on the EA Forum recently? This post feels like the ideal use case for it:
There are some controversial comments with weakly positive karma despite lots of votes, where I suspect what’s going on is some people are signalling disagreement with downvotes, and others are signalling ‘this post constitutes meaningful engagement’ with upvotes.
There are also some comments where the tone seems to me to be over the line, with varying amounts of karma (from very positive to very negative), from various people.
Were a two-karma system available, I think I would use both [strong upvote, strong disagree] and [strong downvote, strong agree] at least once each.
I notice a two-karma system has been implemented in at least one EA Forum post before; see the comments section of this “Fanatical EAs should support very weird projects” post.
The forum did offer the chance of having agree/disagree voting on the post; I just forgot to respond. I think it is a beta feature, but I’m happy for it to be used on this post.
I think we also need renewed discussion of how the karma system contributes to groupthink and hierarchy, things that, to put it gently, EA sometimes struggles with somewhat.
As far as I can tell, the system gives far more voting power to highly-rated users, allowing a few highly active (and thus most likely highly orthodox) forum users to unilaterally boost or tank any given post.
This is especially bad when you consider that low-karma comments are hidden, allowing prominent figures (often with high karma scores) to soft-censor their own critics.
This is especially worrying given the groupthink that emerges on internet fora, where a comment having a score of −5 makes it much more likely for people to downvote it further on reflex, and vice versa.
I am not going to go into details here beyond saying that this is the plot of the MeowMeowBeenz episode of Community.
MeowMeowBeenz does not contribute to good epistemics.
I disagree that the problem here is groupthink, and I think if you look at highly rated posts, you can’t reasonably conclude that people who criticise the orthodox position will be reliably downvoted. I think the problem here is that some people vote based on tone and some on content, which means that when something is downvoted different people draw different conclusions about why.
I think the problem here is that some people vote based on tone and some on content,
I hope to encourage more people to instead upvote based on rigor/epistemics/quality on the margin, rather than based on tone, agreement (which is part of “content”), or vibe.
EDIT: I also think a surprisingly high number of people upvote low-quality criticisms that have a good tone, which makes me surprised when others assert that the movement is systematically biased against criticism (“insufficient discernment” would be a fairer criticism, but that’s a mistake, not a bias).
Doubtful if you look at Gideon’s first comment and remember it was downvoted through the floor almost immediately.
Questioning orthodoxy is ok within some bounds (often technical/narrow disagreements), or when expressed in suitable terms, e.g.
(Significant) underconfidence, regardless of one’s own expertise and/or the lack of expertise among those criticised
Unreasonable assumptions of good faith, even in the face of hostility or malpractice (double standards, perhaps a lesser form of the expectation of a ‘perfect victim’)
Extensive use of EA buzzwords
Huge amounts of extra work/detail that would not be deemed necessary for non-critical writing
Essentially making oneself as small as possible so as not to set off the Bad Tone hair-trigger
This is difficult, because knowing what you are talking about, and being lazily dismissed by people you know for a fact know far less than you about the subject matter, makes one somewhat frustrated.
As several EAs have noted, e.g. weeatquince, this is time-consuming and (emotionally) exhausting, and often results in dismissal anyway.
This is even harder to pull off when questioning sensitive issues like politics, funding ethics, foundational intellectual issues (e.g. the ways in which the TUA uses utterly unsuitable tools for its subject matter due to a lack of outside reading), competence of prominent figures, etc.
I actually think this forms a sort of positive feedback loop, where EAs become increasingly orthodox (and confident in that orthodoxy) due to perceived lack of substantive critiques, which makes making those critiques so frustrating, time-consuming, and low-impact that people just don’t bother. I’ve certainly done it.
Quantitatively, if you look at the top 10 most upvoted posts:
4 are straightforwardly criticisms (“Free-spending EA might be bad...”, “Bad Omens”, “case against randomista development”, “Critiques of EA”)
4 are partial criticisms (“Long-Termism” vs. “Existential Risk”, “My mistakes on the path to impact”, “EA for dumb people?”, “Are you really in a race?”)
1 (the most upvoted) was a response to criticism (“EA and the current funding situation”)
1 was about the former EAForum head leaving (“Announcing my retirement”)
This is a total of 40-80%, depending on how you count.
(In the next 10 posts, I “only” see 3 posts that are criticisms, but I don’t think that 30% is particularly low either. It does get lower further down however).
I think this is a non-sequitur in response to A.C.Skraeling’s comment. They said:
Questioning orthodoxy is ok within some bounds (often technical/narrow disagreements), or when expressed in suitable terms
A high percentage of the most upvoted posts of all time being criticism of some sort is perfectly compatible with this.
Here’s a recent case of someone questioning orthodoxy (writing a negative review of WWOTF), not bothering to express it in EA-friendly enough language, and subsequently being downvoted to a trollish level (−12) for it despite their content being much better than that: https://forum.effectivealtruism.org/posts/AyPTZLTwm5hN2Kfcb/book-review-what-we-owe-the-future-erik-hoel
I don’t find this example convincing. I just read the review and found it pretty underwhelming. Take this:
MacAskill’s personal position is basically to throw up his hands, declare that none of the solutions to the problems with utilitarianism look very good, and we should just compromise between various repugnant theories of how to deal with populations, hoping that whatever compromise we take isn’t that bad.
The paragraph is reacting to the following passages in WWOTF:
There is still deep disagreement within philosophy about what the right view of population ethics is. . . Indeed, I don’t think that there’s any view in population ethics that anyone should be extremely confident in.
If you want to reject the Repugnant Conclusion, therefore, then you’ve got to reject one of the premises that this argument was based on. But each of these premises seem incontrovertible. We are left with a paradox. One option is to simply accept the Repugnant Conclusion. . . This is the view that I incline towards. Many other philosophers believe that we should reject one of the other premises instead.
Like all views in population ethics, the critical level view has some very unappealing downsides.
There is still deep disagreement about what the right view of population ethics is. . . Indeed, I don’t think that there’s any view in population ethics that anyone should be extremely confident in.
But MacAskill is here describing problems with rival views in population ethics, not problems with utilitarianism! Because the author (1) conflates the two, (2) mischaracterizes the Repugnant Conclusion (“worlds where all available land is turned into places worse than the worst slums of Bangladesh”), and (3) fails to distinguish the Repugnant Conclusion from standard “repugnant” implications of utilitarianism that have nothing to do with it, he ends up attributing to longtermism a number of “ridiculous” views that do not in fact follow from that position.
Separately, if criticizing WWOTF is considered to be a paradigmatic case of “heterodoxy”, it seems worth mentioning that a recent critical review by Magnus Vinding has been very favorably received (179 karma at the time of writing).
This response completely ignores the main point of my comment.
Separately, if criticizing WWOTF is considered to be a paradigmatic case of “heterodoxy”, it seems worth mentioning that a recent critical review by Magnus Vinding has been very favorably received (179 karma at the time of writing).
Please reread my comment: the whole point was that A.C.Skraeling said that criticism is accepted within some boundaries, or when expressed in suitable terms. You essentially just repeated Linch’s point, when my whole point was that Linch’s point is perfectly compatible with what A.C.Skraeling said.
Regarding Hoel’s review, you seem to have read my point as being that it was particularly good or convincing to EAs, which is incorrect. My point was that it was downvoted to −12, a karma score I associate with trollish posts, despite its content being much better than that, because of the combination of criticizing EA orthodoxy (longtermism, utilitarianism, population ethics etc) and not expressing it in a suitable manner. This makes it a decent example of what A.C.Skraeling said. You are free to disagree of course.
Please reread my comment because the whole point was that A.C.Skraeling said that criticism is accepted within some boundaries, or when expressed in suitable terms.
I did misread some parts of your original comment. I thought you were saying that criticizing WWOTF was itself an example of criticism that is beyond the bounds Skraeling was describing. But I now see that you were not saying this. My apologies. (I have crossed out the part of my comment that is affected by this misreading.)
Regarding Hoel’s review, you seem to have read my point as being that it was particularly good or convincing to EAs, which is incorrect.
That is not how I read your point. I interpreted you as saying that the quality of the book review justified higher karma than it received (which is confirmed by your reply). My comment was meant to argue against this point, by highlighting some serious blunders and sloppy reasoning by the author that probably justify the low rating. (-12 karma is appropriate for a post of very low quality, in my opinion, and not just a trollish post.)
Thanks for the retraction.
Regarding the Hoel piece, the fact that you highlighted the section you did, and the way you analyzed it, suggests to me you didn’t understand what his position was, and didn’t try particularly hard to do so. I don’t think you can truly judge whether his content is very low quality if you don’t understand it. Personally, I think he made some interesting points that really engage with some cores of EA, even if I disagree with much of what he said. I completely disagree that his content, separate from its language and tone towards EAs, is anywhere near very low quality, certainly nowhere near −12. If you want to understand his views better, I found his comments replying to his piece on why he’s not an EA illuminating, such as his response to my attempted summary of his position. But we can agree to disagree.
Edit note: I significantly edited the part of this comment talking about Hoel’s piece within a few hours of posting with the aim of greater clarity.
I disagree that the problem here is groupthink, and I think if you look at highly rated posts, you can’t reasonably conclude that people who criticise the orthodox position will be reliably downvoted
The highly rated posts I’ve seen so far, on the topic of X-risk in particular, appear to me to typically be a product of group think. They’re typically very articulate, in very polished form, and highly informed on details (i.e. good academic style), but not escaping group think.
As evidence, please direct us to the writers here who have been laser focused on the critical importance of managing the pace of the knowledge explosion. Where are they? If they exist, and I sincerely hope they do (because I don’t have the authority to sell the case) I really would like to meet them.
In my opinion, group think is to some immeasurable degree built into the fabric of academia, because academia is a business, and one does not stay in business by alienating one’s clients. Thus, to the degree the academic depends on their salary, they are somewhat imprisoned within the limits of whoever is signing their paycheck.
Here’s an example.
I can afford to persistently sell a “world without men” idea as an ambitious solution to human violence because nobody owns me; I have nothing of value at stake. Whatever the merits of such a case might be (very clearly debatable), academics can’t afford to make that case, because the group consensus of their community will not tolerate it. And before you protest, know that I’ve already been threatened with banning on this site just for bringing the subject up.
Academia is business, and is thus governed by fear, and that is the source of group think.
If you would, please down vote this post at least 100 times, as I believe I’ve earned it. :-)
I’m looking at your profile: you have almost nothing but downvotes, but I haven’t seen you say anything dumb—just sassy. FWIW, I really like this comment.
I frequently catch myself, and I’m embarrassed to admit this, being more likely to upvote posts of users that I know. I also find myself anchoring my vote to the existing vote count (if a post has a lot of upvotes, then I am less likely to downvote it). Pretty sure I’m not the only one.
Furthermore, I observe how vote count influences my reading of each post more than it should. Groupthink at its best.
I suspect that if the forum hid the vote count for a month, there would be significant changes in voting patterns. That being said, I’m not sure these changes would actually influence the vote-sorted order of the postings—but they might. I suspect it would also change the nature of certain discussions.
Admirable honesty, well done.
Why was this downvoted?
Because the voting system is in place to encourage high school students to participate in EA discussion. If you were to say something like “I still think Britney Spears is cool” then you’re gonna get down voted, so I’d try to avoid that topic if you can.
This time it’s me who downvoted. The first part (high school students) doesn’t seem close to being true, and the second (Britney Spears) is not related at all to the discussion?
Thank you for not being anonymous, and for explaining your down vote. That’s all I’ve been requesting from the beginning. I agree my colorful language was an imprecise description of the situation.
PS: Holy cow, I got −24 from just 6 votes. That’s awesome. I predict I will soon be the king of down votes! High school systems require high school participation.
Thank you so much, you’ve said what I’ve been thinking, better than I’ve been saying it.
Maybe this is helpful, not sure.
At least part of the issue may be the academic roots of EA. Academics turn intellectual inquiry into a business, which introduces some competing agendas into the process. Academics often like to pose themselves as rebels, but I think it’s closer to the truth to say that they are somewhat imprisoned within the group consensus of academic culture. You know, if you’re trying to put your kids through college using your salary as a professor, you might have to sidestep controversial ideas that could get you in trouble with whoever is writing your paycheck.
Point being, there may be some built-in aversion to unusual ideas, which then gets fed into the reputation voting system.
To me it seems more like EA’s STEMlord-ism and roots in management consultancy, and its consequent maximiser-culture, rejection of democracy, and heavy preference for the latter aspect of the explore-exploit tradeoff.
“Number go bigger” etc. with a far lower value placed on critical reason, i.e. what the number actually is.
Orthodoxy is very efficient, you just end up pointed in the wrong direction.
I do think it’s reasonable to feel frustrated by your experience commenting on this post. I think you should have been engaged more respectfully, with more of an assumption of good faith, and that a number of your comments shouldn’t have been so heavily downvoted. I do also agree with some of the concerns you’ve raised in your comments and think it was useful for you to raise them.[1]
At the same time, I do think this comment isn’t conducive to good conversation, and the content mostly strikes me as off-base.
The EA community doesn’t have its roots in management consultancy. Off the top of my head, I can’t think of anyone who’s sometimes considered a founding figure (e.g. Singer, Parfit, Ord, MacAskill, Yudkowsky, Karnofsky, Hassenfeld) who was a management consultant. Although the community does have some people who were or are management consultants, they don’t seem overrepresented in any interesting way.
At least on the two most obvious interpretations, I don’t think the EA community rejects democracy to any unusual degree. If you mean “people involved in EA reject democracy as a political system,” then I think I’ve literally never heard anyone express pro-autocracy views. If you mean “organizations in the EA space reject directly democratic approaches to decision-making,” then that is largely true, but I don’t think it’s in any way a distinctive feature of the community. I think that almost no philanthropic foundations, anywhere, decide where to give money using anything like a popular vote; I think the same is generally true of advocacy and analysis organizations. I’d actually guess that EA organizations are somewhat more democratic-leaning than comparable organizations in other communities; for example, FTX’s regranting program is both pretty unusual and arguably a bit “more democratic” than other approaches to giving away money. (If you mean something else by “rejection of democracy,” then I apologize for the incorrect interpretations!)
Lastly, I don’t think the EA community has an unusually heavy preference for the exploit end of the explore-exploit trade-off; I think the opposite is true. I can’t think of any comparable community that devotes a larger amount of energy to the question “What should we try to do?”, relative to actually trying to do things. I think this is actually something that turns off a lot of entrepreneurial and policy-minded people who enter the community, who want to try to accomplish concrete things and then get discouraged by what they perceive as a culture of constant second-guessing and bias against action.[2]
For example, although I’m on balance in favor of the current strong upvote system, I agree it also has important downsides. And although I’m pretty bearish on the value of standard academic peer-review processes, I do think it’s really useful for especially influential reports to be published alongside public reviews from subject matter experts. For example, when it publishes long reports, OpenPhil sometimes also publishes open reviews from subject matter experts; I think it would be great to see more of that, even though it’s costly.
On the other hand, even though I don’t like the term, I do think it’s fair to say there’s an unusually large “STEMlord-ism” undercurrent to the culture. People often do have much more positive impressions of STEM disciplines (+econ and the more technical parts of analytic philosophy), relative to non-STEM disciplines. I think this attitude isn’t necessarily wrong, but I do think you’re correct to perceive that it’s there.
This is pretty far afield from what the post is about, but to me the most natural reason why someone might say EA rejects democracy is neither of the two interpretations you mentioned, but rather that EAs are technocrats suspicious of democracy. To quote Rob Reich:
In my experience, effective altruists are unabashed technocrats. They seek to maximize good in the world, and they deploy the best evidence they can marshal to identify the mechanisms by which one can pursue that goal. Effective altruists might locate instrumental value in politics—to the extent that political engagement is necessary to promote good—but not, I suspect, intrinsic value.
Plato identified the best city as that in which philosophers were the rulers. Effective altruists see the best state of affairs, I think, as that in which good-maximizing technocrats are in charge. Perhaps it is possible to call this a politics: technocracy.
But this politics is suspicious of, or rejects, the form of politics to which most people attach enormous value: democracy. Would effective altruists attach any independent value to democracy? Given the chance to craft social and political arrangements from scratch, would effective altruists select democratic rather than technocratic rule? I suspect the answer is no, and to that extent, effective altruism is in tension with the commonplace philosophy that identifies in democracy a powerful normative force.
I upvoted since I also thought Ben’s claims in that section were too strong.
That said, I think “suspicious of democracy” seems fairly extreme as a way to describe it. I think some EAs are healthily skeptical that democracy is the best possible governance mechanism (or, more controversially, the best realistically attainable governance mechanism).
I would certainly consider myself one of them. I think we should generally have a healthy degree of skepticism towards our existing institutions, and I don’t see clear reasons why we should privilege the “democracy” hypothesis over technocracy or more futuristic setups, other than general conservatism (“Chesterton’s fence”) preferences/heuristics. In contrast, we have substantially more evidence for the benefits of democracies over monarchies or other autocratic systems.
I do think the track record of so-called elite people overestimating the efficiency gains of less free systems is suboptimal (LOL at the 1950s economists who thought that the Soviet Union would be more productive than the US). But I don’t think bias arguments should be dominant.
I don’t think the EA community rejects democracy to any unusual degree
Every time the issue of taxes comes up, it’s a very popular opinion that people should avoid as much taxes as possible to redirect the money to what they personally deem effective. This is usually accompanied by insinuations that democratically elected governments are useless or harmful.
While it is true that aid and charity in general tend to be far from democratic, it is also widely accepted that they often cause harm or just fail to have an effect—indeed, this is the basis for our very movement. There are also many known cases where bad effects were the result of lack of participation by the recipients of aid. So it’s not enough to be “no less democratic than other charity orgs”. I believe we should strive to be much more democratic than that average—which seems to me like a minority view here.
I’m assuming you’re right about the amount of democracy in other non-profits, but the situation in my country is actually different. All non-profits have members who can call an assembly and have final say on any decision or policy of the non-profit.
Thanks for the thoughtful comment!
So it’s not enough to be “no less democratic than other charity orgs”. I believe we should strive to be much more democratic than that average—which seems to me like a minority view here.
I do think that this position—“EA foundations aren’t unusually undemocratic, but they should still be a lot more democratic than they are”—is totally worthy of discussion. I think you’re also right to note that other people in the community tend to be skeptical of this position; I’m actually skeptical of it, myself, but I would be interested in reading more arguments in favor of it.
(My comment was mostly pushing back against the suggestion that the EA community is distinctly non-democratic.)
I’m assuming you’re right about the amount of democracy in other non-profits, but the situation in my country is actually different. All non-profits have members who can call an assembly and have final say on any decision or policy of the non-profit.
I’ve never heard of this—that sounds like a really interesting institutional structure! Can I ask what country you’re in, or if there’s anything to read on how this works in practice?
Every time the issue of taxes comes up, it’s a very popular opinion that people should avoid as much taxes as possible to redirect the money to what they personally deem effective. This is usually accompanied by insinuations that democratically elected governments are useless or harmful.
The first part of this does seem like a pretty common opinion to me—fair to point that out!
On the second: I don’t think “democratic governments are useless or harmful” is a popular opinion, if the comparison point is either to non-democratic governments or no government. I do think “government programs are often really inefficient or poorly targeted” and “governments often fail to address really important issues” are both common opinions, on the other hand, but I don’t really interpret these as being about democracy per se.[1]
One thing that’s also complicated, here, is that the intended beneficiaries of EA foundations’ giving tend to lack voting power in the foundations’ host countries: animals, the poor in other countries, and future generations. So trying to redirect resources to these groups, rather than the beneficiaries preferred by one’s national government, can also be framed as a response to the fact that (e.g.) the US government is insufficiently democratic: the US government doesn’t have any formal mechanisms for representing the interests of most of the groups that have a stake in its decisions. Even given this justification, I think it probably would still be a stretch to describe the community tendency here as overall “democratic” in nature. Nonetheless, I think it does at least make the situation a little harder to characterize.
At least speaking parochially, I also think of these as relatively mainstream opinions in the US rather than opinions that feel distinctly EA. Something I wonder about, sometimes, is whether cross-country differences are underrated as a source of disagreement within and about the EA community. Your comment about how non-profits work in your country was also thought-provoking in this regard!
One thing that’s also complicated, here, is that the intended beneficiaries of EA foundations’ giving tend to lack voting power in the foundations’ host countries: animals, the poor in other countries, and future generations. So trying to redirect resources to these groups, rather than the beneficiaries preferred by one’s national government, can also be framed as a response to the fact that (e.g.) the US government is insufficiently democratic: the US government doesn’t have any formal mechanisms for representing the interests of most of the groups that have a stake in its decisions.
I don’t disagree, but I think the discussion is not that simple. When it comes to “legitimate” EA money, I think it would be much better to have some mechanism that includes as many of the potential beneficiaries as possible, rather than one national government. I just view tax money as “not legitimate EA money”. (Edit: and I see people who do want to avoid taxes as wanting to subvert the democratic system they’re in, in favor of their own decision-making.)
Can I ask what you’re country you’re in, or if there’s anything to read on how this works in practice?
I live in Israel. A short Google search didn’t turn up much in terms of English-language information about this, other than this government document outlining the relevant laws and rules. The relevant part of it is the chapter about the institutions of an Amuta (an Israeli non-profit), starting on page 9.
In practice, since members have to be admitted by already existing bodies of the non-profit, the general assembly can be just the executive board and the auditor(s), and thus be meaningless. I’m sure this happens often (maybe most of the time). In particular, EA Israel (the org) has very few members. But I’ve been a member of a non-profit with a much larger (~100 people) general assembly in the past.
You can draw some parallels between the general assembly and a board of directors (Edit: trustees? I don’t know what the right word is). On the other hand, you can also draw parallels between the executive board and a board of directors—since in many (most?) cases, including EA Israel, the actual day-to-day management of the non-profit is done by a paid CEO and other employees. So the executive board makes strategy decisions and oversees the activity, and doesn’t implement it itself. Meaning it’s kind of a board of directors, which still answers to a possibly much larger general assembly.
Thank you for providing an excellent example of how one should down vote, if that is what you’re doing. Not meaning to put words in your mouth, just applauding a reasoned challenge.
Thanks!
To be clear, though, I also don’t think people should feel like they need to write out comments explaining their strong downvotes. I think the time cost is too high for it to be a default expectation, particularly since it can lead to getting involved in a fraught back-and-forth and take additional time and energy that way. I don’t use strong downvotes all that often, but, when I do use them, it’s rare that I’ll also write up an explanatory comment.
(Insofar as I disagree with forum voting norms, my main disagreement is that I’d like to see people have somewhat higher bars for strong downvoting comments that aren’t obviously substanceless or norm-violating. I think there’s an asymmetry between upvotes and downvotes, since downvotes often feel aggressive or censorious to the downvoted person and the people who agree with them. For that reason, I think that having a higher bar for downvotes than for upvotes helps to keep discussions from turning sour and helps avoid alienating people more than necessary.)
To be clear, though, I also don’t think people should feel like they need to write out comments explaining their strong downvotes.
Ok, no problem, thanks for sharing that. For me, without explanations, the entire voting system, up and down, generates entirely worthless information. With explanations, there is an opportunity to evaluate the quality of the votes.
To be fair, I’ve been using forums regularly since they first appeared on the net, and this is probably the most intelligent forum I’ve ever discovered, for which I am indeed quite grateful. Perhaps the reason I’ve complained about the voting system is that, in my mind, it contaminates what is otherwise a pretty close to perfect site. The contrast between near perfection and high-school-level popularity-contest gimmickry offends my delicate aesthetic sensibility. :-)
Ha! STEMlord-ism. Good one! Though I noticed that the anonymous click happy hordes who can’t be bothered to explain their votes have already downvoted your STEMlord-ism comment, so that must mean it’s completely wrong. :-)
Well, you seem to be even more ruthless than myself on this topic, so we should get along great. That said, I have decided to stop swimming upstream and am now devoting myself to accumulating as many down votes as possible. That way, should anyone wish to find my posts, they can simply power scroll to the bottom of any listings, and there I’ll be! :-)