Forum users can, should, and do downvote posts that are bad, distracting, etc. (The trolls should soon get the message and leave.) I'm very opposed to top-down hierarchical interventions of the sort you describe. I don't particularly think that EA spaces should host "unequivocal condemnations" of things that (as you rightly note) have nothing to do with EA, so I'd also encourage people to downvote those. It's groupthinky and cringe, and risks being massively off-putting to the kinds of independent thinkers who value epistemic integrity and have little tolerance for groupthink or witch-hunts, however meritorious the message (or wicked the witches).
You should look into how universities work! Academic freedom means that individual professors are free to condemn whatever views they find obnoxious. They're also free to invite speakers that their colleagues find obnoxious, and sometimes they do (even, e.g., at Princeton). Their colleagues -- many of whom work on important problems! -- must then tolerate this. Note that many of the best universities follow the Chicago Principles & Kalven Report guidance on institutional neutrality, according to which the university leadership should express no official opinion on matters that aren't directly relevant to the running of the university. The university is a community of scholars, of diverse opinions, not a political party with a shared orthodoxy.
I would much prefer the EA community to model itself on universities rather than on political parties.
Again, that doesn't mean that anything goes. We already have the solution to bad contributions. It's called "downvoting".
Effective altruism is meant to be a social movement, not a university debate. And unlike in a university setting, there are zero requirements for someone to be accurate or to have relevant expertise before posting here.
It is common here for people with little expertise in a topic to do an arbitrary amount of online research and throw out their resulting opinions. This results in something like the post where someone cited Mankind Quarterly for their human genetics posts, without mentioning that it was a publication with a history of white supremacy, fraud, and incompetence. That issue was caught, eventually, but I guarantee you the forum is riddled with similar problems that are not caught.
For a regular topic, these loose standards may be acceptable, as they make it easier to throw out ideas and collaborate, and the air of loose discussion makes things fun. Someone may chime in with corrections, someone may not; ultimately it is not a big deal.
But when it comes to race science, the consequences of this sort of loose, non-quality-controlled discussion are worse. As the OP mentioned, you drive away minorities and make the forum an unpleasant place to be.
But it also might convince more people to be racist. At least one white supremacist has traced their radicalisation pipeline through LessWrong and Slate Star Codex. That was just one person out of forty, so perhaps it was a fluke, or perhaps it wasn't. Perhaps there are a few who didn't go all the way to posting on white supremacist forums, but became just a little bit more dismissive of black people on job applications. I don't know how high the cost is, but it exists.
The way I see it, the forum should either hold back any race-science-related post and ensure that every claim made within it is thoroughly fact-checked by relevant independent experts, or it should just ban the things. I prefer the latter, so we don't waste anybody's time.
In fairness, expertise is not required in all university settings. Student groups invite non-expert political figures to speak, famous politicians give speeches at graduation ceremonies, etc. I am generally against universities banning student groups from having racist/offensive speakers, although I might allow exceptions in extreme cases.
Though I am nonetheless inclined to agree that there is an important distinction between universities, whose central purpose is free, objective, rational debate, and EA as a movement, whose central purpose is carrying out a particular (already mildly controversial) ethical program, and which, frankly, is also in more danger of "be safe for witches, become 90% witch" than universities are. That distinction means EA should be less internally tolerant of speech expressing bad ideas.
You seem to be imagining the choice as being between "host bad discussions" or "do something about it via centralized hierarchical control". But I'm trying to emphasize a third option: "do something about it via decentralized mechanisms." (Posts with negative karma are basically invisible, after all.)
The downside of centralized responses is that it creates a precedent for people to use social/political pressure to try to impose their opinions on the whole community. Decentralization protects against that. (I don't so strongly object to the mods just deciding, on their own, to ban certain topics. What especially troubles me is social/political pressure aimed towards this end.)
As I see it, the crucial question to ask is which mechanism is more reliable: top-down control in response to social/political pressure from vocal advocates, or decentralized community judgment via "secret ballot" karma voting. The primary difference between the two is that the former is more "politicized" and subject to social desirability bias. (A secondary effect of politicization is to encourage factions to fight over control of this new power.) So I think the decentralized approach is much better.
One important difference between the Forum and most other fora is the strong vote: a minority faction can use their strong votes to keep things in circulation unless the majority acts the same way. The Forum is also small enough for brigading to be a major concern.
I think encouraging people who take that view to just strong-downvote race science material off the Forum poses its own set of epistemic problems for the Forum. And encouraging them to "merely" downvote may not be effective if the other side is employing their strong votes.
Maybe this proposed solution would be more viable if there were special voting rules for certain topics at mod discretion -- e.g., 500 karma required to vote, no strong votes allowed? I'm not sure, though. If all the race science folks vote and everyone else mostly stays away, the result is a false sense of community views.
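To make the idea concrete, here's a minimal sketch of what such a per-topic rule could look like (purely illustrative: the field names, the fixed strong-vote weight, and the thresholds are my own assumptions, not how the Forum's software actually works):

```python
# Hypothetical sketch of a per-topic voting restriction, as proposed above.
# The field names and thresholds are illustrative assumptions, not the
# Forum's actual moderation API; strong votes are assumed to count as 3.

from dataclasses import dataclass

@dataclass
class VotingRules:
    min_karma_to_vote: int = 0      # e.g., 500 for restricted topics
    allow_strong_votes: bool = True

RESTRICTED_TOPIC_RULES = VotingRules(min_karma_to_vote=500, allow_strong_votes=False)

def vote_weight(user_karma: int, is_strong_vote: bool, rules: VotingRules) -> int:
    """Return the weight a vote counts for under the given rules (0 = rejected)."""
    if user_karma < rules.min_karma_to_vote:
        return 0
    if is_strong_vote and not rules.allow_strong_votes:
        return 1  # downgrade strong votes to normal votes on restricted topics
    return 3 if is_strong_vote else 1

# Example: a 600-karma user strong-voting on a restricted topic counts as a normal vote;
# a 100-karma user's vote is rejected entirely.
print(vote_weight(600, True, RESTRICTED_TOPIC_RULES))   # 1
print(vote_weight(100, False, RESTRICTED_TOPIC_RULES))  # 0
```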
Still, I support a ban -- race science is off topic, there are other places people can go if they want to talk about it, and these discussions cause significant harmful effects. If discussions of why the New England Patriots and New York Yankees were a scourge on humanity were causing these problems for the Forum, I'd support a ban even though I believe those teams are. :)
Karma is not a straightforward signal of the value of contributions
This statement and the idea of karma as the decentralized solution to the problems OP describes feel overconfident to me. To reference this comment, I also would push back on karma not being subject to social desirability bias (e.g., someone sees a post already has relatively high karma, so they're more inclined to upvote it knowing that others on the Forum or in the EA community have, even if they, let's say, haven't read the whole post).
I would argue that karma isn't a straightforward or infallible signal of "bad" or "good" contributions. As those working on the Forum have discussed in the past, karma can overrate certain topics. It can signal interest from a large fraction of the community, or "lowest-common-denominator" posts, rather than the value or quality of a contribution. As a current Forum staff member put it, "the karma system is designed to show people posts which the Forum community judges as valuable for Forum readers."
I would note, though, that karma also does not straightforwardly represent the opinions of the Forum community as a whole regarding what's valuable. The recent data from the 2023 EA Forum user survey shows that a raw estimate of 46.5% of those surveyed and a weighted estimate of 70.9% of those surveyed upvoted or downvoted a post or comment. Of 13.7k distinct users in a year, 4.4k are distinct commenters, and only 171 are distinct post authors. Engagement across users is "quite unequal," and a small number of users create an outsized amount of comments, posts, and karma. Weighted upvotes and downvotes also mean that certain users can have more influence on karma than others.
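To illustrate how a raw and a weighted estimate can diverge (this is only a toy sketch with made-up numbers and weights, not the survey team's actual methodology):

```python
# Illustrative sketch of raw vs. weighted estimates of the share of surveyed
# users who voted. The data and the engagement-based weights are invented;
# this is not the EA Forum survey team's actual weighting scheme.

respondents = [
    # (voted_on_forum, engagement_weight) -- weight might reflect activity level
    (True, 5.0),
    (True, 3.0),
    (False, 0.5),
    (False, 0.5),
    (True, 4.0),
    (False, 1.0),
]

raw_estimate = sum(voted for voted, _ in respondents) / len(respondents)

total_weight = sum(w for _, w in respondents)
weighted_estimate = sum(w for voted, w in respondents if voted) / total_weight

print(f"raw: {raw_estimate:.1%}, weighted: {weighted_estimate:.1%}")
# In this made-up example the weighted estimate comes out higher than the raw
# one -- the same direction as the 46.5% vs. 70.9% gap quoted above.
```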
I appreciate the karma system and its values (of which there are several!), and maybe your argument is that more people should vote and contribute to the karma system. I just wanted to point out how karma seems to currently function and the ways it might not directly correlate with value, which brings me to my next point…
Karma seems unlikely to address the concerns the OP describes
Without making a claim for or against the OP's proposed solutions, I'm unsurprised by their proposal for a centralized approach. One argument against relying on a mechanism like karma, particularly for discussions of race on the Forum, is that it hasn't been a solution for upholding the values or conditions I think the OP is referencing and advocating for (like not losing the potential involvement of people who are alienated by race science, engaging in broader intellectual diversity, and balancing the implications of truth-seeking with other values).
To give an example: I heard from six separate people involved in the EA community that they felt alienated by the discussions around Manifest on the Forum and chose not to engage or participate (and for a few of them, this was close to a last straw for wanting to remain involved in EA at all). The costs and personal toll of engaging felt too high, so they didn't add their votes or voices to the discussion. I've heard of this dynamic happening in different race-related discussions on the Forum in the past few years, and I suspect it leads to some perspectives being more represented on the Forum than others (even if they might be more balanced in the EA community or movement as a whole). In these situations, the high karma of some topically related comments or posts in fact seemed to further some of the problems OP describes.
I respect and agree with wanting to maintain a community that values epistemic integrity. Maybe you think that the costs incurred by race science discussions on the Forum are not high enough to justify banning discussion of the topic, which is an argument to be made. I would be curious what other ideas or proposals you have for addressing some of the dynamics OP describes, or your thoughts on the tradeoffs between allowing/encouraging discussions of race science in EA-funded spaces and the effects that can have on the community or the movement.
Academic freedom is not and has never been meant to protect professors on topics that have no relevance to their discipline: "Teachers are entitled to freedom in the classroom in discussing their subject, but they should be careful not to introduce into their teaching controversial matter which has no relation to their subject. Limitations of academic freedom because of religious or other aims of the institution should be clearly stated in writing at the time of the appointment."
If, say, a philosophy professor wants to express opinions on infanticide, that is covered under academic freedom. If they want to encourage students to drink bleach, saying it is good for their health, that is not covered.
We can and should have a strong standard of academic freedom for relevant, on-topic contributions. But race science is off topic and irrelevant to EA. It's closer to spam. Should the forum have no spam filter and rely on community members to downvote posts as the method of spam control?
You elsewhere link to this post as a "clear example of a post that would be banned under the rules". That post includes the following argument:
The post concludes, "EA's existing taboos are preventing it from answering questions like these, and as new taboos are accepted, the effectiveness of the movement will continue to wain."
You may well judge this to be wrong, as a substantive matter. But I don't understand how anyone could seriously claim that this is "off topic and irrelevant to EA." (The effectiveness of the movement is obviously a matter of relevant concern for EA.) People's tendency to dishonestly smuggle substantive judgments under putatively procedural grounds is precisely why I'm so suspicious of such calls for censorship.
As an Ashkenazi Jew myself, I find that saying "we'd like to make everyone like Ashkenazi Jews" feels just like a mirror image of Nazism that very clearly should not appear on the forum.
I'm not making any claims either way about that. I'm just pointing out (contra Matthew) that it is clearly not "irrelevant spam". Your objections are substantive, not procedural. Folks who want to censor views they find offensive should be honest about what they're doing, not pretend that they're just filtering out viagra ads.
I think it is irrelevant, and in every context where I've seen it presented as "on topic" in EA, the connection between it and any positive impact was simplistic to the point of being imaginary, while at the same time promoting dangerous views -- just like in the post you quoted.
Topical relevance is independent of the position one takes on a topic, so the rule you're suggesting also implies that condemnations of race science are spam and should be deleted. (I think I'd be fine with a consistently applied rule of that form. But it's clearly not the OP's position.)
Thanks for sharing your thoughts.
Some thoughts on your suggestions of where to go from here:
I'm somewhat sympathetic to banning discussion on the forum. I think this topic takes up way more attention than it deserves, and it generates that attention because of its edgy and controversial status.
However:
I'd at least have some worry that this would be taken by people as confirming evidence for unfortunate beliefs they already held --
Some people might think that this was evidence in favour of scientific racism (why else would they try to close down discussion?)
Some people might think this was evidence of widespread belief in scientific racism in EA (what are they trying to hide?)
This would be in tension with another of your suggestions, "EAs should be empowered to speak out against race science and its proponents"
I think that people are currently welcome to speak out on this, including on the forum, and this often attracts a lot of upvotes
I can't see a good way to draw boundaries which would continue to allow this, while also banning the discussion you don't want to see on the forum
Note that I think there is very little actual discussion of race science on the forum; most of the discussion is about social responses
(The most substantive thing I remember reading is someone saying basically "FYI, I looked into this and the claims of scientific racism seem to be false", although of course there may be things I missed)
It seems kind of unhealthy to facilitate people saying "X makes me uncomfortable" while in the same space banning people from saying "X doesn't make me uncomfortable" (though I think it's fine to ban direct discussion of X)
Overall, this makes me feel worse about banning than not-banning, but I could imagine being persuaded otherwise on that point, and am curious about your takes on the downsides
On "Major EA organizations and leaders should publicly disavow race science and human biodiversity" --
I'm into public condemnation of problematic actions, e.g. the CEA statement on Bostrom's old emails:
"Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom's recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community."
I think it would be inappropriate for people to present as epistemic authorities on topics where they aren't
On "EA should avoid any public association with people who have a history of making statements sympathetic to race science" --
I don't think EA does have any direct public or private association with folks like Hanania
This is IMO absolutely the right choice
It's possible that that lack of association should be more publicised
It's kind of funny to stress it? It's not like "we are cutting ties", because I don't think there ever were any ties
But maybe it would be helpful to do anyway
However, EA is publicly associated with orgs (or at least with Manifest) which are publicly associated with Hanania
But here the grounds for rejecting public association seem much shakier
Manifest may be making mistakes, but they seem to be grounded in a desire for intellectual freedom
I can see an argument for making public statements to make clear that Manifest's actions are not the ones EA orgs would choose
Generically these feel a bit bad-neighbourly (like it's impolite to criticise other people's choices, rather than just get on and do your own thing well), but if there's a risk that people are perceiving EA as making the choices that Manifest does, perhaps it would be worthwhile
I think the idea that it would be appropriate to disavow association with Bostrom is wrong
I think it's right and good to condemn his old emails
I think his apology was underwhelming in its insight about the harms of the original email, and it's fair to criticise that (although I think it's important that he repudiated the original email, and I believe him to be sincere in that)
The University of Oxford investigated here, and they concluded that he was not a racist and did not hold racist views (source: quotes from the outcome of that investigation, now included at the bottom of Bostrom's apology)
Although the nearly-three-decade-old emails were obviously problematic, I worked with Bostrom for several years and never observed anything even resembling edginess about these topics; nor have I heard reports of that from anyone who did
I also think he is clearly a person of high integrity, and generally sincere (even at times when it might be politically convenient for him to be less sincere), and while I'm sympathetic to your concerns about good faith in Hanania's case, I think it would be quite unfair to tar Bostrom with the same brush
Can you link to concrete examples of things on the EA Forum that would be deleted under the proposed new EA Forum rules?
I tried searching for "human biodiversity", but few of the results seem like the kind of post I would guess you want deleted.
What turned up was mostly about the Manifest or Bostrom controversy. I am guessing you do not want to delete these. Or this post. In the wake of the Bostrom controversy there was also this heavily downvoted post that complained about "wokism". I am guessing this is the type of post that you want to see deleted. There is also this upvoted comment that argues against "human biodiversity", which, if I interpret your proposed rule change correctly, should also be deleted. (A rule that says "you are allowed to argue against HBD, but not for it" would be naive IMO, and I do not get the impression that you would endorse such a rule).
Overall, I do not remember seeing people discussing "human biodiversity" on the object level. It indeed seems off-topic for EA. And explicitly searching for it does not bring up a lot, and only in relation to EA controversies.
A clear example of a post that would be banned under the rules: why-ea-will-be-anti-woke-or-die.
My hope is that in practice it would be pretty rare for this rule to be invoked, although I think it does depend a bit on how the final rule is worded. The comment you linked arguing against human biodiversity is a tough edge case. On the one hand, I am a lot more concerned about people arguing for human biodiversity than against it, but on the other hand it doesn't seem like the end of the world if a prohibition on discussing the topic also took down comments like that.
IMO the forum rule I proposed is the least important of the reforms/policies I suggested. The value comes more from signalling an opposition to racism/race science than it does from actually taking down a couple of comments here and there. Given how controversial the rule is, it would clearly be a pretty costly signal. That seems good by the lights of "making it more likely people of color engage with the EA movement."
I'll be honest, I'm not a fan of the argumentation style of this post.
It makes some good points, but for my liking, too much of it is designed to use social pressure to circumvent rational discussion of what action the community should take. It also encourages EA to focus on maintaining its image more than I would see as healthy (optics is important, but it shouldn't become EA's main concern).
FWIW, I'm not a fan of race science posts here either. I agree with you that it's hardly the most relevant topic and it creates a big distraction. However, if the community decided to ban such topics from the forum, I would not want it to do so on the basis of many of the things you've said here.
Additionally, asking a bunch of orgs to issue a statement would cause a bunch of unnecessary politicization and wouldn't really help our reputation. Talking about an issue associates you with that issue, and our critics would continue to use these issues to bludgeon EA because they have no reason not to.
My current take is:
1) We can't really do anything to prevent our opponents using this criticism against us, and "wokeness" is on the decline, so EA will be fine reputationally; we just have to wait this out.
2) There's a tension between focusing on optics and focusing on being excellent. I used to think that there wasn't so much of a tension, but once a community starts focusing too much on optics, it seems to be quite easy for it to continue sliding in that direction, making this more of a tension than one might expect. I believe that the community should focus more on being excellent, and that this will draw the kinds of people we're looking for, even if some fraction of them find certain aspects frustrating.
3) Regarding the reputational impact on particular cause areas, I think we should try really hard to further establish these cause areas as their own entities outside of EA. There are many people who might be interested in these causes but not in EA, and it would also provide some degree of reputational separation.
4) I believe that there are advantages to specialization in that groups focused on individual cause areas can focus more on "getting the thing done", whilst EA can focus more on epistemics and thinking things through from first principles. Insofar as EA makes providing epistemic support one of its goals, it's important for us to try to avoid this kind of internal politicization.
Re: the first footnote: Max Tegmark has a Jewish father according to Wikipedia. I think that makes it genuinely very unlikely that he believes Holocaust denial specifically is OK. That doesn't necessarily mean that he is not racist in any way or that the grant to the Nazi newspaper was just an innocent mistake. But I think we can be fairly sure he is not literally a secret Nazi. Probably what he is guilty of is trusting his right-wing brother, who had written for the fascist paper, too much, and being too quick (initially) to believe that the Nazis were only "right-wing populists".
I'm an Israeli Jew and was initially very upset about the incident. I don't remember the details, but I recall that in the end I was much less sure that there was anything left to be upset about. It took time, but Tegmark did answer many questions posed about this.
The net upvotes on this were, if I recall correctly, significantly higher than they are at the time of this comment. The downward trend in voting on this post raises some concerns about possible brigading from an outside (or semi-outside) source.
I do remember this post having around 20 net upvotes about a day ago.
But some changes over time can also just be noise (if some people have strong votes). Timezone correlations could also be an explanation (it would not surprise me if the US is more free-speech-leaning than Europe). Or there could be changes in the way the article gets found by different people. Or people change their vote after they changed their mind about the article. Or the article gets posted in a Discord channel, without any intentions or instructions of brigading. Of course it's still possible that the vote changes have some sketchy origin, and I am not against the forum moderators investigating these patterns.
This post is on a controversial topic, so lots of votes in both directions are to be expected.
Noise is certainly a viable alternative explanation, which is why I limited myself to "raises some concerns about possible brigading."[1]
I don't think a mod investigation would be a good use of time here. The mods have pointed out a past influx of new accounts triggered by this topic being hot, along with possible non-representative voting patterns on this topic before. In contrast to events at the time of the Bostrom affair, it would be much harder to rule in / rule out irregular voting with confidence where the activity volume is much smaller.
However, since people do cite Forum upvotes/downvotes as evidence of broader EA sentiment (whether justified or not), I think it's fair to point out vote distortion as a possibility.
I note that there is a range of opinions about what counts as brigading. There are, for instance, places on Reddit where voting in the original subreddit after learning about crossposted content from a different subreddit is counted as brigading. That is not my own view (although I understand why Reddit communities have operationalized it that way for administrability reasons). However, I do think it is possible for brigading to occur without specific intent or instruction. In particular, people whose involvement with the Forum is limited to threads on their pet issue, and who are otherwise uninvolved with EA, should not be voting in my opinion.
The backlink-checker doesn't show anything of the sort, but I think it doesn't work for Discord or the big social media websites.
That's a useful tool; thanks for sharing. That being said, I think the absence of evidence from that source is fairly weak evidence against a brigading hypothesis if Discord and big social media sites are excluded from its scope. Those are some of the primary means by which I would predict brigading to occur (conditional on it actually occurring). Based on past behavior, I believe the base rate of brigading on race-science posts is fairly significant. So this evidence does not move the needle very much for me.
To clarify my reason for concern: I think there is good reason to suspect brigading when there is a "late" voting bump that moves considerably in one direction or the other. We saw that with one of the race-science posts for which there was evidence of an exterior link driving the traffic. Unfortunately, the Wayback Machine's captures are all on August 1, and so I have only my (not reliable) memory of where the net karma was during this post's history.
Without better data, the best I think we can do in terms of outside influence is "maybe." For instance, I'd update more on knowing the timing of votes, the vote patterns for medium+ karma/engagement accounts vs. new or intermittent ones, whether there were votes from any account that tends to show up and vote when a small set of issues is discussed, etc. In light of the maybe, I feel there's value in flagging the possibility for the reader who may not be aware of the broader context.
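For what it's worth, here's a rough sketch of the kind of check I have in mind -- entirely hypothetical, since the vote-log fields, thresholds, and sample data are invented, and only the mods have access to real vote logs:

```python
# Hypothetical sketch of the vote-pattern check described above. The schema,
# thresholds, and cutoffs are invented for illustration only.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Vote:
    timestamp: datetime
    voter_karma: int   # karma of the voting account
    direction: int     # +1 upvote, -1 downvote

def flag_possible_brigading(votes: list[Vote], post_time: datetime,
                            late_after_hours: int = 48,
                            low_karma_cutoff: int = 100) -> bool:
    """Flag a 'late' voting bump dominated by low-karma accounts moving one way."""
    late = [v for v in votes
            if v.timestamp > post_time + timedelta(hours=late_after_hours)]
    if len(late) < 10:
        return False  # too little late activity to say anything
    low_karma_share = sum(v.voter_karma < low_karma_cutoff for v in late) / len(late)
    one_sidedness = abs(sum(v.direction for v in late)) / len(late)
    return low_karma_share > 0.6 and one_sidedness > 0.8
```

Even a crude heuristic like this would only ever say "maybe", which is the point: timing and account-level patterns are suggestive, not conclusive.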
IMO this post violates its own proposed rule of avoiding discussion of race science on the EA forum.
Similarly, there is some tension between the ideas "EA should publicly disavow race science" and "EA should never discuss race science". Normally taking stances invites discussion.
Race science is pseudoscience, and is widely recognised as such by the scientific community. This is why I roll my eyes when EAs think of themselves as elite or smarter than average. There are, fortunately or unfortunately, anti-intellectual currents within this movement, and race science isn't the only pseudoscientific inclination, in my opinion and, as I have learned, in the opinion of a few others in this movement.
Unlike you, however, I am actually grateful that EA's anti-science streak is so nakedly visible, because it is valuable information for outsiders and insiders to know. Knowledge of EAs embracing race science should inform the public how seriously to take this movement, and can only help weed out the good parts of EA from the bad.
We shouldn't mask the shortcomings of EA to make it look like a better movement than it actually is.