The post you linked to from Will MacAskill (“The history of the term ‘effective altruism’” from 2014) doesn’t reference the Rationality community (and the other links you included aren’t to posts or pages from Will or Toby; they’re by Jacy Reese Anthis or are wiki-style pages).
Do you have examples or links to talks or posts on EA history from Toby and Will that do discuss the Rationality community? (I’d be curious to read them. Thanks!)
Karma is not a straightforward signal of the value of contributions
This statement, and the idea of karma as the decentralized solution to the problems OP describes, feels overconfident to me. To reference this comment: I would also push back on the claim that karma isn’t subject to social desirability bias (for example, someone sees that a post already has relatively high karma, so they’re more inclined to upvote it knowing that others on the Forum or in the EA community have, even if they haven’t, say, read the whole post).
I would argue that karma isn’t a straightforward or infallible signal of “bad” or “good” contributions. As those working on the Forum have discussed in the past, karma can overrate certain topics: it can signal interest from a large fraction of the community, or reward “lowest-common-denominator” posts, rather than tracking the value or quality of a contribution. As a current Forum staff member put it, “the karma system is designed to show people posts which the Forum community judges as valuable for Forum readers.”
I would note, though, that karma also does not straightforwardly represent the opinions of the Forum community as a whole about what’s valuable. Recent data from the 2023 EA Forum user survey shows that a raw estimate of 46.5% of those surveyed (a weighted estimate of 70.9%) upvoted or downvoted a post or comment. Of 13.7k distinct users in a year, 4.4k are distinct commenters, and only 171 are distinct post authors. Engagement across users is “quite unequal,” and a small number of users produce an outsized share of comments, posts, and karma. Weighted upvotes and downvotes also mean that certain users have more influence on karma than others.
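To make that last point concrete, here’s a toy sketch of how weighted voting can let a few high-weight voters outweigh a larger number of low-weight ones. This is a hypothetical scheme with made-up weights, not the Forum’s actual algorithm (in practice, vote weight depends on things like a voter’s karma and whether they use a strong vote):

```python
# Illustrative sketch only: a toy vote-weighting scheme with hypothetical
# weights, not the EA Forum's actual karma algorithm.

def karma_score(votes):
    """Sum signed, weighted votes given as (direction, weight) pairs."""
    return sum(direction * weight for direction, weight in votes)

# Three high-weight voters upvoting...
established_upvotes = [(+1, 7), (+1, 7), (+1, 7)]
# ...outweigh five low-weight voters downvoting.
newer_downvotes = [(-1, 1)] * 5

print(karma_score(established_upvotes + newer_downvotes))  # 21 - 5 = 16
```

Here the post ends up solidly positive even though more individual users voted it down than up, which is the sense in which karma can diverge from “what most Forum users think.”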
I appreciate the karma system and its benefits (of which there are several!), and maybe your argument is that more people should vote and contribute to it. I just wanted to point out how karma currently seems to function, and the ways it may not directly track value, which brings me to my next point…
Karma seems unlikely to address the concerns the OP describes
Without making a claim for or against the OP’s proposed solutions, I’m unsurprised that they propose a centralized approach. One argument against relying on a mechanism like karma, particularly for discussions of race on the Forum, is that it hasn’t so far upheld the values or conditions I think the OP is referencing and advocating for (like not losing the potential involvement of people who are alienated by race science, fostering broader intellectual diversity, and balancing the implications of truth-seeking against other values).
To give an example: I heard from six separate people involved in the EA community that they felt alienated by the discussions around Manifest on the Forum and chose not to engage or participate (and, for a few of them, that this was close to a last straw for staying involved in EA at all). The costs and personal toll of engaging felt too high, so they didn’t add their votes or voices to the discussion. I’ve heard of this dynamic playing out in various race-related discussions on the Forum over the past few years, and I suspect it leads to some perspectives being more represented on the Forum than others (even if views are more balanced in the EA community or movement as a whole). In these situations, the high karma of some topically related comments or posts in fact seemed to exacerbate some of the problems the OP describes.
I respect and agree with wanting to maintain a community that values epistemic integrity. Maybe you think the costs of race science discussions on the Forum aren’t high enough to justify banning the topic, which is an argument one could make. I would be curious what other ideas or proposals you have for addressing some of the dynamics the OP describes, or to hear your thoughts on the tradeoffs between allowing/encouraging discussions of race science in EA-funded spaces and the effects that can have on the community or the movement.